diff --git a/README.md b/README.md
index f56e441..c59df7e 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,184 @@
-# PointCloud-C
-Benchmarking and Analyzing Point Cloud Robustness under Corruptions
+# PointCloud-C
+
+**Benchmarking and Analyzing Point Cloud Robustness under Corruptions**
+
+Jiawei Ren, Lingdong Kong, Liang Pan, Ziwei Liu
+
+S-Lab, Nanyang Technological University
+
+## About
+
+PointCloud-C is the first test suite for point cloud robustness analysis under corruptions. It includes two sets: [ModelNet-C](https://arxiv.org/abs/2202.03377) (ICML'22) for point cloud classification and [ShapeNet-C]() (arXiv'22) for part segmentation.
+
+
+*Fig. Examples of point cloud corruptions in PointCloud-C.*
+
+Visit our project page to explore more details.
+
+
+## Updates
+
+- \[2022.06\] - PointCloud-C is now live on [Papers with Code](https://paperswithcode.com/dataset/pointcloud-c). Join the benchmark today!
+- \[2022.06\] - The 1st PointCloud-C challenge will be hosted in conjunction with the ECCV'22 [SenseHuman](https://sense-human.github.io/) workshop.
+- \[2022.06\] - We are organizing the 1st PointCloud-C challenge! Click [here](https://pointcloud-c.github.io/competition.html) to explore the competition details.
+- \[2022.05\] - ModelNet-C is accepted to ICML 2022. Click [here](https://arxiv.org/abs/2202.03377) to check it out!
+
+
+## Overview
+
+- [Data Preparation](docs/DATA_PREPARE.md)
+- [Getting Started](docs/GET_STARTED.md)
+- [Benchmark Results](#benchmark-results)
+- [Evaluation](#evaluation)
+- [Customize Evaluation](#customize-evaluation)
+- [Build PointCloud-C](#build-pointcloud-c)
+- [TODO List](#todo-list)
+- [License](#license)
+- [Acknowledgement](#acknowledgement)
+- [Citation](#citation)
+
+
+## Data Preparation
+Please refer to [DATA_PREPARE.md](docs/DATA_PREPARE.md) for details on preparing the ModelNet-C and ShapeNet-C datasets.
+
+
+## Getting Started
+Please refer to [GET_STARTED.md](docs/GET_STARTED.md) for more details on using this codebase.
+
+
+## Benchmark Results
+
+#### ModelNet-C (Classification)
+
+| Method | Reference | Standalone | mCE | Clean OA |
+| --------------- | ---------------------------------------------------------- | :--------: | :---: | :------: |
+| DGCNN | [Wang et al.](https://arxiv.org/abs/1801.07829) | Yes | 1.000 | 0.926 |
+| PointNet | [Qi et al.](https://arxiv.org/abs/1612.00593) | Yes | 1.422 | 0.907 |
+| PointNet++ | [Qi et al.](https://arxiv.org/abs/1706.02413) | Yes | 1.072 | 0.930 |
+| RSCNN | [Liu et al.](https://arxiv.org/abs/1904.07601) | Yes | 1.130 | 0.923 |
+| SimpleView | [Goyal et al.](https://arxiv.org/abs/2106.05304) | Yes | 1.047 | 0.939 |
+| GDANet | [Xu et al.](https://arxiv.org/abs/2012.10921) | Yes | 0.892 | 0.934 |
+| CurveNet | [Xiang et al.](https://arxiv.org/abs/2105.01288) | Yes | 0.927 | 0.938 |
+| PAConv | [Xu et al.](https://arxiv.org/abs/2103.14635) | Yes | 1.104 | 0.936 |
+| PCT | [Guo et al.](https://arxiv.org/abs/2012.09688) | Yes | 0.925 | 0.930 |
+| RPC | [Ren et al.](https://arxiv.org/abs/2202.03377) | Yes | 0.863 | 0.930 |
+| DGCNN+PointWOLF | [Kim et al.](https://arxiv.org/abs/2110.05379) | No | 0.814 | 0.926 |
+| DGCNN+RSMix | [Lee et al.](https://arxiv.org/abs/2102.01929) | No | 0.745 | 0.930 |
+| DGCNN+WOLFMix | [Ren et al.](https://arxiv.org/abs/2202.03377) | No | 0.590 | 0.932 |
+| GDANet+WOLFMix | [Ren et al.](https://arxiv.org/abs/2202.03377) | No | 0.571 | 0.934 |
+
+#### ShapeNet-C (Part Segmentation)
+
+| Method | Reference | Standalone | mCE | mRCE | mIoU |
+| ----------------- | ---------------------------------------------------------- | :--------: | :---: | :------: | :---: |
+| DGCNN | [Wang et al.](https://arxiv.org/abs/1801.07829) | Yes | 1.000 | 1.000 | 0.852 |
+| PointNet | [Qi et al.](https://arxiv.org/abs/1612.00593) | Yes | 1.178 | 1.056 | 0.833 |
+| PointNet++ | [Qi et al.](https://arxiv.org/abs/1706.02413) | Yes | 1.112 | 1.850 | 0.857 |
+| OcCo-DGCNN | [Wang et al.](https://arxiv.org/abs/2010.01089) | No | 0.977 | 0.804 | 0.851 |
+| OcCo-PointNet | [Wang et al.](https://arxiv.org/abs/2010.01089) | No | 1.130 | 0.937 | 0.832 |
+| OcCo-PCN | [Wang et al.](https://arxiv.org/abs/2010.01089) | No | 1.173 | 0.882 | 0.815 |
+| GDANet | [Xu et al.](https://arxiv.org/abs/2012.10921) | Yes | 0.923 | 0.785 | 0.857 |
+| PAConv | [Xu et al.](https://arxiv.org/abs/2103.14635) | Yes | 0.927 | 0.848 | 0.859 |
+| PointTransformers | [Zhao et al.](https://arxiv.org/abs/2012.09164) | Yes | 1.049 | 0.933 | 0.840 |
+| PointMLP | [Ma et al.](https://arxiv.org/abs/2202.07123) | Yes | 0.977 | 0.810 | 0.853 |
+| PointBERT | [Yu et al.](https://arxiv.org/abs/2111.14819) | Yes | 1.033 | 0.895 | 0.855 |
+| PointMAE | [Pang et al.](https://arxiv.org/abs/2203.06604) | Yes | 0.927 | 0.703 | 0.860 |
+
+*Note: Standalone indicates whether the method is a standalone architecture or a combination with augmentation or pretraining.
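+
+For reference, CE for a corruption is the model's error rate normalized by the DGCNN baseline's error rate on that corruption, and mCE averages CE over all corruptions (see `pointcloudc_utils/pointcloudc_utils/eval.py`). A minimal sketch of the computation, assuming you already have your model's OA per corruption averaged over the five severity levels:
+
+```python
+# Minimal sketch: mCE from per-corruption OA, normalized by the
+# severity-averaged DGCNN baseline OA values used in the eval utility.
+DGCNN_OA = {
+    'scale': 0.906, 'jitter': 0.684, 'rotate': 0.785,
+    'dropout_global': 0.752, 'dropout_local': 0.793,
+    'add_global': 0.705, 'add_local': 0.725,
+}
+
+def mean_corruption_error(model_oa):
+    """model_oa: dict mapping corruption type -> your model's OA."""
+    ces = [(1 - model_oa[c]) / (1 - DGCNN_OA[c]) for c in DGCNN_OA]
+    return sum(ces) / len(ces)
+```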
+
+
+## Evaluation
+Evaluation commands are provided in [EVALUATE.md](docs/EVALUATE.md).
+
+
+## Customize Evaluation
+We have provided evaluation utilities to help you evaluate on ModelNet-C using your own codebase.
+Please follow [CUSTOMIZE.md](docs/CUSTOMIZE.md).
+
+
+## Build PointCloud-C
+You can generate your own "PointCloud-C" corruption sets! Follow the instructions in [GENERATE.md](docs/GENERATE.md).
+
+
+## TODO List
+- [x] Initial release.
+- [x] Add license. See [here](#license) for more details.
+- [x] Release test sets. Download ModelNet-C and ShapeNet-C from our project page.
+- [x] Add evaluation scripts for classification models.
+- [ ] Add evaluation scripts for part segmentation models.
+- [ ] Clean and retouch codebase.
+
+
+## License
+
+
+This work is under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
+
+
+## Acknowledgement
+
+We acknowledge the use of the following public resources during the course of this work:
+[SimpleView](https://github.com/princeton-vl/SimpleView),
+[PCT](https://github.com/Strawberry-Eat-Mango/PCT_Pytorch),
+[GDANet](https://github.com/mutianxu/GDANet),
+[CurveNet](https://github.com/tiangexiang/CurveNet),
+[PAConv](https://github.com/CVMI-Lab/PAConv),
+[RSMix](https://github.com/dogyoonlee/RSMix),
+[PointWOLF](https://github.com/mlvlab/PointWOLF),
+[PointTransformers](https://github.com/qq456cvb/Point-Transformers),
+[OcCo](https://github.com/hansen7/OcCo),
+[PointMLP](https://github.com/ma-xu/pointMLP-pytorch),
+[PointBERT](https://github.com/lulutang0608/Point-BERT),
+and [PointMAE](https://github.com/Pang-Yatian/Point-MAE).
+
+
+
+## Citation
+
+If you find this work helpful, please kindly consider citing our papers:
+
+```bibtex
+@ARTICLE{ren2022pointcloud-c,
+ title={Benchmarking and Analyzing Point Cloud Robustness under Corruptions},
+ author={Jiawei Ren and Lingdong Kong and Liang Pan and Ziwei Liu},
+ journal={arXiv:220x.xxxxx},
+ year={2022}
+}
+```
+
+```bibtex
+@ARTICLE{ren2022modelnet-c,
+ title={Benchmarking and Analyzing Point Cloud Classification under Corruptions},
+ author={Jiawei Ren and Liang Pan and Ziwei Liu},
+ journal={International Conference on Machine Learning (ICML)},
+ year={2022}
+}
+```
+
diff --git a/build/corrupt.py b/build/corrupt.py
new file mode 100644
index 0000000..20ca985
--- /dev/null
+++ b/build/corrupt.py
@@ -0,0 +1,90 @@
+import os
+import glob
+import h5py
+import numpy as np
+from corrupt_utils import corrupt_scale, corrupt_jitter, corrupt_rotate, corrupt_dropout_global, corrupt_dropout_local, \
+ corrupt_add_global, corrupt_add_local
+
+NUM_POINTS = 1024
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+DATA_DIR = os.path.join(BASE_DIR, '../data')
+
+np.random.seed(0)
+
+corruptions = {
+ 'clean': None,
+ 'scale': corrupt_scale,
+ 'jitter': corrupt_jitter,
+ 'rotate': corrupt_rotate,
+ 'dropout_global': corrupt_dropout_global,
+ 'dropout_local': corrupt_dropout_local,
+ 'add_global': corrupt_add_global,
+ 'add_local': corrupt_add_local,
+}
+
+
+def download():
+ if not os.path.exists(DATA_DIR):
+ os.mkdir(DATA_DIR)
+ if not os.path.exists(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048')):
+ www = 'https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip'
+ zipfile = os.path.basename(www)
+ os.system('wget %s --no-check-certificate; unzip %s' % (www, zipfile))
+ os.system('mv %s %s' % (zipfile[:-4], DATA_DIR))
+ os.system('rm %s' % (zipfile))
+
+
+def load_data(partition):
+ download()
+ all_data = []
+ all_label = []
+ for h5_name in glob.glob(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048', 'ply_data_%s*.h5' % partition)):
+ f = h5py.File(h5_name, 'r')
+ data = f['data'][:].astype('float32')
+ label = f['label'][:].astype('int64')
+ f.close()
+ all_data.append(data)
+ all_label.append(label)
+ all_data = np.concatenate(all_data, axis=0)
+ all_label = np.concatenate(all_label, axis=0)
+ all_data = all_data[:, :NUM_POINTS, :]
+ return all_data, all_label
+
+
+def save_data(all_data, all_label, corruption_type, level):
+ if not os.path.exists(os.path.join(DATA_DIR, 'modelnet_c')):
+ os.makedirs(os.path.join(DATA_DIR, 'modelnet_c'))
+ if corruption_type == 'clean':
+ h5_name = os.path.join(DATA_DIR, 'modelnet_c', '{}.h5'.format(corruption_type))
+ else:
+ h5_name = os.path.join(DATA_DIR, 'modelnet_c', '{}_{}.h5'.format(corruption_type, level))
+ f = h5py.File(h5_name, 'w')
+ f.create_dataset('data', data=all_data)
+ f.create_dataset('label', data=all_label)
+ f.close()
+ print("{} finished".format(h5_name))
+
+
+def corrupt_data(all_data, corruption_type, level):
+    if corruption_type == 'clean':
+        return all_data
+    corrupted_data = []
+    for pcd in all_data:
+        corrupted_pcd = corruptions[corruption_type](pcd, level)
+ corrupted_data.append(corrupted_pcd)
+ corrupted_data = np.stack(corrupted_data, axis=0)
+ return corrupted_data
+
+
+def main():
+ all_data, all_label = load_data('test')
+ for corruption_type in corruptions:
+ for level in range(5):
+ corrupted_data = corrupt_data(all_data, corruption_type, level)
+ save_data(corrupted_data, all_label, corruption_type, level)
+ if corruption_type == 'clean':
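+                # the 'clean' split has no severity levels, so save it once and stop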
+ break
+
+
+if __name__ == '__main__':
+ main()
diff --git a/build/corrupt_utils.py b/build/corrupt_utils.py
new file mode 100644
index 0000000..2d52c5f
--- /dev/null
+++ b/build/corrupt_utils.py
@@ -0,0 +1,180 @@
+import numpy as np
+import math
+
+
+def _pc_normalize(pc):
+ """
+ Normalize the point cloud to a unit sphere
+ :param pc: input point cloud
+ :return: normalized point cloud
+ """
+ centroid = np.mean(pc, axis=0)
+ pc = pc - centroid
+ m = np.max(np.sqrt(np.sum(pc ** 2, axis=1)))
+ pc = pc / m
+ return pc
+
+
+def _shuffle_pointcloud(pcd):
+ """
+ Shuffle the points
+ :param pcd: input point cloud
+ :return: shuffled point clouds
+ """
+ idx = np.random.rand(pcd.shape[0], 1).argsort(axis=0)
+ return np.take_along_axis(pcd, idx, axis=0)
+
+
+def _gen_random_cluster_sizes(num_clusters, total_cluster_size):
+ """
+ Generate random cluster sizes
+ :param num_clusters: number of clusters
+ :param total_cluster_size: total size of all clusters
+ :return: a list of each cluster size
+ """
+ rand_list = np.random.randint(num_clusters, size=total_cluster_size)
+ cluster_size_list = [sum(rand_list == i) for i in range(num_clusters)]
+ return cluster_size_list
+
+
+def _sample_points_inside_unit_sphere(number_of_particles):
+ """
+ Uniformly sample points in a unit sphere
+ :param number_of_particles: number of points to sample
+ :return: sampled points
+ """
+ radius = np.random.uniform(0.0, 1.0, (number_of_particles, 1))
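+    # cube-root the uniform sample so radii are uniform over the ball's volume (inverse-CDF sampling)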
+ radius = np.power(radius, 1 / 3)
+ costheta = np.random.uniform(-1.0, 1.0, (number_of_particles, 1))
+ theta = np.arccos(costheta)
+ phi = np.random.uniform(0, 2 * np.pi, (number_of_particles, 1))
+ x = radius * np.sin(theta) * np.cos(phi)
+ y = radius * np.sin(theta) * np.sin(phi)
+ z = radius * np.cos(theta)
+ return np.concatenate([x, y, z], axis=1)
+
+
+def corrupt_scale(pointcloud, level):
+ """
+ Corrupt the scale of input point cloud
+ :param pointcloud: input point cloud
+ :param level: severity level
+ :return: corrupted point cloud
+ """
+ s = [1.6, 1.7, 1.8, 1.9, 2.0][level]
+ xyz = np.random.uniform(low=1. / s, high=s, size=[3])
+ return _pc_normalize(np.multiply(pointcloud, xyz).astype('float32'))
+
+
+def corrupt_jitter(pointcloud, level):
+ """
+ Jitter the input point cloud
+ :param pointcloud: input point cloud
+ :param level: severity level
+ :return: corrupted point cloud
+ """
+ sigma = 0.01 * (level + 1)
+ N, C = pointcloud.shape
+ pointcloud = pointcloud + sigma * np.random.randn(N, C)
+ return pointcloud
+
+
+def corrupt_rotate(pointcloud, level):
+ """
+ Randomly rotate the point cloud
+ :param pointcloud: input point cloud
+ :param level: severity level
+ :return: corrupted point cloud
+ """
+ angle_clip = math.pi / 6
+ angle_clip = angle_clip / 5 * (level + 1)
+ angles = np.random.uniform(-angle_clip, angle_clip, size=(3))
+ Rx = np.array([[1, 0, 0],
+ [0, np.cos(angles[0]), -np.sin(angles[0])],
+ [0, np.sin(angles[0]), np.cos(angles[0])]])
+ Ry = np.array([[np.cos(angles[1]), 0, np.sin(angles[1])],
+ [0, 1, 0],
+ [-np.sin(angles[1]), 0, np.cos(angles[1])]])
+ Rz = np.array([[np.cos(angles[2]), -np.sin(angles[2]), 0],
+ [np.sin(angles[2]), np.cos(angles[2]), 0],
+ [0, 0, 1]])
+ R = np.dot(Rz, np.dot(Ry, Rx))
+ return np.dot(pointcloud, R)
+
+
+def corrupt_dropout_global(pointcloud, level):
+ """
+ Drop random points globally
+ :param pointcloud: input point cloud
+ :param level: severity level
+ :return: corrupted point cloud
+ """
+ drop_rate = [0.25, 0.375, 0.5, 0.625, 0.75][level]
+ num_points = pointcloud.shape[0]
+ pointcloud = _shuffle_pointcloud(pointcloud)
+ pointcloud = pointcloud[:int(num_points * (1 - drop_rate)), :]
+ return pointcloud
+
+
+def corrupt_dropout_local(pointcloud, level):
+ """
+ Randomly drop local clusters
+ :param pointcloud: input point cloud
+ :param level: severity level
+ :return: corrupted point cloud
+ """
+ num_points = pointcloud.shape[0]
+ total_cluster_size = 100 * (level + 1)
+ num_clusters = np.random.randint(1, 8)
+ cluster_size_list = _gen_random_cluster_sizes(num_clusters, total_cluster_size)
+ for i in range(num_clusters):
+ K = cluster_size_list[i]
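+        # pick a random seed point via shuffling, then drop its K nearest
+        # neighbors: sort points farthest-first from the seed and cut the tail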
+ pointcloud = _shuffle_pointcloud(pointcloud)
+ dist = np.sum((pointcloud - pointcloud[:1, :]) ** 2, axis=1, keepdims=True)
+ idx = dist.argsort(axis=0)[::-1, :]
+ pointcloud = np.take_along_axis(pointcloud, idx, axis=0)
+ num_points -= K
+ pointcloud = pointcloud[:num_points, :]
+ return pointcloud
+
+
+def corrupt_add_global(pointcloud, level):
+ """
+ Add random points globally
+ :param pointcloud: input point cloud
+ :param level: severity level
+ :return: corrupted point cloud
+ """
+ npoints = 10 * (level + 1)
+ additional_pointcloud = _sample_points_inside_unit_sphere(npoints)
+ pointcloud = np.concatenate([pointcloud, additional_pointcloud[:npoints]], axis=0)
+ return pointcloud
+
+
+def corrupt_add_local(pointcloud, level):
+ """
+ Randomly add local clusters to a point cloud
+ :param pointcloud: input point cloud
+ :param level: severity level
+ :return: corrupted point cloud
+ """
+ num_points = pointcloud.shape[0]
+ total_cluster_size = 100 * (level + 1)
+ num_clusters = np.random.randint(1, 8)
+ cluster_size_list = _gen_random_cluster_sizes(num_clusters, total_cluster_size)
+ pointcloud = _shuffle_pointcloud(pointcloud)
+ add_pcd = np.zeros_like(pointcloud)
+ num_added = 0
+ for i in range(num_clusters):
+ K = cluster_size_list[i]
+ sigma = np.random.uniform(0.075, 0.125)
+ add_pcd[num_added:num_added + K, :] = np.copy(pointcloud[i:i + 1, :])
+ add_pcd[num_added:num_added + K, :] = add_pcd[num_added:num_added + K, :] + sigma * np.random.randn(
+ *add_pcd[num_added:num_added + K, :].shape)
+ num_added += K
+ assert num_added == total_cluster_size
+ dist = np.sum(add_pcd ** 2, axis=1, keepdims=True).repeat(3, axis=1)
+    add_pcd[dist > 1] = add_pcd[dist > 1] / dist[dist > 1]  # points with squared norm > 1 are pulled back inside the unit sphere
+ pointcloud = np.concatenate([pointcloud, add_pcd], axis=0)
+ pointcloud = pointcloud[:num_points + total_cluster_size]
+ return pointcloud
diff --git a/docs/CUSTOMIZE.md b/docs/CUSTOMIZE.md
new file mode 100644
index 0000000..119ae78
--- /dev/null
+++ b/docs/CUSTOMIZE.md
@@ -0,0 +1,94 @@
+
+
+
+# Customize Evaluation for Your Own Codebase
+
+We have designed utilities to make evaluation on ModelNet-C easy.
+You may already have an evaluation code for the standard ModelNet.
+It takes three simple steps to make it work on ModelNet-C.
+
+### Step 1: Install and Import ModelNet-C Utility
+Install our utility by:
+```bash
+git clone https://github.com/jiawei-ren/ModelNet-C.git
+cd ModelNet-C
+pip install -e modelnetc_utils
+```
+Import our utility in your evaluation script for the standard ModelNet40:
+```python
+from modelnetc_utils import eval_corrupt_wrapper, ModelNetC
+```
+
+### Step 2: Modify the Test Function
+The test function on the standard ModelNet should look like:
+```python
+def test(args, model):
+ '''
+ Arguments:
+ args: necessary arguments like batch size and number of workers
+ model: the model to be tested
+ Return:
+ overall_accuracy: overall accuracy (OA)
+ '''
+ # Create test loader
+ test_loader = DataLoader(ModelNet40(...), ...)
+
+ # Run model on test loader to get the results
+ overall_accuracy = run_model_on_test_loader(model, test_loader)
+
+ # return the overall accuracy (OA)
+ return overall_accuracy
+```
+where `run_model_on_test_loader` is usually a for-loop that iterates through all test batches.
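+
+For concreteness, here is a minimal sketch of such a loop, assuming a PyTorch classifier that takes `(batch, 3, num_points)` inputs and returns class logits (all names are illustrative):
+
+```python
+import numpy as np
+import torch
+import sklearn.metrics as metrics
+
+def run_model_on_test_loader(model, test_loader):
+    # Illustrative sketch: accumulate predictions batch by batch,
+    # then report overall accuracy (OA).
+    device = next(model.parameters()).device
+    all_true, all_pred = [], []
+    with torch.no_grad():
+        for data, label in test_loader:
+            data, label = data.to(device), label.to(device).squeeze()
+            logits = model(data.permute(0, 2, 1))  # (B, N, 3) -> (B, 3, N)
+            all_true.append(label.cpu().numpy())
+            all_pred.append(logits.max(dim=1)[1].cpu().numpy())
+    return metrics.accuracy_score(np.concatenate(all_true), np.concatenate(all_pred))
+```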
+
+To test on ModelNet-C, we need an additional argument `split` to indicate the type of corruption. The modified test function should look like:
+```python
+def test_corrupt(args, model, split):
+ '''
+ Arguments:
+ args: necessary arguments like batch size and number of workers
+ model: the model to be tested
+ split: corruption type
+ Return:
+ overall_accuracy: overall accuracy (OA)
+ '''
+ # Replace ModelNet40 by ModelNetC
+ test_loader = DataLoader(ModelNetC(split=split), ...)
+
+ # Remains unchanged
+ overall_accuracy = run_model_on_test_loader(model, test_loader)
+
+ # Remains unchanged
+ return overall_accuracy
+```
+
+### Step 3: Call Our Wrapper Function
+On the standard ModelNet40, the test function is typically called as:
+```python
+overall_accuracy = test(args, model)
+print("OA: {}".format(overall_accuracy))
+```
+For ModelNet-C, we provide a wrapper function to repeatedly call the test function for every corruption type and aggregate the results.
+You can conveniently use the wrapper function as follows:
+```python
+eval_corrupt_wrapper(model, test_corrupt, {'args': args})
+```
+
+### Example
+An example evaluation code for ModelNet-C is provided in [GDANet/main_cls.py](https://github.com/jiawei-ren/ModelNet-C/blob/main/GDANet/main_cls.py#L312).
+
+Example output:
+```bash
+# result on clean test set
+{'acc': 0.9359805510534847, 'avg_per_class_acc': 0.9017848837209301, 'corruption': 'clean'}
+{'OA': 0.9359805510534847, 'corruption': 'clean', 'level': 'Overall'}
+
+# result on scale corrupted test set
+{'acc': 0.9258508914100486, 'avg_per_class_acc': 0.8890872093023254, 'corruption': 'scale', 'level': 0}
+...
+{'acc': 0.9047811993517018, 'avg_per_class_acc': 0.8646802325581395, 'corruption': 'scale', 'level': 4}
+{'CE': 0.9008931342460089, 'OA': 0.9153160453808752, 'RCE': 1.0332252836304725, 'corruption': 'scale', 'level': 'Overall'}
+...
+# final result
+{'RmCE': 1.207452747764862, 'mCE': 1.1023796740168037, 'mOA': 0.7303542486686734}
+```
diff --git a/docs/DATA_PREPARE.md b/docs/DATA_PREPARE.md
new file mode 100644
index 0000000..de1d893
--- /dev/null
+++ b/docs/DATA_PREPARE.md
@@ -0,0 +1,55 @@
+
+
+# Prepare Data
+
+### Classification
+Download ModelNet-C by:
+```shell
+cd data
+gdown https://drive.google.com/uc?id=1KE6MmXMtfu_mgxg4qLPdEwVD5As8B0rm
+unzip modelnet_c.zip && cd ..
+```
+Alternatively, you may download ModelNet-C from our project page.
+
+
+### Part Segmentation
+Download ShapeNet-C by:
+```shell
+cd data
+gdown https://drive.google.com/uc?id=
+unzip shapenet_c.zip && cd ..
+```
+Alternatively, you may download ShapeNet-C from our project page.
+
+
+### Dataset Structure
+```
+root
+ ├── dataset_c
+ │    ├── add_global_0.h5
+ │    ├── ...
+ │    ├── add_local_0.h5
+ │    ├── ...
+ │    ├── dropout_global_0.h5
+ │    ├── ...
+ │    ├── dropout_local_0.h5
+ │    ├── ...
+ │    ├── jitter_0.h5
+ │    ├── ...
+ │    ├── rotate_0.h5
+ │    ├── ...
+ │    ├── scale_0.h5
+ │    ├── ...
+ │    └── clean.h5
+ └── README.txt
+```
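+
+Each `.h5` file stores two datasets, `data` and `label` (see `build/corrupt.py`). A quick sanity check on a downloaded file (the path below assumes the ModelNet-C layout; adjust it to your setup):
+
+```python
+import h5py
+
+# e.g., severity level 0 of the 'jitter' corruption
+with h5py.File('data/modelnet_c/jitter_0.h5', 'r') as f:
+    data = f['data'][:]    # point clouds: (num_samples, num_points, 3)
+    label = f['label'][:]  # class labels: (num_samples, 1)
+    print(data.shape, label.shape)
+```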
+
+
+### License
+
+This benchmark is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:
+
+- That the benchmark comes "AS IS", without express or implied warranty. Although every effort has been made to ensure accuracy, we do not accept any responsibility for errors or omissions.
+- That you may not use the benchmark or any derivative work for commercial purposes as, for example, licensing or selling the data, or using the data with a purpose to procure a commercial gain.
+- That you include a reference to PointCloud-C (including ModelNet-C, ShapeNet-C, and the specially generated data for academic challenges) in any work that makes use of the benchmark. For research papers, please cite our preferred publications as listed on our webpage.
+
diff --git a/docs/EVALUATE.md b/docs/EVALUATE.md
new file mode 100644
index 0000000..83269ce
--- /dev/null
+++ b/docs/EVALUATE.md
@@ -0,0 +1,86 @@
+
+
+# Evaluation Script
+
+### Outline
+
+- [Classification](#classification)
+- [Part Segmentation](#part-segmentation)
+
+
+### Classification
+
+#### Architecture
+
+- DGCNN
+```shell
+python SimpleView/main.py --entry test_corrupt --model-path pretrained_models/DGCNN.pth --exp-config SimpleView/configs/dgcnn_dgcnn_run_1.yaml
+```
+- PointNet
+```shell
+python SimpleView/main.py --entry test_corrupt --model-path pretrained_models/PointNet.pth --exp-config SimpleView/configs/dgcnn_pointnet_run_1.yaml
+```
+- PointNet++
+```shell
+python SimpleView/main.py --entry test_corrupt --model-path pretrained_models/PointNet2.pth --exp-config SimpleView/configs/dgcnn_pointnet2_run_1.yaml
+```
+- RSCNN
+```shell
+python SimpleView/main.py --entry test_corrupt --model-path pretrained_models/RSCNN.pth --exp-config SimpleView/configs/dgcnn_rscnn_run_1.yaml
+```
+- SimpleView
+```shell
+python SimpleView/main.py --entry test_corrupt --model-path pretrained_models/SimpleView.pth --exp-config SimpleView/configs/dgcnn_simpleview_run_1.yaml
+```
+- PCT
+```shell
+python PCT/main.py --exp_name=test --num_points=1024 --use_sgd=True --eval_corrupt=True --model_path pretrained_models/PCT.t7 --test_batch_size 8 --model PCT
+```
+- GDANet
+```shell
+python GDANet/main_cls.py --eval_corrupt=True --model_path pretrained_models/GDANet.t7
+```
+- PAConv
+```shell
+python PAConv/obj_cls/main.py --config PAConv/obj_cls/config/dgcnn_paconv_test.yaml --model_path pretrained_models/PAConv.t7 --eval_corrupt True
+```
+- CurveNet
+```shell
+python3 CurveNet/core/main_cls.py --exp_name=test --eval_corrupt=True --model_path pretrained_models/CurveNet.t7
+```
+- RPC
+```shell
+python PCT/main.py --exp_name=test --num_points=1024 --use_sgd=True --eval_corrupt=True --model_path pretrained_models/RPC.t7 --test_batch_size 8 --model RPC
+```
+
+#### Augmentation
+
+- DGCNN + PointWOLF
+```shell
+python PointWOLF/main.py --exp_name=test --model=dgcnn --num_points=1024 --k=20 --use_sgd=True --eval_corrupt=True --model_path pretrained_models/DGCNN_PointWOLF.t7
+```
+- DGCNN + RSMix
+```shell
+python PointWOLF/main.py --exp_name=test --model=dgcnn --num_points=1024 --k=20 --use_sgd=True --eval_corrupt=True --model_path pretrained_models/DGCNN_RSMix.t7
+```
+- DGCNN + WOLFMix
+```shell
+python PointWOLF/main.py --exp_name=test --model=dgcnn --num_points=1024 --k=20 --use_sgd=True --eval_corrupt=True --model_path pretrained_models/DGCNN_WOLFMix.t7
+```
+- GDANet + WOLFMix
+```shell
+python GDANet/main_cls.py --eval_corrupt=True --model_path pretrained_models/GDANet_WOLFMix.t7
+```
+- RPC + WOLFMix (final)
+```shell
+python PCT/main.py --exp_name=test --num_points=1024 --use_sgd=True --eval_corrupt=True --model_path pretrained_models/RPC_WOLFMix_final.t7 --test_batch_size 8 --model RPC
+```
+
+
+### Part Segmentation
+
+Coming soon.
+
+
+
+
diff --git a/docs/GENERATE.md b/docs/GENERATE.md
new file mode 100644
index 0000000..827a44b
--- /dev/null
+++ b/docs/GENERATE.md
@@ -0,0 +1,11 @@
+
+
+# Generate Your Own Corruption Sets
+
+You may generate more "PointCloud-C" sets by:
+```shell
+python build/corrupt.py
+```
+
+:warning: Note that the script uses a **different** random seed from the official ModelNet-C and ShapeNet-C.
+One should NOT report results on self-generated corruption sets.
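+
+If you only need a single corruption for a quick experiment, the functions in `build/corrupt_utils.py` can also be applied directly. A small sketch (the input cloud here is a random placeholder; severity levels run from 0 to 4):
+
+```python
+import numpy as np
+from corrupt_utils import corrupt_jitter  # run from within build/
+
+pcd = np.random.rand(1024, 3).astype('float32')  # placeholder point cloud
+corrupted = corrupt_jitter(pcd, level=2)         # severity level in {0, ..., 4}
+print(corrupted.shape)
+```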
diff --git a/docs/GET_STARTED.md b/docs/GET_STARTED.md
new file mode 100644
index 0000000..a00c293
--- /dev/null
+++ b/docs/GET_STARTED.md
@@ -0,0 +1,28 @@
+
+
+# Getting Started
+
+### Clone the GitHub Repo
+```shell
+git clone https://github.com/ldkong1205/PointCloud-C.git
+cd PointCloud-C
+```
+
+### Set Up the Environment
+
+```shell
+conda create --name pointcloud-c python=3.7.5
+conda activate pointcloud-c
+pip install -r requirements.txt
+cd SimpleView/pointnet2_pyt && pip install -e . && cd -
+pip install -e pointcloudc_utils
+```
+
+### Download Pretrained Models
+
+Please download the existing pretrained models by:
+```shell
+gdown https://drive.google.com/uc?id=11RONLZGg0ezxC16n57PiEZouqC5L0b_h
+unzip pretrained_models.zip
+```
+Alternatively, you may download the [pretrained models](https://drive.google.com/file/d/11RONLZGg0ezxC16n57PiEZouqC5L0b_h/view?usp=sharing) manually and extract them under the root directory.
diff --git a/figs/.keep b/figs/.keep
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/figs/.keep
@@ -0,0 +1 @@
+
diff --git a/figs/logo.png b/figs/logo.png
new file mode 100644
index 0000000..cdf8f51
Binary files /dev/null and b/figs/logo.png differ
diff --git a/figs/teaser.png b/figs/teaser.png
new file mode 100644
index 0000000..fe08667
Binary files /dev/null and b/figs/teaser.png differ
diff --git a/pointcloudc_utils/README.md b/pointcloudc_utils/README.md
new file mode 100644
index 0000000..aba59bf
--- /dev/null
+++ b/pointcloudc_utils/README.md
@@ -0,0 +1,20 @@
+
+
+# Utils for Loading and Evaluating PointCloud-C
+
+
+### Install
+```shell
+pip install -e .
+```
+
+
+### Usage
+
+- `eval_corrupt_wrapper`
+
+    Repeats the original test function on every corrupted test set and computes the aggregate metrics (mOA, mCE, RmCE).
+
+- `PointCloudC`
+
+    The PointCloud-C dataset loader. The default path is set to `../../data/pointcloud_c`; please change the path accordingly.
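+
+
+### Example
+
+A minimal usage sketch (here `model`, `args`, and `run_model_on_test_loader` are placeholders for your own model, arguments, and evaluation loop, as described in [docs/CUSTOMIZE.md](../docs/CUSTOMIZE.md)):
+
+```python
+from torch.utils.data import DataLoader
+from pointcloudc_utils import PointCloudC, eval_corrupt_wrapper
+
+def test_corrupt(args, model, split):
+    # Evaluate the model on one corrupted split and return its OA;
+    # run_model_on_test_loader is your own evaluation loop (placeholder).
+    loader = DataLoader(PointCloudC(split=split), batch_size=32, shuffle=False)
+    return {'acc': run_model_on_test_loader(model, loader)}
+
+eval_corrupt_wrapper(model, test_corrupt, {'args': args})
+```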
diff --git a/pointcloudc_utils/pointcloudc_utils/__init__.py b/pointcloudc_utils/pointcloudc_utils/__init__.py
new file mode 100644
index 0000000..85890c3
--- /dev/null
+++ b/pointcloudc_utils/pointcloudc_utils/__init__.py
@@ -0,0 +1,2 @@
+from .dataset import PointCloudC
+from .eval import eval_corrupt_wrapper
\ No newline at end of file
diff --git a/pointcloudc_utils/pointcloudc_utils/dataset.py b/pointcloudc_utils/pointcloudc_utils/dataset.py
new file mode 100644
index 0000000..131db27
--- /dev/null
+++ b/pointcloudc_utils/pointcloudc_utils/dataset.py
@@ -0,0 +1,28 @@
+import os
+import h5py
+from torch.utils.data import Dataset
+
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+DATA_DIR = os.path.join(BASE_DIR, '../../data/pointcloud_c')  # please change the data directory accordingly
+
+
+def load_h5(h5_name):
+ f = h5py.File(h5_name, 'r')
+ data = f['data'][:].astype('float32')
+ label = f['label'][:].astype('int64')
+ f.close()
+ return data, label
+
+
+class PointCloudC(Dataset):
+ def __init__(self, split):
+ h5_path = os.path.join(DATA_DIR, split + '.h5')
+ self.data, self.label = load_h5(h5_path)
+
+ def __getitem__(self, item):
+ pointcloud = self.data[item]
+ label = self.label[item]
+ return pointcloud, label
+
+ def __len__(self):
+ return self.data.shape[0]
diff --git a/pointcloudc_utils/pointcloudc_utils/eval.py b/pointcloudc_utils/pointcloudc_utils/eval.py
new file mode 100644
index 0000000..bd653aa
--- /dev/null
+++ b/pointcloudc_utils/pointcloudc_utils/eval.py
@@ -0,0 +1,71 @@
+import pprint
+
+
+def eval_corrupt_wrapper(model, fn_test_corrupt, args_test_corrupt):
+ """
+ The wrapper helps to repeat the original testing function on all corrupted test sets.
+ It also helps to compute metrics.
+ :param model: model
+ :param fn_test_corrupt: original evaluation function, returns a dict of metrics, e.g., {'acc': 0.93}
+ :param args_test_corrupt: a dict of arguments to fn_test_corrupt, e.g., {'test_loader': loader}
+ :return:
+ """
+ corruptions = [
+ 'clean',
+ 'scale',
+ 'jitter',
+ 'rotate',
+ 'dropout_global',
+ 'dropout_local',
+ 'add_global',
+ 'add_local',
+ ]
+ DGCNN_OA = {
+ 'clean': 0.926,
+ 'scale': 0.906,
+ 'jitter': 0.684,
+ 'rotate': 0.785,
+ 'dropout_global': 0.752,
+ 'dropout_local': 0.793,
+ 'add_global': 0.705,
+ 'add_local': 0.725
+ }
+ OA_clean = None
+ perf_all = {'OA': [], 'CE': [], 'RCE': []}
+ for corruption_type in corruptions:
+ perf_corrupt = {'OA': []}
+ for level in range(5):
+ if corruption_type == 'clean':
+ split = "clean"
+ else:
+ split = corruption_type + '_' + str(level)
+ test_perf = fn_test_corrupt(split=split, model=model, **args_test_corrupt)
+ if not isinstance(test_perf, dict):
+ test_perf = {'acc': test_perf}
+ perf_corrupt['OA'].append(test_perf['acc'])
+ test_perf['corruption'] = corruption_type
+ if corruption_type != 'clean':
+ test_perf['level'] = level
+ pprint.pprint(test_perf, width=200)
+ if corruption_type == 'clean':
+ OA_clean = round(test_perf['acc'], 3)
+ break
+ for k in perf_corrupt:
+ perf_corrupt[k] = sum(perf_corrupt[k]) / len(perf_corrupt[k])
+ perf_corrupt[k] = round(perf_corrupt[k], 3)
+ if corruption_type != 'clean':
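+            # CE: error rate normalized by the DGCNN baseline's error rate;
+            # RCE: accuracy drop from clean, relative to DGCNN's drop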
+ perf_corrupt['CE'] = (1 - perf_corrupt['OA']) / (1 - DGCNN_OA[corruption_type])
+ perf_corrupt['RCE'] = (OA_clean - perf_corrupt['OA']) / (DGCNN_OA['clean'] - DGCNN_OA[corruption_type])
+ for k in perf_all:
+ perf_corrupt[k] = round(perf_corrupt[k], 3)
+ perf_all[k].append(perf_corrupt[k])
+ perf_corrupt['corruption'] = corruption_type
+ perf_corrupt['level'] = 'Overall'
+ pprint.pprint(perf_corrupt, width=200)
+ for k in perf_all:
+ perf_all[k] = sum(perf_all[k]) / len(perf_all[k])
+ perf_all[k] = round(perf_all[k], 3)
+ perf_all['mCE'] = perf_all.pop('CE')
+ perf_all['RmCE'] = perf_all.pop('RCE')
+ perf_all['mOA'] = perf_all.pop('OA')
+ pprint.pprint(perf_all, width=200)
diff --git a/pointcloudc_utils/setup.py b/pointcloudc_utils/setup.py
new file mode 100644
index 0000000..ad3284f
--- /dev/null
+++ b/pointcloudc_utils/setup.py
@@ -0,0 +1,9 @@
+from setuptools import setup, find_packages
+
+VERSION = '0.1.0'
+PACKAGE_NAME = 'pointcloudc-utils'
+
+setup(name=PACKAGE_NAME,
+ version=VERSION,
+ packages=find_packages()
+ )
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..a7a3ce1
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,13 @@
+git+git://github.com/imankgoyal/etw_pytorch_utils.git@v1.1.1#egg=etw_pytorch_utils
+enum34
+future
+h5py==2.10.0
+progressbar2==3.50.0
+tensorboardX==2.0
+-f https://download.pytorch.org/whl/torch_stable.html
+torch==1.4.0+cu100
+-f https://download.pytorch.org/whl/torch_stable.html
+torchvision==0.5.0+cu100
+yacs==0.1.6
+gdown==4.2.0
+scikit-learn==1.0.2
\ No newline at end of file
diff --git a/zoo/CurveNet/README.md b/zoo/CurveNet/README.md
new file mode 100644
index 0000000..c0f7436
--- /dev/null
+++ b/zoo/CurveNet/README.md
@@ -0,0 +1,194 @@
+# CurveNet
+Official implementation of "Walk in the Cloud: Learning Curves for Point Clouds Shape Analysis", ICCV 2021
+
+[](https://paperswithcode.com/sota/3d-point-cloud-classification-on-modelnet40?p=walk-in-the-cloud-learning-curves-for-point)
+[](https://paperswithcode.com/sota/3d-part-segmentation-on-shapenet-part?p=walk-in-the-cloud-learning-curves-for-point)
+
+Paper: https://arxiv.org/abs/2105.01288
+
+
+
+## Requirements
+- Python>=3.7
+- PyTorch>=1.2
+- Packages: glob, h5py, sklearn
+
+## Contents
+- [Point Cloud Classification](#point-cloud-classification)
+- [Point Cloud Part Segmentation](#point-cloud-part-segmentation)
+- [Point Cloud Normal Estimation](#point-cloud-normal-estimation)
+
+**NOTE:** Please change your current directory to ```core/``` first before executing the following commands.
+
+## Point Cloud Classification
+### Data
+
+The ModelNet40 dataset is primarily used for the classification experiments. On your first run, the program will automatically download the data if it is not in ```data/```. Alternatively, you can manually download the [official data](https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip) and unzip it to ```data/```.
+
+Alternatively, you can place your downloaded data anywhere you like, and link the path to ```DATA_DIR``` in ```core/data.py```. Otherwise, the download will still be automatically triggered.
+
+### Train
+
+Train with our default settings (same as in the paper):
+
+```
+python3 main_cls.py --exp_name=curvenet_cls_1
+```
+
+Train with customized settings with the flags: ```--lr```, ```--scheduler```, ```--batch_size```.
+
+Alternatively, you can directly modify ```core/start_cls.sh``` and simply run:
+
+```
+./start_cls.sh
+```
+
+**NOTE:** Our reported model achieves **93.8%/94.2%** accuracy (see sections below). However, due to randomness, reproducing the best result may require multiple training runs. Hence, we also report an average over 5 runs with different random seeds: **93.65%** accuracy.
+
+
+
+### Evaluation
+
+
+Evaluate without voting:
+```
+python3 main_cls.py --exp_name=curvenet_cls_1 --eval=True --model_path=PATH_TO_YOUR_MODEL
+```
+
+Alternatively, you can directly modify ```core/test_cls.sh``` and simply run:
+```
+./test_cls.sh
+```
+
+For voting, we used the ```voting_evaluate_cls.py``` script provided in [RSCNN](https://github.com/Yochengliu/Relation-Shape-CNN). Please refer to their license for usage.
+
+### Evaluation with our pretrained model:
+
+Please download our pretrained model ```cls/``` at [google drive](https://drive.google.com/drive/folders/1kX-zIipyzB0iMaopcijzdTRuHeTzfTSz?usp=sharing).
+
+And then run:
+
+```
+python3 main_cls.py --exp_name=curvenet_cls_pretrained --eval --model_path=PATH_TO_PRETRAINED/cls/models/model.t7
+```
+
+
+## Point Cloud Part Segmentation
+### Data
+
+The ShapeNet Part dataset is primarily used for the part segmentation experiments. On your first run, the program will automatically download the data if it is not in ```data/```. Alternatively, you can manually download the [official data](https://shapenet.cs.stanford.edu/media/shapenet_part_seg_hdf5_data.zip) and unzip it to ```data/```.
+
+Alternatively, you can place your downloaded data anywhere you like, and link the path to ```DATA_DIR``` in ```core/data.py```. Otherwise, the download will still be automatically triggered.
+
+### Train
+
+Train with our default settings (same as in the paper):
+
+```
+python3 main_partseg.py --exp_name=curvenet_seg_1
+```
+
+Train with customized settings with the flags: ```--lr```, ```--scheduler```, ```--batch_size```.
+
+Alternatively, you can directly modify ```core/start_part.sh``` and simply run:
+
+```
+./start_part.sh
+```
+
+**NOTE:** Our reported model achieves **86.6%/86.8%** mIoU (see sections below). However, due to randomness, reproducing the best result may require multiple training runs. Hence, we also report an average over 5 runs with different random seeds: **86.46%** mIoU.
+
+
+
+### Evaluation
+
+Evaluate without voting:
+```
+python3 main_partseg.py --exp_name=curvenet_seg_1 --eval=True --model_path=PATH_TO_YOUR_MODEL
+```
+
+Alternatively, you can directly modify ```core/test_cls.sh``` and simply run:
+```
+./test_cls.sh
+```
+
+For voting, we used the ```voting_evaluate_partseg.py``` script provided in [RSCNN](https://github.com/Yochengliu/Relation-Shape-CNN). Please refer to their license for usage.
+
+### Evaluation with our pretrained model:
+
+Please download our pretrained model ```partseg/``` at [google drive](https://drive.google.com/drive/folders/1kX-zIipyzB0iMaopcijzdTRuHeTzfTSz?usp=sharing).
+
+And then run:
+
+```
+python3 main_partseg.py --exp_name=curvenet_seg_pretrained --eval=True --model_path=PATH_TO_PRETRAINED/partseg/models/model.t7
+```
+
+
+## Point Cloud Normal Estimation
+
+### Data
+
+The ModelNet40 dataset is used for the normal estimation experiments. We have preprocessed the raw ModelNet40 dataset into ```.h5``` files. Each point cloud instance contains 2048 randomly sampled points and point-to-point normal ground truths.
+
+Please download our processed data [here](https://drive.google.com/file/d/1j6lB3ZOF0_x_l9bqdchAxIYBi7Devie8/view?usp=sharing) and place it to ```data/```, or you need to specify the data root path in ```core/data.py```.
+
+### Train
+
+Train with our default settings (same as in the paper):
+
+```
+python3 main_normal.py --exp_name=curvenet_normal_1
+```
+
+Train with customized settings with the flags: ```--multiplier```, ```--lr```, ```--scheduler```, ```--batch_size```.
+
+Alternatively, you can directly modify ```core/start_normal.sh``` and simply run:
+
+```
+./start_normal.sh
+```
+
+### Evaluation
+
+Evaluate without voting:
+```
+python3 main_normal.py --exp_name=curvenet_normal_1 --eval=True --model_path=PATH_TO_YOUR_MODEL
+```
+
+Alternatively, you can directly modify ```core/test_normal.sh``` and simply run:
+```
+./test_normal.sh
+```
+
+### Evaluation with our pretrained model:
+
+Please download our pretrained model ```normal/``` at [google drive](https://drive.google.com/drive/folders/1kX-zIipyzB0iMaopcijzdTRuHeTzfTSz?usp=sharing).
+
+And then run:
+
+```
+python3 main_normal.py --exp_name=curvenet_normal_pretrained --eval=True --model_path=PATH_TO_PRETRAINED/normal/models/model.t7
+```
+
+## Citation
+
+If you find this repo useful in your work or research, please cite:
+
+```
+@InProceedings{Xiang_2021_ICCV,
+ author = {Xiang, Tiange and Zhang, Chaoyi and Song, Yang and Yu, Jianhui and Cai, Weidong},
+ title = {Walk in the Cloud: Learning Curves for Point Clouds Shape Analysis},
+ booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
+ month = {October},
+ year = {2021},
+ pages = {915-924}
+}
+```
+
+## Acknowledgement
+
+Our code borrows a lot from:
+- [DGCNN](https://github.com/WangYueFt/dgcnn)
+- [DGCNN.pytorch](https://github.com/AnTao97/dgcnn.pytorch)
+- [CloserLook3D](https://github.com/zeliu98/CloserLook3D)
diff --git a/zoo/CurveNet/core/data.py b/zoo/CurveNet/core/data.py
new file mode 100644
index 0000000..161d622
--- /dev/null
+++ b/zoo/CurveNet/core/data.py
@@ -0,0 +1,195 @@
+"""
+@Author: Yue Wang
+@Contact: yuewangx@mit.edu
+@File: data.py
+@Time: 2018/10/13 6:21 PM
+
+Modified by
+@Author: Tiange Xiang
+@Contact: txia7609@uni.sydney.edu.au
+@Time: 2021/1/21 3:10 PM
+"""
+
+
+import os
+import sys
+import glob
+import h5py
+import numpy as np
+import torch
+from torch.utils.data import Dataset
+
+
+# change this to your data root
+DATA_DIR = '../data/'
+
+def download_modelnet40():
+ if not os.path.exists(DATA_DIR):
+ os.mkdir(DATA_DIR)
+ if not os.path.exists(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048')):
+ os.mkdir(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048'))
+ www = 'https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip'
+ zipfile = os.path.basename(www)
+ os.system('wget %s --no-check-certificate; unzip %s' % (www, zipfile))
+ os.system('mv %s %s' % (zipfile[:-4], DATA_DIR))
+ os.system('rm %s' % (zipfile))
+
+
+def download_shapenetpart():
+ if not os.path.exists(DATA_DIR):
+ os.mkdir(DATA_DIR)
+ if not os.path.exists(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data')):
+ os.mkdir(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data'))
+ www = 'https://shapenet.cs.stanford.edu/media/shapenet_part_seg_hdf5_data.zip'
+ zipfile = os.path.basename(www)
+ os.system('wget %s --no-check-certificate; unzip %s' % (www, zipfile))
+ os.system('mv %s %s' % (zipfile[:-4], os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data')))
+ os.system('rm %s' % (zipfile))
+
+
+def load_data_normal(partition):
+ f = h5py.File(os.path.join(DATA_DIR, 'modelnet40_normal', 'normal_%s.h5'%partition), 'r+')
+ data = f['xyz'][:].astype('float32')
+ label = f['normal'][:].astype('float32')
+ f.close()
+ return data, label
+
+
+def load_data_cls(partition):
+ download_modelnet40()
+ all_data = []
+ all_label = []
+ for h5_name in glob.glob(os.path.join(DATA_DIR, 'modelnet40*hdf5_2048', '*%s*.h5'%partition)):
+ f = h5py.File(h5_name, 'r+')
+ data = f['data'][:].astype('float32')
+ label = f['label'][:].astype('int64')
+ f.close()
+ all_data.append(data)
+ all_label.append(label)
+ all_data = np.concatenate(all_data, axis=0)
+ all_label = np.concatenate(all_label, axis=0)
+ return all_data, all_label
+
+
+def load_data_partseg(partition):
+ download_shapenetpart()
+ all_data = []
+ all_label = []
+ all_seg = []
+ if partition == 'trainval':
+ file = glob.glob(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data', 'hdf5_data', '*train*.h5')) \
+ + glob.glob(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data', 'hdf5_data', '*val*.h5'))
+ else:
+ file = glob.glob(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data', 'hdf5_data', '*%s*.h5'%partition))
+ for h5_name in file:
+ f = h5py.File(h5_name, 'r+')
+ data = f['data'][:].astype('float32')
+ label = f['label'][:].astype('int64')
+ seg = f['pid'][:].astype('int64')
+ f.close()
+ all_data.append(data)
+ all_label.append(label)
+ all_seg.append(seg)
+ all_data = np.concatenate(all_data, axis=0)
+ all_label = np.concatenate(all_label, axis=0)
+ all_seg = np.concatenate(all_seg, axis=0)
+ return all_data, all_label, all_seg
+
+
+def translate_pointcloud(pointcloud):
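+    # random anisotropic scaling in [2/3, 3/2] plus a translation in [-0.2, 0.2]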
+ xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])
+ xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])
+
+ translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')
+ return translated_pointcloud
+
+
+def jitter_pointcloud(pointcloud, sigma=0.01, clip=0.02):
+ N, C = pointcloud.shape
+ pointcloud += np.clip(sigma * np.random.randn(N, C), -1*clip, clip)
+ return pointcloud
+
+
+def rotate_pointcloud(pointcloud):
+ theta = np.pi*2 * np.random.uniform()
+ rotation_matrix = np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]])
+ pointcloud[:,[0,2]] = pointcloud[:,[0,2]].dot(rotation_matrix) # random rotation (x,z)
+ return pointcloud
+
+
+class ModelNet40(Dataset):
+ def __init__(self, num_points, partition='train'):
+ self.data, self.label = load_data_cls(partition)
+ self.num_points = num_points
+ self.partition = partition
+
+ def __getitem__(self, item):
+ pointcloud = self.data[item][:self.num_points]
+ label = self.label[item]
+ if self.partition == 'train':
+ pointcloud = translate_pointcloud(pointcloud)
+ #pointcloud = rotate_pointcloud(pointcloud)
+ np.random.shuffle(pointcloud)
+ return pointcloud, label
+
+ def __len__(self):
+ return self.data.shape[0]
+
+class ModelNetNormal(Dataset):
+ def __init__(self, num_points, partition='train'):
+ self.data, self.label = load_data_normal(partition)
+ self.num_points = num_points
+ self.partition = partition
+
+ def __getitem__(self, item):
+ pointcloud = self.data[item][:self.num_points]
+ label = self.label[item][:self.num_points]
+ if self.partition == 'train':
+ #pointcloud = translate_pointcloud(pointcloud)
+ idx = np.arange(0, pointcloud.shape[0], dtype=np.int64)
+ np.random.shuffle(idx)
+ pointcloud = self.data[item][idx]
+ label = self.label[item][idx]
+ return pointcloud, label
+
+ def __len__(self):
+ return self.data.shape[0]
+
+class ShapeNetPart(Dataset):
+ def __init__(self, num_points=2048, partition='train', class_choice=None):
+ self.data, self.label, self.seg = load_data_partseg(partition)
+ self.cat2id = {'airplane': 0, 'bag': 1, 'cap': 2, 'car': 3, 'chair': 4,
+ 'earphone': 5, 'guitar': 6, 'knife': 7, 'lamp': 8, 'laptop': 9,
+ 'motor': 10, 'mug': 11, 'pistol': 12, 'rocket': 13, 'skateboard': 14, 'table': 15}
+ self.seg_num = [4, 2, 2, 4, 4, 3, 3, 2, 4, 2, 6, 2, 3, 3, 3, 3]
+ self.index_start = [0, 4, 6, 8, 12, 16, 19, 22, 24, 28, 30, 36, 38, 41, 44, 47]
+ self.num_points = num_points
+ self.partition = partition
+ self.class_choice = class_choice
+
+ if self.class_choice != None:
+ id_choice = self.cat2id[self.class_choice]
+ indices = (self.label == id_choice).squeeze()
+ self.data = self.data[indices]
+ self.label = self.label[indices]
+ self.seg = self.seg[indices]
+ self.seg_num_all = self.seg_num[id_choice]
+ self.seg_start_index = self.index_start[id_choice]
+ else:
+ self.seg_num_all = 50
+ self.seg_start_index = 0
+
+ def __getitem__(self, item):
+ pointcloud = self.data[item][:self.num_points]
+ label = self.label[item]
+ seg = self.seg[item][:self.num_points]
+ if self.partition == 'trainval':
+ pointcloud = translate_pointcloud(pointcloud)
+ indices = list(range(pointcloud.shape[0]))
+ np.random.shuffle(indices)
+ pointcloud = pointcloud[indices]
+ seg = seg[indices]
+ return pointcloud, label, seg
+
+ def __len__(self):
+ return self.data.shape[0]
diff --git a/zoo/CurveNet/core/main_cls.py b/zoo/CurveNet/core/main_cls.py
new file mode 100644
index 0000000..46d01a1
--- /dev/null
+++ b/zoo/CurveNet/core/main_cls.py
@@ -0,0 +1,265 @@
+"""
+@Author: Yue Wang
+@Contact: yuewangx@mit.edu
+@File: main_cls.py
+@Time: 2018/10/13 10:39 PM
+
+Modified by
+@Author: Tiange Xiang
+@Contact: txia7609@uni.sydney.edu.au
+@Time: 2021/01/21 3:10 PM
+"""
+
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.optim as optim
+from torch.optim.lr_scheduler import CosineAnnealingLR, MultiStepLR
+from data import ModelNet40
+from models.curvenet_cls import CurveNet
+import numpy as np
+from torch.utils.data import DataLoader
+from util import cal_loss, IOStream
+import sklearn.metrics as metrics
+from modelnetc_utils import eval_corrupt_wrapper, ModelNetC
+
+
+def _init_():
+ # fix random seed
+ torch.manual_seed(seed)
+ np.random.seed(seed)
+ torch.cuda.manual_seed_all(seed)
+ torch.cuda.manual_seed(seed)
+ torch.set_printoptions(10)
+ torch.backends.cudnn.benchmark = False
+ torch.backends.cudnn.deterministic = True
+ os.environ['PYTHONHASHSEED'] = str(seed)
+
+ # prepare file structures
+ if not os.path.exists('../checkpoints'):
+ os.makedirs('../checkpoints')
+ if not os.path.exists('../checkpoints/'+args.exp_name):
+ os.makedirs('../checkpoints/'+args.exp_name)
+ if not os.path.exists('../checkpoints/'+args.exp_name+'/'+'models'):
+ os.makedirs('../checkpoints/'+args.exp_name+'/'+'models')
+ os.system('cp main_cls.py ../checkpoints/'+args.exp_name+'/main_cls.py.backup')
+ os.system('cp models/curvenet_cls.py ../checkpoints/'+args.exp_name+'/curvenet_cls.py.backup')
+
+def train(args, io):
+ train_loader = DataLoader(ModelNet40(partition='train', num_points=args.num_points), num_workers=8,
+ batch_size=args.batch_size, shuffle=True, drop_last=True)
+ test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points), num_workers=8,
+ batch_size=args.test_batch_size, shuffle=False, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+ io.cprint("Let's use" + str(torch.cuda.device_count()) + "GPUs!")
+
+ # create model
+ model = CurveNet().to(device)
+ model = nn.DataParallel(model)
+
+ if args.use_sgd:
+ io.cprint("Use SGD")
+ opt = optim.SGD(model.parameters(), lr=args.lr*100, momentum=args.momentum, weight_decay=1e-4)
+ else:
+ io.cprint("Use Adam")
+ opt = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-4)
+
+ if args.scheduler == 'cos':
+ scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=1e-3)
+ elif args.scheduler == 'step':
+ scheduler = MultiStepLR(opt, [120, 160], gamma=0.1)
+
+ criterion = cal_loss
+
+ best_test_acc = 0
+ for epoch in range(args.epochs):
+ ####################
+ # Train
+ ####################
+ train_loss = 0.0
+ count = 0.0
+ model.train()
+ train_pred = []
+ train_true = []
+ for data, label in train_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ opt.zero_grad()
+ logits = model(data)
+ loss = criterion(logits, label)
+ loss.backward()
+ torch.nn.utils.clip_grad_norm_(model.parameters(), 1)
+ opt.step()
+ preds = logits.max(dim=1)[1]
+ count += batch_size
+ train_loss += loss.item() * batch_size
+ train_true.append(label.cpu().numpy())
+ train_pred.append(preds.detach().cpu().numpy())
+ if args.scheduler == 'cos':
+ scheduler.step()
+ elif args.scheduler == 'step':
+ if opt.param_groups[0]['lr'] > 1e-5:
+ scheduler.step()
+ if opt.param_groups[0]['lr'] < 1e-5:
+ for param_group in opt.param_groups:
+ param_group['lr'] = 1e-5
+
+ train_true = np.concatenate(train_true)
+ train_pred = np.concatenate(train_pred)
+ outstr = 'Train %d, loss: %.6f, train acc: %.6f' % (epoch, train_loss*1.0/count,
+ metrics.accuracy_score(
+ train_true, train_pred))
+ io.cprint(outstr)
+
+ ####################
+ # Test
+ ####################
+ test_loss = 0.0
+ count = 0.0
+ model.eval()
+ test_pred = []
+ test_true = []
+ for data, label in test_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ logits = model(data)
+ loss = criterion(logits, label)
+ preds = logits.max(dim=1)[1]
+ count += batch_size
+ test_loss += loss.item() * batch_size
+ test_true.append(label.cpu().numpy())
+ test_pred.append(preds.detach().cpu().numpy())
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ outstr = 'Test %d, loss: %.6f, test acc: %.6f' % (epoch, test_loss*1.0/count, test_acc)
+ io.cprint(outstr)
+ if test_acc >= best_test_acc:
+ best_test_acc = test_acc
+ torch.save(model.state_dict(), '../checkpoints/%s/models/model.t7' % args.exp_name)
+ io.cprint('best: %.3f' % best_test_acc)
+
+def test(args, io):
+ test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points),
+ batch_size=args.test_batch_size, shuffle=False, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ #Try to load models
+ model = CurveNet().to(device)
+ model = nn.DataParallel(model)
+ model.load_state_dict(torch.load(args.model_path))
+
+ model = model.eval()
+ test_acc = 0.0
+ count = 0.0
+ test_true = []
+ test_pred = []
+ for data, label in test_loader:
+
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ logits = model(data)
+ preds = logits.max(dim=1)[1]
+ test_true.append(label.cpu().numpy())
+ test_pred.append(preds.detach().cpu().numpy())
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ outstr = 'Test :: test acc: %.6f'%(test_acc)
+ io.cprint(outstr)
+
+
+if __name__ == "__main__":
+ # Training settings
+ parser = argparse.ArgumentParser(description='Point Cloud Recognition')
+ parser.add_argument('--exp_name', type=str, default='exp', metavar='N',
+ help='Name of the experiment')
+ parser.add_argument('--dataset', type=str, default='modelnet40', metavar='N',
+ choices=['modelnet40'])
+    parser.add_argument('--batch_size', type=int, default=32, metavar='batch_size',
+                        help='Size of batch')
+    parser.add_argument('--test_batch_size', type=int, default=16, metavar='batch_size',
+                        help='Size of batch')
+    parser.add_argument('--epochs', type=int, default=200, metavar='N',
+                        help='number of epochs to train')
+ parser.add_argument('--use_sgd', type=bool, default=True,
+ help='Use SGD')
+ parser.add_argument('--lr', type=float, default=0.001, metavar='LR',
+ help='learning rate (default: 0.001, 0.1 if using sgd)')
+ parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
+ help='SGD momentum (default: 0.9)')
+ parser.add_argument('--scheduler', type=str, default='cos', metavar='N',
+ choices=['cos', 'step'],
+ help='Scheduler to use, [cos, step]')
+ parser.add_argument('--no_cuda', type=bool, default=False,
+ help='enables CUDA training')
+ parser.add_argument('--eval', type=bool, default=False,
+ help='evaluate the model')
+ parser.add_argument('--eval_corrupt', type=bool, default=False,
+ help='evaluate the model under corruption')
+ parser.add_argument('--num_points', type=int, default=1024,
+ help='num of points to use')
+ parser.add_argument('--model_path', type=str, default='', metavar='N',
+ help='Pretrained model path')
+ args = parser.parse_args()
+
+ seed = np.random.randint(1, 10000)
+
+ _init_()
+
+ if args.eval or args.eval_corrupt:
+ io = IOStream('../checkpoints/' + args.exp_name + '/eval.log')
+ else:
+ io = IOStream('../checkpoints/' + args.exp_name + '/run.log')
+ io.cprint(str(args))
+ io.cprint('random seed is: ' + str(seed))
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+
+ if args.cuda:
+ io.cprint(
+ 'Using GPU : ' + str(torch.cuda.current_device()) + ' from ' + str(torch.cuda.device_count()) + ' devices')
+ else:
+ io.cprint('Using CPU')
+
+ if not args.eval and not args.eval_corrupt:
+ train(args, io)
+ elif args.eval:
+ with torch.no_grad():
+ test(args, io)
+ elif args.eval_corrupt:
+ with torch.no_grad():
+ device = torch.device("cuda" if args.cuda else "cpu")
+ model = CurveNet().to(device)
+ model = nn.DataParallel(model)
+ model.load_state_dict(torch.load(args.model_path))
+ model = model.eval()
+
+ def test_corrupt(args, split, model):
+ test_loader = DataLoader(ModelNetC(split=split),
+ batch_size=args.test_batch_size, shuffle=True, drop_last=False)
+ test_true = []
+ test_pred = []
+ for data, label in test_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ logits = model(data)
+ preds = logits.max(dim=1)[1]
+ test_true.append(label.cpu().numpy())
+ test_pred.append(preds.detach().cpu().numpy())
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)
+ return {'acc': test_acc, 'avg_per_class_acc': avg_per_class_acc}
+
+
+ eval_corrupt_wrapper(model, test_corrupt, {'args': args})
diff --git a/zoo/CurveNet/core/main_normal.py b/zoo/CurveNet/core/main_normal.py
new file mode 100644
index 0000000..d700ade
--- /dev/null
+++ b/zoo/CurveNet/core/main_normal.py
@@ -0,0 +1,211 @@
+"""
+@Author: Tiange Xiang
+@Contact: txia7609@uni.sydney.edu.au
+@File: main_normal.py
+@Time: 2021/01/21 3:10 PM
+"""
+
+
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.optim as optim
+from torch.optim.lr_scheduler import CosineAnnealingLR, MultiStepLR
+from data import ModelNetNormal
+from models.curvenet_normal import CurveNet
+import numpy as np
+from torch.utils.data import DataLoader
+from util import IOStream
+
+
+def _init_():
+ # fix random seed
+ torch.manual_seed(seed)
+ np.random.seed(seed)
+ torch.cuda.manual_seed_all(seed)
+ torch.cuda.manual_seed(seed)
+ torch.set_printoptions(10)
+ torch.backends.cudnn.benchmark = False
+ torch.backends.cudnn.deterministic = True
+ os.environ['PYTHONHASHSEED'] = str(seed)
+
+ # prepare file structures
+ if not os.path.exists('../checkpoints'):
+ os.makedirs('../checkpoints')
+ if not os.path.exists('../checkpoints/'+args.exp_name):
+ os.makedirs('../checkpoints/'+args.exp_name)
+ if not os.path.exists('../checkpoints/'+args.exp_name+'/'+'models'):
+ os.makedirs('../checkpoints/'+args.exp_name+'/'+'models')
+ os.system('cp main_normal.py ../checkpoints/'+args.exp_name+'/main_normal.py.backup')
+ os.system('cp models/curvenet_normal.py ../checkpoints/'+args.exp_name+'/curvenet_normal.py.backup')
+
+def train(args, io):
+ train_loader = DataLoader(ModelNetNormal(args.num_points, partition='train'),
+ num_workers=8, batch_size=args.batch_size, shuffle=True, drop_last=True)
+ test_loader = DataLoader(ModelNetNormal(args.num_points, partition='test'),
+ num_workers=8, batch_size=args.test_batch_size, shuffle=False, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ # create model
+ model = CurveNet(args.multiplier).to(device)
+ model = nn.DataParallel(model)
+ io.cprint("Let's use" + str(torch.cuda.device_count()) + "GPUs!")
+
+ if args.use_sgd:
+ io.cprint("Use SGD")
+ opt = optim.SGD(model.parameters(), lr=args.lr*100, momentum=args.momentum, weight_decay=1e-4)
+ else:
+ io.cprint("Use Adam")
+ opt = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-4)
+
+ if args.scheduler == 'cos':
+ scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=1e-3)
+ elif args.scheduler == 'step':
+ scheduler = MultiStepLR(opt, [140, 180], gamma=0.1)
+
+ criterion = torch.nn.CosineEmbeddingLoss()
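+    # CosineEmbeddingLoss(x1, x2, y) with target y = 1 computes 1 - cos(x1, x2),
+    # so minimizing it maximizes the cosine similarity between the predicted and
+    # ground-truth normal vectors.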
+
+ best_test_loss = 99
+ for epoch in range(args.epochs):
+ ####################
+ # Train
+ ####################
+ train_loss = 0.0
+ count = 0.0
+ model.train()
+ for data, seg in train_loader:
+ data, seg = data.to(device), seg.to(device)
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ opt.zero_grad()
+ seg_pred = model(data)
+ seg_pred = seg_pred.permute(0, 2, 1).contiguous()
+ #print(seg_pred.shape, seg.shape)
+            loss = criterion(seg_pred.view(-1, 3), seg.view(-1, 3).squeeze(), torch.tensor(1, device=device))
+ loss.backward()
+ torch.nn.utils.clip_grad_norm_(model.parameters(), 1)
+ opt.step()
+ count += batch_size
+ train_loss += loss.item() * batch_size
+
+ if args.scheduler == 'cos':
+ scheduler.step()
+ elif args.scheduler == 'step':
+ if opt.param_groups[0]['lr'] > 1e-5:
+ scheduler.step()
+ if opt.param_groups[0]['lr'] < 1e-5:
+ for param_group in opt.param_groups:
+ param_group['lr'] = 1e-5
+
+ outstr = 'Train %d, loss: %.6f' % (epoch, train_loss/count)
+ io.cprint(outstr)
+
+ ####################
+ # Test
+ ####################
+ test_loss = 0.0
+ count = 0.0
+ model.eval()
+ for data, seg in test_loader:
+ data, seg = data.to(device), seg.to(device)
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ seg_pred = model(data)
+ seg_pred = seg_pred.permute(0, 2, 1).contiguous()
+
+            loss = criterion(seg_pred.view(-1, 3), seg.view(-1, 3).squeeze(), torch.tensor(1, device=device))
+ count += batch_size
+ test_loss += loss.item() * batch_size
+
+ if test_loss*1.0/count <= best_test_loss:
+ best_test_loss = test_loss*1.0/count
+ torch.save(model.state_dict(), '../checkpoints/%s/models/model.t7' % args.exp_name)
+ outstr = 'Test %d, loss: %.6f, best loss %.6f' % (epoch, test_loss/count, best_test_loss)
+ io.cprint(outstr)
+
+def test(args, io):
+ test_loader = DataLoader(ModelNetNormal(args.num_points, partition='test'),
+ batch_size=args.test_batch_size, shuffle=False, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ #Try to load models
+ model = CurveNet(args.multiplier).to(device)
+ model = nn.DataParallel(model)
+ model.load_state_dict(torch.load(args.model_path))
+
+ criterion = torch.nn.CosineEmbeddingLoss()
+
+ model = model.eval()
+ test_loss = 0.0
+ count = 0
+ for data, seg in test_loader:
+ data, seg = data.to(device), seg.to(device)
+ #print(data.shape, seg.shape)
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ seg_pred = model(data)
+ seg_pred = seg_pred.permute(0, 2, 1).contiguous()
+        loss = criterion(seg_pred.view(-1, 3), seg.view(-1, 3).squeeze(), torch.tensor(1, device=device))
+ count += batch_size
+ test_loss += loss.item() * batch_size
+ outstr = 'Test :: test loss: %.6f' % (test_loss*1.0/count)
+ io.cprint(outstr)
+
+
+if __name__ == "__main__":
+ # Training settings
+ parser = argparse.ArgumentParser(description='Point Cloud Part Segmentation')
+ parser.add_argument('--exp_name', type=str, default='exp', metavar='N',
+ help='Name of the experiment')
+ parser.add_argument('--batch_size', type=int, default=32, metavar='batch_size',
+                        help='Size of batch')
+ parser.add_argument('--test_batch_size', type=int, default=16, metavar='batch_size',
+                        help='Size of batch')
+ parser.add_argument('--epochs', type=int, default=200, metavar='N',
+                        help='number of epochs to train')
+ parser.add_argument('--use_sgd', type=bool, default=True,
+ help='Use SGD')
+ parser.add_argument('--lr', type=float, default=0.0005, metavar='LR',
+ help='learning rate')
+ parser.add_argument('--multiplier', type=float, default=2.0, metavar='MP',
+ help='network expansion multiplier')
+ parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
+ help='SGD momentum (default: 0.9)')
+ parser.add_argument('--scheduler', type=str, default='cos', metavar='N',
+ choices=['cos', 'step'],
+ help='Scheduler to use, [cos, step]')
+ parser.add_argument('--no_cuda', type=bool, default=False,
+ help='enables CUDA training')
+ parser.add_argument('--eval', type=bool, default=False,
+ help='evaluate the model')
+ parser.add_argument('--num_points', type=int, default=1024,
+ help='num of points to use')
+ parser.add_argument('--model_path', type=str, default='', metavar='N',
+ help='Pretrained model path')
+ args = parser.parse_args()
+
+ seed = np.random.randint(1, 10000)
+
+ _init_()
+
+ io = IOStream('../checkpoints/' + args.exp_name + '/run.log')
+ io.cprint(str(args))
+ io.cprint('random seed is: ' + str(seed))
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+ if args.cuda:
+ io.cprint(
+ 'Using GPU : ' + str(torch.cuda.current_device()) + ' from ' + str(torch.cuda.device_count()) + ' devices')
+ else:
+ io.cprint('Using CPU')
+
+ if not args.eval:
+ train(args, io)
+ else:
+ with torch.no_grad():
+ test(args, io)
diff --git a/zoo/CurveNet/core/main_partseg.py b/zoo/CurveNet/core/main_partseg.py
new file mode 100644
index 0000000..355682b
--- /dev/null
+++ b/zoo/CurveNet/core/main_partseg.py
@@ -0,0 +1,349 @@
+"""
+@Author: An Tao
+@Contact: ta19@mails.tsinghua.edu.cn
+@File: main_partseg.py
+@Time: 2019/12/31 11:17 AM
+
+Modified by
+@Author: Tiange Xiang
+@Contact: txia7609@uni.sydney.edu.au
+@Time: 2021/01/21 3:10 PM
+"""
+
+
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.optim as optim
+from torch.optim.lr_scheduler import CosineAnnealingLR, StepLR, MultiStepLR
+from data import ShapeNetPart
+from models.curvenet_seg import CurveNet
+import numpy as np
+from torch.utils.data import DataLoader
+from util import cal_loss, IOStream
+import sklearn.metrics as metrics
+
+seg_num = [4, 2, 2, 4, 4, 3, 3, 2, 4, 2, 6, 2, 3, 3, 3, 3]
+index_start = [0, 4, 6, 8, 12, 16, 19, 22, 24, 28, 30, 36, 38, 41, 44, 47]
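+# seg_num[i] is the number of parts in ShapeNetPart category i; index_start[i]
+# is the offset of that category's first part label in the global 50-class
+# part-label space (sum(seg_num) == 50).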
+
+def _init_():
+ # fix random seed
+ torch.manual_seed(seed)
+ np.random.seed(seed)
+ torch.cuda.manual_seed_all(seed)
+ torch.cuda.manual_seed(seed)
+ torch.set_printoptions(10)
+ torch.backends.cudnn.benchmark = False
+ torch.backends.cudnn.deterministic = True
+ os.environ['PYTHONHASHSEED'] = str(seed)
+
+ # prepare file structures
+ if not os.path.exists('../checkpoints'):
+ os.makedirs('../checkpoints')
+ if not os.path.exists('../checkpoints/'+args.exp_name):
+ os.makedirs('../checkpoints/'+args.exp_name)
+ if not os.path.exists('../checkpoints/'+args.exp_name+'/'+'models'):
+ os.makedirs('../checkpoints/'+args.exp_name+'/'+'models')
+ os.system('cp main_partseg.py ../checkpoints/'+args.exp_name+'/main_partseg.py.backup')
+ os.system('cp models/curvenet_seg.py ../checkpoints/'+args.exp_name+'/curvenet_seg.py.backup')
+
+def calculate_shape_IoU(pred_np, seg_np, label, class_choice, eva=False):
+ label = label.squeeze()
+ shape_ious = []
+ category = {}
+ for shape_idx in range(seg_np.shape[0]):
+ if not class_choice:
+ start_index = index_start[label[shape_idx]]
+ num = seg_num[label[shape_idx]]
+ parts = range(start_index, start_index + num)
+ else:
+ parts = range(seg_num[label[0]])
+ part_ious = []
+ for part in parts:
+ I = np.sum(np.logical_and(pred_np[shape_idx] == part, seg_np[shape_idx] == part))
+ U = np.sum(np.logical_or(pred_np[shape_idx] == part, seg_np[shape_idx] == part))
+ if U == 0:
+ iou = 1 # If the union of groundtruth and prediction points is empty, then count part IoU as 1
+ else:
+ iou = I / float(U)
+ part_ious.append(iou)
+ shape_ious.append(np.mean(part_ious))
+ if label[shape_idx] not in category:
+ category[label[shape_idx]] = [shape_ious[-1]]
+ else:
+ category[label[shape_idx]].append(shape_ious[-1])
+
+ if eva:
+ return shape_ious, category
+ else:
+ return shape_ious
+
+def train(args, io):
+ train_dataset = ShapeNetPart(partition='trainval', num_points=args.num_points, class_choice=args.class_choice)
+    if len(train_dataset) < 100:
+ drop_last = False
+ else:
+ drop_last = True
+ train_loader = DataLoader(train_dataset, num_workers=8, batch_size=args.batch_size, shuffle=True, drop_last=drop_last)
+ test_loader = DataLoader(ShapeNetPart(partition='test', num_points=args.num_points, class_choice=args.class_choice),
+ num_workers=8, batch_size=args.test_batch_size, shuffle=False, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+ io.cprint("Let's use" + str(torch.cuda.device_count()) + "GPUs!")
+
+ seg_num_all = train_loader.dataset.seg_num_all
+ seg_start_index = train_loader.dataset.seg_start_index
+
+ # create model
+ model = CurveNet().to(device)
+ model = nn.DataParallel(model)
+
+ if args.use_sgd:
+ print("Use SGD")
+ opt = optim.SGD(model.parameters(), lr=args.lr*100, momentum=args.momentum, weight_decay=1e-4)
+ else:
+ print("Use Adam")
+ opt = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-4)
+
+ if args.scheduler == 'cos':
+ scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=1e-3)
+ elif args.scheduler == 'step':
+ scheduler = MultiStepLR(opt, [140, 180], gamma=0.1)
+ criterion = cal_loss
+
+ best_test_iou = 0
+ for epoch in range(args.epochs):
+ ####################
+ # Train
+ ####################
+ train_loss = 0.0
+ count = 0.0
+ model.train()
+ train_true_cls = []
+ train_pred_cls = []
+ train_true_seg = []
+ train_pred_seg = []
+ train_label_seg = []
+ for data, label, seg in train_loader:
+ seg = seg - seg_start_index
+ label_one_hot = np.zeros((label.shape[0], 16))
+ for idx in range(label.shape[0]):
+ label_one_hot[idx, label[idx]] = 1
+ label_one_hot = torch.from_numpy(label_one_hot.astype(np.float32))
+ data, label_one_hot, seg = data.to(device), label_one_hot.to(device), seg.to(device)
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ opt.zero_grad()
+ seg_pred = model(data, label_one_hot)
+ seg_pred = seg_pred.permute(0, 2, 1).contiguous()
+ loss = criterion(seg_pred.view(-1, seg_num_all), seg.view(-1,1).squeeze())
+ loss.backward()
+ torch.nn.utils.clip_grad_norm_(model.parameters(), 1)
+ opt.step()
+ pred = seg_pred.max(dim=2)[1] # (batch_size, num_points)
+ count += batch_size
+ train_loss += loss.item() * batch_size
+ seg_np = seg.cpu().numpy() # (batch_size, num_points)
+ pred_np = pred.detach().cpu().numpy() # (batch_size, num_points)
+ train_true_cls.append(seg_np.reshape(-1)) # (batch_size * num_points)
+ train_pred_cls.append(pred_np.reshape(-1)) # (batch_size * num_points)
+ train_true_seg.append(seg_np)
+ train_pred_seg.append(pred_np)
+ train_label_seg.append(label.reshape(-1))
+ if args.scheduler == 'cos':
+ scheduler.step()
+ elif args.scheduler == 'step':
+ if opt.param_groups[0]['lr'] > 1e-5:
+ scheduler.step()
+ if opt.param_groups[0]['lr'] < 1e-5:
+ for param_group in opt.param_groups:
+ param_group['lr'] = 1e-5
+ train_true_cls = np.concatenate(train_true_cls)
+ train_pred_cls = np.concatenate(train_pred_cls)
+ train_acc = metrics.accuracy_score(train_true_cls, train_pred_cls)
+ avg_per_class_acc = metrics.balanced_accuracy_score(train_true_cls, train_pred_cls)
+ train_true_seg = np.concatenate(train_true_seg, axis=0)
+ train_pred_seg = np.concatenate(train_pred_seg, axis=0)
+ train_label_seg = np.concatenate(train_label_seg)
+ train_ious = calculate_shape_IoU(train_pred_seg, train_true_seg, train_label_seg, args.class_choice)
+ outstr = 'Train %d, loss: %.6f, train acc: %.6f, train avg acc: %.6f, train iou: %.6f' % (epoch,
+ train_loss*1.0/count,
+ train_acc,
+ avg_per_class_acc,
+ np.mean(train_ious))
+ io.cprint(outstr)
+
+ ####################
+ # Test
+ ####################
+ test_loss = 0.0
+ count = 0.0
+ model.eval()
+ test_true_cls = []
+ test_pred_cls = []
+ test_true_seg = []
+ test_pred_seg = []
+ test_label_seg = []
+ for data, label, seg in test_loader:
+ seg = seg - seg_start_index
+ label_one_hot = np.zeros((label.shape[0], 16))
+ for idx in range(label.shape[0]):
+ label_one_hot[idx, label[idx]] = 1
+ label_one_hot = torch.from_numpy(label_one_hot.astype(np.float32))
+ data, label_one_hot, seg = data.to(device), label_one_hot.to(device), seg.to(device)
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ seg_pred = model(data, label_one_hot)
+ seg_pred = seg_pred.permute(0, 2, 1).contiguous()
+ loss = criterion(seg_pred.view(-1, seg_num_all), seg.view(-1,1).squeeze())
+ pred = seg_pred.max(dim=2)[1]
+ count += batch_size
+ test_loss += loss.item() * batch_size
+ seg_np = seg.cpu().numpy()
+ pred_np = pred.detach().cpu().numpy()
+ test_true_cls.append(seg_np.reshape(-1))
+ test_pred_cls.append(pred_np.reshape(-1))
+ test_true_seg.append(seg_np)
+ test_pred_seg.append(pred_np)
+ test_label_seg.append(label.reshape(-1))
+ test_true_cls = np.concatenate(test_true_cls)
+ test_pred_cls = np.concatenate(test_pred_cls)
+ test_acc = metrics.accuracy_score(test_true_cls, test_pred_cls)
+ avg_per_class_acc = metrics.balanced_accuracy_score(test_true_cls, test_pred_cls)
+ test_true_seg = np.concatenate(test_true_seg, axis=0)
+ test_pred_seg = np.concatenate(test_pred_seg, axis=0)
+ test_label_seg = np.concatenate(test_label_seg)
+ test_ious = calculate_shape_IoU(test_pred_seg, test_true_seg, test_label_seg, args.class_choice)
+ outstr = 'Test %d, loss: %.6f, test acc: %.6f, test avg acc: %.6f, test iou: %.6f, best iou %.6f' % (epoch,
+ test_loss*1.0/count,
+ test_acc,
+ avg_per_class_acc,
+ np.mean(test_ious), best_test_iou)
+ io.cprint(outstr)
+ if np.mean(test_ious) >= best_test_iou:
+ best_test_iou = np.mean(test_ious)
+ torch.save(model.state_dict(), '../checkpoints/%s/models/model.t7' % args.exp_name)
+
+
+def test(args, io):
+ test_loader = DataLoader(ShapeNetPart(partition='test', num_points=args.num_points, class_choice=args.class_choice),
+ batch_size=args.test_batch_size, shuffle=True, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ #Try to load models
+ seg_start_index = test_loader.dataset.seg_start_index
+ model = CurveNet().to(device)
+ model = nn.DataParallel(model)
+ model.load_state_dict(torch.load(args.model_path))
+
+ model = model.eval()
+ test_acc = 0.0
+ test_true_cls = []
+ test_pred_cls = []
+ test_true_seg = []
+ test_pred_seg = []
+ test_label_seg = []
+ category = {}
+ for data, label, seg in test_loader:
+ seg = seg - seg_start_index
+ label_one_hot = np.zeros((label.shape[0], 16))
+ for idx in range(label.shape[0]):
+ label_one_hot[idx, label[idx]] = 1
+ label_one_hot = torch.from_numpy(label_one_hot.astype(np.float32))
+ data, label_one_hot, seg = data.to(device), label_one_hot.to(device), seg.to(device)
+ data = data.permute(0, 2, 1)
+ seg_pred = model(data, label_one_hot)
+ seg_pred = seg_pred.permute(0, 2, 1).contiguous()
+ pred = seg_pred.max(dim=2)[1]
+ seg_np = seg.cpu().numpy()
+ pred_np = pred.detach().cpu().numpy()
+ test_true_cls.append(seg_np.reshape(-1))
+ test_pred_cls.append(pred_np.reshape(-1))
+ test_true_seg.append(seg_np)
+ test_pred_seg.append(pred_np)
+ test_label_seg.append(label.reshape(-1))
+
+ test_true_cls = np.concatenate(test_true_cls)
+ test_pred_cls = np.concatenate(test_pred_cls)
+ test_acc = metrics.accuracy_score(test_true_cls, test_pred_cls)
+ avg_per_class_acc = metrics.balanced_accuracy_score(test_true_cls, test_pred_cls)
+ test_true_seg = np.concatenate(test_true_seg, axis=0)
+ test_pred_seg = np.concatenate(test_pred_seg, axis=0)
+ test_label_seg = np.concatenate(test_label_seg)
+    test_ious, category = calculate_shape_IoU(test_pred_seg, test_true_seg, test_label_seg, args.class_choice, eva=True)
+ outstr = 'Test :: test acc: %.6f, test avg acc: %.6f, test iou: %.6f' % (test_acc,
+ avg_per_class_acc,
+ np.mean(test_ious))
+ io.cprint(outstr)
+ results = []
+ for key in category.keys():
+ results.append((int(key), np.mean(category[key]), len(category[key])))
+ results.sort(key=lambda x:x[0])
+ for re in results:
+ io.cprint('idx: %d mIoU: %.3f num: %d' % (re[0], re[1], re[2]))
+
+
+if __name__ == "__main__":
+ # Training settings
+ parser = argparse.ArgumentParser(description='Point Cloud Part Segmentation')
+ parser.add_argument('--exp_name', type=str, default='exp', metavar='N',
+ help='Name of the experiment')
+ parser.add_argument('--dataset', type=str, default='shapenetpart', metavar='N',
+ choices=['shapenetpart'])
+ parser.add_argument('--class_choice', type=str, default=None, metavar='N',
+ choices=['airplane', 'bag', 'cap', 'car', 'chair',
+ 'earphone', 'guitar', 'knife', 'lamp', 'laptop',
+ 'motor', 'mug', 'pistol', 'rocket', 'skateboard', 'table'])
+ parser.add_argument('--batch_size', type=int, default=32, metavar='batch_size',
+                        help='Size of batch')
+ parser.add_argument('--test_batch_size', type=int, default=16, metavar='batch_size',
+                        help='Size of batch')
+ parser.add_argument('--epochs', type=int, default=200, metavar='N',
+                        help='number of epochs to train')
+ parser.add_argument('--use_sgd', type=bool, default=True,
+ help='Use SGD')
+ parser.add_argument('--lr', type=float, default=0.0005, metavar='LR',
+                        help='learning rate (default: 0.0005; scaled by 100 when using SGD)')
+ parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
+ help='SGD momentum (default: 0.9)')
+ parser.add_argument('--scheduler', type=str, default='step', metavar='N',
+ choices=['cos', 'step'],
+ help='Scheduler to use, [cos, step]')
+ parser.add_argument('--no_cuda', type=bool, default=False,
+ help='enables CUDA training')
+ parser.add_argument('--eval', type=bool, default=False,
+ help='evaluate the model')
+ parser.add_argument('--num_points', type=int, default=2048,
+ help='num of points to use')
+ parser.add_argument('--model_path', type=str, default='', metavar='N',
+ help='Pretrained model path')
+ args = parser.parse_args()
+
+ seed = np.random.randint(1, 10000)
+
+ _init_()
+
+ if args.eval:
+ io = IOStream('../checkpoints/' + args.exp_name + '/eval.log')
+ else:
+ io = IOStream('../checkpoints/' + args.exp_name + '/run.log')
+ io.cprint(str(args))
+ io.cprint('random seed is: ' + str(seed))
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+
+ if args.cuda:
+ io.cprint(
+ 'Using GPU : ' + str(torch.cuda.current_device()) + ' from ' + str(torch.cuda.device_count()) + ' devices')
+ else:
+ io.cprint('Using CPU')
+
+ if not args.eval:
+ train(args, io)
+ else:
+ with torch.no_grad():
+ test(args, io)
diff --git a/zoo/CurveNet/core/models/curvenet_cls.py b/zoo/CurveNet/core/models/curvenet_cls.py
new file mode 100644
index 0000000..41960d1
--- /dev/null
+++ b/zoo/CurveNet/core/models/curvenet_cls.py
@@ -0,0 +1,72 @@
+"""
+@Author: Tiange Xiang
+@Contact: txia7609@uni.sydney.edu.au
+@File: curvenet_cls.py
+@Time: 2021/01/21 3:10 PM
+"""
+
+import torch.nn as nn
+import torch.nn.functional as F
+from .curvenet_util import *
+
+
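+# Each per-stage entry is [curve_num, curve_length] forwarded to CurveGrouping
+# (number of walks and steps per walk); None disables curve grouping at that
+# stage.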
+curve_config = {
+ 'default': [[100, 5], [100, 5], None, None],
+ 'long': [[10, 30], None, None, None]
+ }
+
+class CurveNet(nn.Module):
+ def __init__(self, num_classes=40, k=20, setting='default'):
+ super(CurveNet, self).__init__()
+
+ assert setting in curve_config
+
+ additional_channel = 32
+ self.lpfa = LPFA(9, additional_channel, k=k, mlp_num=1, initial=True)
+
+ # encoder
+ self.cic11 = CIC(npoint=1024, radius=0.05, k=k, in_channels=additional_channel, output_channels=64, bottleneck_ratio=2, mlp_num=1, curve_config=curve_config[setting][0])
+ self.cic12 = CIC(npoint=1024, radius=0.05, k=k, in_channels=64, output_channels=64, bottleneck_ratio=4, mlp_num=1, curve_config=curve_config[setting][0])
+
+ self.cic21 = CIC(npoint=1024, radius=0.05, k=k, in_channels=64, output_channels=128, bottleneck_ratio=2, mlp_num=1, curve_config=curve_config[setting][1])
+ self.cic22 = CIC(npoint=1024, radius=0.1, k=k, in_channels=128, output_channels=128, bottleneck_ratio=4, mlp_num=1, curve_config=curve_config[setting][1])
+
+ self.cic31 = CIC(npoint=256, radius=0.1, k=k, in_channels=128, output_channels=256, bottleneck_ratio=2, mlp_num=1, curve_config=curve_config[setting][2])
+ self.cic32 = CIC(npoint=256, radius=0.2, k=k, in_channels=256, output_channels=256, bottleneck_ratio=4, mlp_num=1, curve_config=curve_config[setting][2])
+
+ self.cic41 = CIC(npoint=64, radius=0.2, k=k, in_channels=256, output_channels=512, bottleneck_ratio=2, mlp_num=1, curve_config=curve_config[setting][3])
+ self.cic42 = CIC(npoint=64, radius=0.4, k=k, in_channels=512, output_channels=512, bottleneck_ratio=4, mlp_num=1, curve_config=curve_config[setting][3])
+
+ self.conv0 = nn.Sequential(
+ nn.Conv1d(512, 1024, kernel_size=1, bias=False),
+ nn.BatchNorm1d(1024),
+ nn.ReLU(inplace=True))
+ self.conv1 = nn.Linear(1024 * 2, 512, bias=False)
+ self.conv2 = nn.Linear(512, num_classes)
+ self.bn1 = nn.BatchNorm1d(512)
+ self.dp1 = nn.Dropout(p=0.5)
+
+ def forward(self, xyz):
+ l0_points = self.lpfa(xyz, xyz)
+
+ l1_xyz, l1_points = self.cic11(xyz, l0_points)
+ l1_xyz, l1_points = self.cic12(l1_xyz, l1_points)
+
+ l2_xyz, l2_points = self.cic21(l1_xyz, l1_points)
+ l2_xyz, l2_points = self.cic22(l2_xyz, l2_points)
+
+ l3_xyz, l3_points = self.cic31(l2_xyz, l2_points)
+ l3_xyz, l3_points = self.cic32(l3_xyz, l3_points)
+
+ l4_xyz, l4_points = self.cic41(l3_xyz, l3_points)
+ l4_xyz, l4_points = self.cic42(l4_xyz, l4_points)
+
+ x = self.conv0(l4_points)
+ x_max = F.adaptive_max_pool1d(x, 1)
+ x_avg = F.adaptive_avg_pool1d(x, 1)
+
+ x = torch.cat((x_max, x_avg), dim=1).squeeze(-1)
+ x = F.relu(self.bn1(self.conv1(x).unsqueeze(-1)), inplace=True).squeeze(-1)
+ x = self.dp1(x)
+ x = self.conv2(x)
+ return x
diff --git a/zoo/CurveNet/core/models/curvenet_normal.py b/zoo/CurveNet/core/models/curvenet_normal.py
new file mode 100644
index 0000000..dc88575
--- /dev/null
+++ b/zoo/CurveNet/core/models/curvenet_normal.py
@@ -0,0 +1,88 @@
+"""
+@Author: Tiange Xiang
+@Contact: txia7609@uni.sydney.edu.au
+@File: curvenet_normal.py
+@Time: 2021/01/21 3:10 PM
+"""
+
+import torch.nn as nn
+import torch.nn.functional as F
+from .curvenet_util import *
+
+
+curve_config = {
+ 'default': [[100, 5], [100, 5], None, None]
+ }
+
+class CurveNet(nn.Module):
+ def __init__(self, num_classes=3, k=20, multiplier=1.0, setting='default'):
+ super(CurveNet, self).__init__()
+
+ assert setting in curve_config
+
+ additional_channel = 64
+ channels = [128, 256, 512, 1024]
+ channels = [int(c * multiplier) for c in channels]
+
+ self.lpfa = LPFA(9, additional_channel, k=k, mlp_num=1, initial=True)
+
+ # encoder
+ self.cic11 = CIC(npoint=1024, radius=0.1, k=k, in_channels=additional_channel, output_channels=channels[0], bottleneck_ratio=2, curve_config=curve_config[setting][0])
+ self.cic12 = CIC(npoint=1024, radius=0.1, k=k, in_channels=channels[0], output_channels=channels[0], bottleneck_ratio=4, curve_config=curve_config[setting][0])
+
+ self.cic21 = CIC(npoint=256, radius=0.2, k=k, in_channels=channels[0], output_channels=channels[1], bottleneck_ratio=2, curve_config=curve_config[setting][1])
+ self.cic22 = CIC(npoint=256, radius=0.2, k=k, in_channels=channels[1], output_channels=channels[1], bottleneck_ratio=4, curve_config=curve_config[setting][1])
+
+ self.cic31 = CIC(npoint=64, radius=0.4, k=k, in_channels=channels[1], output_channels=channels[2], bottleneck_ratio=2, curve_config=curve_config[setting][2])
+ self.cic32 = CIC(npoint=64, radius=0.4, k=k, in_channels=channels[2], output_channels=channels[2], bottleneck_ratio=4, curve_config=curve_config[setting][2])
+
+ self.cic41 = CIC(npoint=16, radius=0.8, k=15, in_channels=channels[2], output_channels=channels[3], bottleneck_ratio=2, curve_config=curve_config[setting][3])
+ self.cic42 = CIC(npoint=16, radius=0.8, k=15, in_channels=channels[3], output_channels=channels[3], bottleneck_ratio=4, curve_config=curve_config[setting][3])
+ #self.cic43 = CIC(npoint=16, radius=0.8, k=15, in_channels=2048, output_channels=2048, bottleneck_ratio=4, curve_config=curve_config[setting][3])
+ # decoder
+ self.fp3 = PointNetFeaturePropagation(in_channel=channels[3] + channels[2], mlp=[channels[2], channels[2]], att=[channels[3], channels[3]//2, channels[3]//8])
+ self.up_cic4 = CIC(npoint=64, radius=0.8, k=k, in_channels=channels[2], output_channels=channels[2], bottleneck_ratio=4)
+
+ self.fp2 = PointNetFeaturePropagation(in_channel=channels[2] + channels[1], mlp=[channels[1], channels[1]], att=[channels[2], channels[2]//2, channels[2]//8])
+ self.up_cic3 = CIC(npoint=256, radius=0.4, k=k, in_channels=channels[1], output_channels=channels[1], bottleneck_ratio=4)
+
+ self.fp1 = PointNetFeaturePropagation(in_channel=channels[1] + channels[0], mlp=[channels[0], channels[0]], att=[channels[1], channels[1]//2, channels[1]//8])
+ self.up_cic2 = CIC(npoint=1024, radius=0.1, k=k, in_channels=channels[0]+3, output_channels=channels[0], bottleneck_ratio=4)
+ self.up_cic1 = CIC(npoint=1024, radius=0.1, k=k, in_channels=channels[0], output_channels=channels[0], bottleneck_ratio=4)
+
+ self.point_conv = nn.Sequential(
+ nn.Conv2d(9, additional_channel, kernel_size=1, bias=False),
+ nn.BatchNorm2d(additional_channel),
+ nn.LeakyReLU(negative_slope=0.2, inplace=True))
+
+ self.conv1 = nn.Conv1d(channels[0], num_classes, 1)
+
+ def forward(self, xyz):
+ l0_points = self.lpfa(xyz, xyz)
+
+ l1_xyz, l1_points = self.cic11(xyz, l0_points)
+ l1_xyz, l1_points = self.cic12(l1_xyz, l1_points)
+
+ l2_xyz, l2_points = self.cic21(l1_xyz, l1_points)
+ l2_xyz, l2_points = self.cic22(l2_xyz, l2_points)
+
+ l3_xyz, l3_points = self.cic31(l2_xyz, l2_points)
+ l3_xyz, l3_points = self.cic32(l3_xyz, l3_points)
+
+ l4_xyz, l4_points = self.cic41(l3_xyz, l3_points)
+ l4_xyz, l4_points = self.cic42(l4_xyz, l4_points)
+ #l4_xyz, l4_points = self.cic43(l4_xyz, l4_points)
+
+ l3_points = self.fp3(l3_xyz, l4_xyz, l3_points, l4_points)
+ l3_xyz, l3_points = self.up_cic4(l3_xyz, l3_points)
+ l2_points = self.fp2(l2_xyz, l3_xyz, l2_points, l3_points)
+ l2_xyz, l2_points = self.up_cic3(l2_xyz, l2_points)
+ l1_points = self.fp1(l1_xyz, l2_xyz, l1_points, l2_points)
+
+ x = torch.cat((l1_xyz, l1_points), dim=1)
+
+ xyz, x = self.up_cic2(l1_xyz, x)
+ xyz, x = self.up_cic1(xyz, x)
+
+ x = self.conv1(x)
+ return x
diff --git a/zoo/CurveNet/core/models/curvenet_seg.py b/zoo/CurveNet/core/models/curvenet_seg.py
new file mode 100644
index 0000000..5a8ca91
--- /dev/null
+++ b/zoo/CurveNet/core/models/curvenet_seg.py
@@ -0,0 +1,131 @@
+"""
+@Author: Tiange Xiang
+@Contact: txia7609@uni.sydney.edu.au
+@File: curvenet_seg.py
+@Time: 2021/01/21 3:10 PM
+"""
+
+import torch.nn as nn
+import torch.nn.functional as F
+from .curvenet_util import *
+
+
+curve_config = {
+ 'default': [[100, 5], [100, 5], None, None, None]
+ }
+
+class CurveNet(nn.Module):
+ def __init__(self, num_classes=50, category=16, k=32, setting='default'):
+ super(CurveNet, self).__init__()
+
+ assert setting in curve_config
+
+ additional_channel = 32
+ self.lpfa = LPFA(9, additional_channel, k=k, mlp_num=1, initial=True)
+
+ # encoder
+ self.cic11 = CIC(npoint=2048, radius=0.2, k=k, in_channels=additional_channel, output_channels=64, bottleneck_ratio=2, curve_config=curve_config[setting][0])
+ self.cic12 = CIC(npoint=2048, radius=0.2, k=k, in_channels=64, output_channels=64, bottleneck_ratio=4, curve_config=curve_config[setting][0])
+
+ self.cic21 = CIC(npoint=512, radius=0.4, k=k, in_channels=64, output_channels=128, bottleneck_ratio=2, curve_config=curve_config[setting][1])
+ self.cic22 = CIC(npoint=512, radius=0.4, k=k, in_channels=128, output_channels=128, bottleneck_ratio=4, curve_config=curve_config[setting][1])
+
+ self.cic31 = CIC(npoint=128, radius=0.8, k=k, in_channels=128, output_channels=256, bottleneck_ratio=2, curve_config=curve_config[setting][2])
+ self.cic32 = CIC(npoint=128, radius=0.8, k=k, in_channels=256, output_channels=256, bottleneck_ratio=4, curve_config=curve_config[setting][2])
+
+ self.cic41 = CIC(npoint=32, radius=1.2, k=31, in_channels=256, output_channels=512, bottleneck_ratio=2, curve_config=curve_config[setting][3])
+ self.cic42 = CIC(npoint=32, radius=1.2, k=31, in_channels=512, output_channels=512, bottleneck_ratio=4, curve_config=curve_config[setting][3])
+
+ self.cic51 = CIC(npoint=8, radius=2.0, k=7, in_channels=512, output_channels=1024, bottleneck_ratio=2, curve_config=curve_config[setting][4])
+ self.cic52 = CIC(npoint=8, radius=2.0, k=7, in_channels=1024, output_channels=1024, bottleneck_ratio=4, curve_config=curve_config[setting][4])
+ self.cic53 = CIC(npoint=8, radius=2.0, k=7, in_channels=1024, output_channels=1024, bottleneck_ratio=4, curve_config=curve_config[setting][4])
+
+ # decoder
+ self.fp4 = PointNetFeaturePropagation(in_channel=1024 + 512, mlp=[512, 512], att=[1024, 512, 256])
+ self.up_cic5 = CIC(npoint=32, radius=1.2, k=31, in_channels=512, output_channels=512, bottleneck_ratio=4)
+
+ self.fp3 = PointNetFeaturePropagation(in_channel=512 + 256, mlp=[256, 256], att=[512, 256, 128])
+ self.up_cic4 = CIC(npoint=128, radius=0.8, k=k, in_channels=256, output_channels=256, bottleneck_ratio=4)
+
+ self.fp2 = PointNetFeaturePropagation(in_channel=256 + 128, mlp=[128, 128], att=[256, 128, 64])
+ self.up_cic3 = CIC(npoint=512, radius=0.4, k=k, in_channels=128, output_channels=128, bottleneck_ratio=4)
+
+ self.fp1 = PointNetFeaturePropagation(in_channel=128 + 64, mlp=[64, 64], att=[128, 64, 32])
+ self.up_cic2 = CIC(npoint=2048, radius=0.2, k=k, in_channels=128+64+64+category+3, output_channels=256, bottleneck_ratio=4)
+ self.up_cic1 = CIC(npoint=2048, radius=0.2, k=k, in_channels=256, output_channels=256, bottleneck_ratio=4)
+
+
+ self.global_conv2 = nn.Sequential(
+ nn.Conv1d(1024, 128, kernel_size=1, bias=False),
+ nn.BatchNorm1d(128),
+ nn.LeakyReLU(negative_slope=0.2))
+ self.global_conv1 = nn.Sequential(
+ nn.Conv1d(512, 64, kernel_size=1, bias=False),
+ nn.BatchNorm1d(64),
+ nn.LeakyReLU(negative_slope=0.2))
+
+ self.conv1 = nn.Conv1d(256, 256, 1, bias=False)
+ self.bn1 = nn.BatchNorm1d(256)
+ self.drop1 = nn.Dropout(0.5)
+ self.conv2 = nn.Conv1d(256, num_classes, 1)
+ self.se = nn.Sequential(nn.AdaptiveAvgPool1d(1),
+ nn.Conv1d(256, 256//8, 1, bias=False),
+ nn.BatchNorm1d(256//8),
+ nn.LeakyReLU(negative_slope=0.2),
+ nn.Conv1d(256//8, 256, 1, bias=False),
+ nn.Sigmoid())
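+        # self.se is a squeeze-and-excitation style channel gate: global average
+        # pooling over points, a 256 -> 32 -> 256 bottleneck, and a sigmoid that
+        # yields per-channel weights used to rescale the features in forward().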
+
+ def forward(self, xyz, l=None):
+ batch_size = xyz.size(0)
+
+ l0_points = self.lpfa(xyz, xyz)
+
+ l1_xyz, l1_points = self.cic11(xyz, l0_points)
+ l1_xyz, l1_points = self.cic12(l1_xyz, l1_points)
+
+ l2_xyz, l2_points = self.cic21(l1_xyz, l1_points)
+ l2_xyz, l2_points = self.cic22(l2_xyz, l2_points)
+
+ l3_xyz, l3_points = self.cic31(l2_xyz, l2_points)
+ l3_xyz, l3_points = self.cic32(l3_xyz, l3_points)
+
+ l4_xyz, l4_points = self.cic41(l3_xyz, l3_points)
+ l4_xyz, l4_points = self.cic42(l4_xyz, l4_points)
+
+ l5_xyz, l5_points = self.cic51(l4_xyz, l4_points)
+ l5_xyz, l5_points = self.cic52(l5_xyz, l5_points)
+ l5_xyz, l5_points = self.cic53(l5_xyz, l5_points)
+
+ # global features
+ emb1 = self.global_conv1(l4_points)
+ emb1 = emb1.max(dim=-1, keepdim=True)[0] # bs, 64, 1
+ emb2 = self.global_conv2(l5_points)
+ emb2 = emb2.max(dim=-1, keepdim=True)[0] # bs, 128, 1
+
+ # Feature Propagation layers
+ l4_points = self.fp4(l4_xyz, l5_xyz, l4_points, l5_points)
+ l4_xyz, l4_points = self.up_cic5(l4_xyz, l4_points)
+
+ l3_points = self.fp3(l3_xyz, l4_xyz, l3_points, l4_points)
+ l3_xyz, l3_points = self.up_cic4(l3_xyz, l3_points)
+
+ l2_points = self.fp2(l2_xyz, l3_xyz, l2_points, l3_points)
+ l2_xyz, l2_points = self.up_cic3(l2_xyz, l2_points)
+
+ l1_points = self.fp1(l1_xyz, l2_xyz, l1_points, l2_points)
+
+ if l is not None:
+ l = l.view(batch_size, -1, 1)
+ emb = torch.cat((emb1, emb2, l), dim=1) # bs, 128 + 16, 1
+ l = emb.expand(-1,-1, xyz.size(-1))
+ x = torch.cat((l1_xyz, l1_points, l), dim=1)
+
+ xyz, x = self.up_cic2(l1_xyz, x)
+ xyz, x = self.up_cic1(xyz, x)
+
+ x = F.leaky_relu(self.bn1(self.conv1(x)), 0.2, inplace=True)
+ se = self.se(x)
+ x = x * se
+ x = self.drop1(x)
+ x = self.conv2(x)
+ return x
diff --git a/zoo/CurveNet/core/models/curvenet_util.py b/zoo/CurveNet/core/models/curvenet_util.py
new file mode 100644
index 0000000..c9c1d91
--- /dev/null
+++ b/zoo/CurveNet/core/models/curvenet_util.py
@@ -0,0 +1,488 @@
+"""
+@Author: Yue Wang
+@Contact: yuewangx@mit.edu
+@File: pointnet_util.py
+@Time: 2018/10/13 10:39 PM
+
+Modified by
+@Author: Tiange Xiang
+@Contact: txia7609@uni.sydney.edu.au
+@Time: 2021/01/21 3:10 PM
+"""
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from time import time
+import numpy as np
+
+from .walk import Walk
+
+
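+# knn: for point features/coordinates x of shape (batch_size, dims, num_points),
+# return the indices of the k+1 nearest neighbours of every point, with shape
+# (batch_size, num_points, k+1). The extra neighbour is the point itself
+# (distance 0), which callers slice away (e.g. idx[:, :, 1:]) to avoid
+# self-loops.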
+def knn(x, k):
+ k = k + 1
+ inner = -2 * torch.matmul(x.transpose(2, 1), x)
+ xx = torch.sum(x**2, dim=1, keepdim=True)
+ pairwise_distance = -xx - inner - xx.transpose(2, 1)
+
+ idx = pairwise_distance.topk(k=k, dim=-1)[1] # (batch_size, num_points, k)
+ return idx
+
+def normal_knn(x, k):
+ inner = -2 * torch.matmul(x.transpose(2, 1), x)
+ xx = torch.sum(x**2, dim=1, keepdim=True)
+ pairwise_distance = -xx - inner - xx.transpose(2, 1)
+
+ idx = pairwise_distance.topk(k=k, dim=-1)[1] # (batch_size, num_points, k)
+ return idx
+
+def pc_normalize(pc):
+ l = pc.shape[0]
+ centroid = np.mean(pc, axis=0)
+ pc = pc - centroid
+ m = np.max(np.sqrt(np.sum(pc**2, axis=1)))
+ pc = pc / m
+ return pc
+
+def square_distance(src, dst):
+ """
+    Calculate the squared Euclidean distance between each pair of points,
+    via ||src - dst||^2 = ||src||^2 - 2 * src . dst + ||dst||^2.
+ """
+ B, N, _ = src.shape
+ _, M, _ = dst.shape
+ dist = -2 * torch.matmul(src, dst.permute(0, 2, 1))
+ dist += torch.sum(src ** 2, -1).view(B, N, 1)
+ dist += torch.sum(dst ** 2, -1).view(B, 1, M)
+ return dist
+
+def index_points(points, idx):
+ """
+
+ Input:
+ points: input points data, [B, N, C]
+ idx: sample index data, [B, S]
+ Return:
+ new_points:, indexed points data, [B, S, C]
+ """
+ device = points.device
+ B = points.shape[0]
+ view_shape = list(idx.shape)
+ view_shape[1:] = [1] * (len(view_shape) - 1)
+ repeat_shape = list(idx.shape)
+ repeat_shape[0] = 1
+ batch_indices = torch.arange(B, dtype=torch.long).to(device).view(view_shape).repeat(repeat_shape)
+ new_points = points[batch_indices, idx, :]
+ return new_points
+
+
+def farthest_point_sample(xyz, npoint):
+ """
+ Input:
+ xyz: pointcloud data, [B, N, 3]
+ npoint: number of samples
+ Return:
+ centroids: sampled pointcloud index, [B, npoint]
+ """
+ device = xyz.device
+ B, N, C = xyz.shape
+ centroids = torch.zeros(B, npoint, dtype=torch.long).to(device)
+ distance = torch.ones(B, N).to(device) * 1e10
+    farthest = torch.randint(0, N, (B,), dtype=torch.long).to(device) * 0  # multiplying by 0 always starts FPS from index 0, keeping sampling deterministic
+ batch_indices = torch.arange(B, dtype=torch.long).to(device)
+ for i in range(npoint):
+ centroids[:, i] = farthest
+ centroid = xyz[batch_indices, farthest, :].view(B, 1, 3)
+ dist = torch.sum((xyz - centroid) ** 2, -1)
+ mask = dist < distance
+ distance[mask] = dist[mask]
+ farthest = torch.max(distance, -1)[1]
+ return centroids
+
+def query_ball_point(radius, nsample, xyz, new_xyz):
+ """
+ Input:
+ radius: local region radius
+ nsample: max sample number in local region
+ xyz: all points, [B, N, 3]
+ new_xyz: query points, [B, S, 3]
+ Return:
+ group_idx: grouped points index, [B, S, nsample]
+ """
+ device = xyz.device
+ B, N, C = xyz.shape
+ _, S, _ = new_xyz.shape
+ group_idx = torch.arange(N, dtype=torch.long).to(device).view(1, 1, N).repeat([B, S, 1])
+ sqrdists = square_distance(new_xyz, xyz)
+ group_idx[sqrdists > radius ** 2] = N
+ group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample]
+ group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])
+ mask = group_idx == N
+ group_idx[mask] = group_first[mask]
+ return group_idx
+
+def sample_and_group(npoint, radius, nsample, xyz, points, returnfps=False):
+ """
+ Input:
+ npoint:
+ radius:
+ nsample:
+ xyz: input points position data, [B, N, 3]
+ points: input points data, [B, N, D]
+ Return:
+ new_xyz: sampled points position data, [B, npoint, nsample, 3]
+ new_points: sampled points data, [B, npoint, nsample, 3+D]
+ """
+ new_xyz = index_points(xyz, farthest_point_sample(xyz, npoint))
+ torch.cuda.empty_cache()
+
+ idx = query_ball_point(radius, nsample, xyz, new_xyz)
+ torch.cuda.empty_cache()
+
+ new_points = index_points(points, idx)
+ torch.cuda.empty_cache()
+
+ if returnfps:
+ return new_xyz, new_points, idx
+ else:
+ return new_xyz, new_points
+
+class Attention_block(nn.Module):
+ '''
+ Used in attention U-Net.
+ '''
+ def __init__(self,F_g,F_l,F_int):
+ super(Attention_block,self).__init__()
+ self.W_g = nn.Sequential(
+ nn.Conv1d(F_g, F_int, kernel_size=1,stride=1,padding=0,bias=True),
+ nn.BatchNorm1d(F_int)
+ )
+
+ self.W_x = nn.Sequential(
+ nn.Conv1d(F_l, F_int, kernel_size=1,stride=1,padding=0,bias=True),
+ nn.BatchNorm1d(F_int)
+ )
+
+ self.psi = nn.Sequential(
+ nn.Conv1d(F_int, 1, kernel_size=1,stride=1,padding=0,bias=True),
+ nn.BatchNorm1d(1),
+ nn.Sigmoid()
+ )
+
+ def forward(self,g,x):
+ g1 = self.W_g(g)
+ x1 = self.W_x(x)
+ psi = F.leaky_relu(g1+x1, negative_slope=0.2)
+ psi = self.psi(psi)
+
+ return psi, 1. - psi
+
+
+class LPFA(nn.Module):
+ def __init__(self, in_channel, out_channel, k, mlp_num=2, initial=False):
+ super(LPFA, self).__init__()
+ self.k = k
+ self.device = torch.device('cuda')
+ self.initial = initial
+
+ if not initial:
+ self.xyz2feature = nn.Sequential(
+ nn.Conv2d(9, in_channel, kernel_size=1, bias=False),
+ nn.BatchNorm2d(in_channel))
+
+ self.mlp = []
+ for _ in range(mlp_num):
+ self.mlp.append(nn.Sequential(nn.Conv2d(in_channel, out_channel, 1, bias=False),
+ nn.BatchNorm2d(out_channel),
+ nn.LeakyReLU(0.2)))
+ in_channel = out_channel
+ self.mlp = nn.Sequential(*self.mlp)
+
+ def forward(self, x, xyz, idx=None):
+ x = self.group_feature(x, xyz, idx)
+ x = self.mlp(x)
+
+ if self.initial:
+ x = x.max(dim=-1, keepdim=False)[0]
+ else:
+ x = x.mean(dim=-1, keepdim=False)
+
+ return x
+
+ def group_feature(self, x, xyz, idx):
+ batch_size, num_dims, num_points = x.size()
+
+ if idx is None:
+ idx = knn(xyz, k=self.k)[:,:,:self.k] # (batch_size, num_points, k)
+
+        idx_base = torch.arange(0, batch_size, device=x.device).view(-1, 1, 1) * num_points
+ idx = idx + idx_base
+ idx = idx.view(-1)
+
+ xyz = xyz.transpose(2, 1).contiguous() # bs, n, 3
+ point_feature = xyz.view(batch_size * num_points, -1)[idx, :]
+ point_feature = point_feature.view(batch_size, num_points, self.k, -1) # bs, n, k, 3
+ points = xyz.view(batch_size, num_points, 1, 3).expand(-1, -1, self.k, -1) # bs, n, k, 3
+
+ point_feature = torch.cat((points, point_feature, point_feature - points),
+ dim=3).permute(0, 3, 1, 2).contiguous()
+
+ if self.initial:
+ return point_feature
+
+ x = x.transpose(2, 1).contiguous() # bs, n, c
+ feature = x.view(batch_size * num_points, -1)[idx, :]
+ feature = feature.view(batch_size, num_points, self.k, num_dims) #bs, n, k, c
+ x = x.view(batch_size, num_points, 1, num_dims)
+ feature = feature - x
+
+ feature = feature.permute(0, 3, 1, 2).contiguous()
+ point_feature = self.xyz2feature(point_feature) #bs, c, n, k
+ feature = F.leaky_relu(feature + point_feature, 0.2)
+ return feature #bs, c, n, k
+
+
+class PointNetFeaturePropagation(nn.Module):
+ def __init__(self, in_channel, mlp, att=None):
+ super(PointNetFeaturePropagation, self).__init__()
+ self.mlp_convs = nn.ModuleList()
+ self.mlp_bns = nn.ModuleList()
+ last_channel = in_channel
+ self.att = None
+ if att is not None:
+ self.att = Attention_block(F_g=att[0],F_l=att[1],F_int=att[2])
+
+ for out_channel in mlp:
+ self.mlp_convs.append(nn.Conv1d(last_channel, out_channel, 1))
+ self.mlp_bns.append(nn.BatchNorm1d(out_channel))
+ last_channel = out_channel
+
+ def forward(self, xyz1, xyz2, points1, points2):
+ """
+ Input:
+ xyz1: input points position data, [B, C, N]
+ xyz2: sampled input points position data, [B, C, S], skipped xyz
+ points1: input points data, [B, D, N]
+ points2: input points data, [B, D, S], skipped features
+ Return:
+ new_points: upsampled points data, [B, D', N]
+ """
+ xyz1 = xyz1.permute(0, 2, 1)
+ xyz2 = xyz2.permute(0, 2, 1)
+
+ points2 = points2.permute(0, 2, 1)
+ B, N, C = xyz1.shape
+ _, S, _ = xyz2.shape
+
+ if S == 1:
+ interpolated_points = points2.repeat(1, N, 1)
+ else:
+ dists = square_distance(xyz1, xyz2)
+ dists, idx = dists.sort(dim=-1)
+ dists, idx = dists[:, :, :3], idx[:, :, :3] # [B, N, 3]
+
+ dist_recip = 1.0 / (dists + 1e-8)
+ norm = torch.sum(dist_recip, dim=2, keepdim=True)
+ weight = dist_recip / norm
+ interpolated_points = torch.sum(index_points(points2, idx) * weight.view(B, N, 3, 1), dim=2)
+
+ # skip attention
+ if self.att is not None:
+ psix, psig = self.att(interpolated_points.permute(0, 2, 1), points1)
+ points1 = points1 * psix
+
+ if points1 is not None:
+ points1 = points1.permute(0, 2, 1)
+ new_points = torch.cat([points1, interpolated_points], dim=-1)
+ else:
+ new_points = interpolated_points
+
+ new_points = new_points.permute(0, 2, 1)
+
+ for i, conv in enumerate(self.mlp_convs):
+ bn = self.mlp_bns[i]
+ new_points = F.leaky_relu(bn(conv(new_points)), 0.2)
+
+ return new_points
+
+
+class CIC(nn.Module):
+ def __init__(self, npoint, radius, k, in_channels, output_channels, bottleneck_ratio=2, mlp_num=2, curve_config=None):
+ super(CIC, self).__init__()
+ self.in_channels = in_channels
+ self.output_channels = output_channels
+ self.bottleneck_ratio = bottleneck_ratio
+ self.radius = radius
+ self.k = k
+ self.npoint = npoint
+
+ planes = in_channels // bottleneck_ratio
+
+ self.use_curve = curve_config is not None
+ if self.use_curve:
+ self.curveaggregation = CurveAggregation(planes)
+ self.curvegrouping = CurveGrouping(planes, k, curve_config[0], curve_config[1])
+
+ self.conv1 = nn.Sequential(
+ nn.Conv1d(in_channels,
+ planes,
+ kernel_size=1,
+ bias=False),
+ nn.BatchNorm1d(in_channels // bottleneck_ratio),
+ nn.LeakyReLU(negative_slope=0.2, inplace=True))
+
+ self.conv2 = nn.Sequential(
+ nn.Conv1d(planes, output_channels, kernel_size=1, bias=False),
+ nn.BatchNorm1d(output_channels))
+
+ if in_channels != output_channels:
+ self.shortcut = nn.Sequential(
+ nn.Conv1d(in_channels,
+ output_channels,
+ kernel_size=1,
+ bias=False),
+ nn.BatchNorm1d(output_channels))
+
+ self.relu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
+
+ self.maxpool = MaskedMaxPool(npoint, radius, k)
+
+ self.lpfa = LPFA(planes, planes, k, mlp_num=mlp_num, initial=False)
+
+ def forward(self, xyz, x):
+
+ # max pool
+ if xyz.size(-1) != self.npoint:
+ xyz, x = self.maxpool(
+ xyz.transpose(1, 2).contiguous(), x)
+ xyz = xyz.transpose(1, 2)
+
+ shortcut = x
+ x = self.conv1(x) # bs, c', n
+
+ idx = knn(xyz, self.k)
+
+ if self.use_curve:
+ # curve grouping
+ curves = self.curvegrouping(x, xyz, idx[:,:,1:]) # avoid self-loop
+
+ # curve aggregation
+ x = self.curveaggregation(x, curves)
+
+        x = self.lpfa(x, xyz, idx=idx[:,:,:self.k]) # bs, c', n
+
+ x = self.conv2(x) # bs, c, n
+
+ if self.in_channels != self.output_channels:
+ shortcut = self.shortcut(shortcut)
+
+ x = self.relu(x + shortcut)
+
+ return xyz, x
+
+
+class CurveAggregation(nn.Module):
+ def __init__(self, in_channel):
+ super(CurveAggregation, self).__init__()
+ self.in_channel = in_channel
+ mid_feature = in_channel // 2
+ self.conva = nn.Conv1d(in_channel,
+ mid_feature,
+ kernel_size=1,
+ bias=False)
+ self.convb = nn.Conv1d(in_channel,
+ mid_feature,
+ kernel_size=1,
+ bias=False)
+ self.convc = nn.Conv1d(in_channel,
+ mid_feature,
+ kernel_size=1,
+ bias=False)
+ self.convn = nn.Conv1d(mid_feature,
+ mid_feature,
+ kernel_size=1,
+ bias=False)
+ self.convl = nn.Conv1d(mid_feature,
+ mid_feature,
+ kernel_size=1,
+ bias=False)
+ self.convd = nn.Sequential(
+ nn.Conv1d(mid_feature * 2,
+ in_channel,
+ kernel_size=1,
+ bias=False),
+ nn.BatchNorm1d(in_channel))
+ self.line_conv_att = nn.Conv2d(in_channel,
+ 1,
+ kernel_size=1,
+ bias=False)
+
+ def forward(self, x, curves):
+ curves_att = self.line_conv_att(curves) # bs, 1, c_n, c_l
+
+ curver_inter = torch.sum(curves * F.softmax(curves_att, dim=-1), dim=-1) #bs, c, c_n
+ curves_intra = torch.sum(curves * F.softmax(curves_att, dim=-2), dim=-2) #bs, c, c_l
+
+ curver_inter = self.conva(curver_inter) # bs, mid, n
+ curves_intra = self.convb(curves_intra) # bs, mid ,n
+
+ x_logits = self.convc(x).transpose(1, 2).contiguous()
+ x_inter = F.softmax(torch.bmm(x_logits, curver_inter), dim=-1) # bs, n, c_n
+        x_intra = F.softmax(torch.bmm(x_logits, curves_intra), dim=-1) # bs, n, c_l
+
+
+ curver_inter = self.convn(curver_inter).transpose(1, 2).contiguous()
+ curves_intra = self.convl(curves_intra).transpose(1, 2).contiguous()
+
+ x_inter = torch.bmm(x_inter, curver_inter)
+ x_intra = torch.bmm(x_intra, curves_intra)
+
+ curve_features = torch.cat((x_inter, x_intra),dim=-1).transpose(1, 2).contiguous()
+ x = x + self.convd(curve_features)
+
+ return F.leaky_relu(x, negative_slope=0.2)
+
+
+class CurveGrouping(nn.Module):
+ def __init__(self, in_channel, k, curve_num, curve_length):
+ super(CurveGrouping, self).__init__()
+ self.curve_num = curve_num
+ self.curve_length = curve_length
+ self.in_channel = in_channel
+ self.k = k
+
+ self.att = nn.Conv1d(in_channel, 1, kernel_size=1, bias=False)
+
+ self.walk = Walk(in_channel, k, curve_num, curve_length)
+
+ def forward(self, x, xyz, idx):
+ # starting point selection in self attention style
+ x_att = torch.sigmoid(self.att(x))
+ x = x * x_att
+
+ _, start_index = torch.topk(x_att,
+ self.curve_num,
+ dim=2,
+ sorted=False)
+ start_index = start_index.squeeze().unsqueeze(2)
+
+ curves = self.walk(xyz, x, idx, start_index) #bs, c, c_n, c_l
+
+ return curves
+
+
+class MaskedMaxPool(nn.Module):
+ def __init__(self, npoint, radius, k):
+ super(MaskedMaxPool, self).__init__()
+ self.npoint = npoint
+ self.radius = radius
+ self.k = k
+
+ def forward(self, xyz, features):
+ sub_xyz, neighborhood_features = sample_and_group(self.npoint, self.radius, self.k, xyz, features.transpose(1,2))
+
+ neighborhood_features = neighborhood_features.permute(0, 3, 1, 2).contiguous()
+ sub_features = F.max_pool2d(
+ neighborhood_features, kernel_size=[1, neighborhood_features.shape[3]]
+ ) # bs, c, n, 1
+ sub_features = torch.squeeze(sub_features, -1) # bs, c, n
+ return sub_xyz, sub_features
diff --git a/zoo/CurveNet/core/models/walk.py b/zoo/CurveNet/core/models/walk.py
new file mode 100644
index 0000000..32abfe1
--- /dev/null
+++ b/zoo/CurveNet/core/models/walk.py
@@ -0,0 +1,156 @@
+"""
+@Author: Tiange Xiang
+@Contact: txia7609@uni.sydney.edu.au
+@File: walk.py
+@Time: 2021/01/21 3:10 PM
+"""
+
+import numpy as np
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+
+def batched_index_select(input, dim, index):
+ views = [input.shape[0]] + \
+ [1 if i != dim else -1 for i in range(1, len(input.shape))]
+ expanse = list(input.shape)
+ expanse[0] = -1
+ expanse[dim] = -1
+ index = index.view(views).expand(expanse)
+ return torch.gather(input, dim, index)
+
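+# Straight-through estimator: the forward pass outputs the hard one-hot
+# argmax y_hard, while (y_hard - y).detach() + y routes gradients through the
+# soft softmax y, keeping the discrete choice differentiable.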
+def gumbel_softmax(logits, dim, temperature=1):
+ """
+    Straight-through (ST) Gumbel-softmax without random Gumbel sampling.
+    input: [*, n_class] logits
+    return: [*, n_class] one-hot vector with soft gradients
+ """
+ y = F.softmax(logits / temperature, dim=dim)
+
+ shape = y.size()
+ _, ind = y.max(dim=-1)
+ y_hard = torch.zeros_like(y).view(-1, shape[-1])
+ y_hard.scatter_(1, ind.view(-1, 1), 1)
+ y_hard = y_hard.view(*shape)
+
+ y_hard = (y_hard - y).detach() + y
+ return y_hard
+
+class Walk(nn.Module):
+ '''
+ Walk in the cloud
+ '''
+ def __init__(self, in_channel, k, curve_num, curve_length):
+ super(Walk, self).__init__()
+ self.curve_num = curve_num
+ self.curve_length = curve_length
+ self.k = k
+
+ self.agent_mlp = nn.Sequential(
+ nn.Conv2d(in_channel * 2,
+ 1,
+ kernel_size=1,
+ bias=False), nn.BatchNorm2d(1))
+ self.momentum_mlp = nn.Sequential(
+ nn.Conv1d(in_channel * 2,
+ 2,
+ kernel_size=1,
+ bias=False), nn.BatchNorm1d(2))
+
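+    # crossover_suppression down-weights candidate next points whose direction
+    # opposes the previous step: the cosine between the last movement and each
+    # candidate offset is shifted to [0, 1], so back-tracking neighbours get
+    # weights near 0 while forward-pointing ones saturate at 1.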
+ def crossover_suppression(self, cur, neighbor, bn, n, k):
+        # cur: bs*n, c (direction of the previous step in feature space)
+        # neighbor: bs*n, c, k (directions to the k candidate next points)
+ neighbor = neighbor.detach()
+ cur = cur.unsqueeze(-1).detach()
+ dot = torch.bmm(cur.transpose(1,2), neighbor) # bs*n, 1, k
+ norm1 = torch.norm(cur, dim=1, keepdim=True)
+ norm2 = torch.norm(neighbor, dim=1, keepdim=True)
+ divider = torch.clamp(norm1 * norm2, min=1e-8)
+ ans = torch.div(dot, divider).squeeze() # bs*n, k
+
+        # shift cosine similarity from [-1, 1] to [0, 2], then clamp to [0, 1]
+ ans = 1. + ans
+ ans = torch.clamp(ans, 0., 1.0)
+
+ return ans.detach()
+
+ def forward(self, xyz, x, adj, cur):
+ bn, c, tot_points = x.size()
+
+ # raw point coordinates
+        xyz = xyz.transpose(1,2).contiguous() # bs, n, 3
+
+ # point features
+ x = x.transpose(1,2).contiguous() # bs, n, c
+
+ flatten_x = x.view(bn * tot_points, -1)
+        batch_offset = torch.arange(0, bn, device=x.device).detach() * tot_points
+
+ # indices of neighbors for the starting points
+ tmp_adj = (adj + batch_offset.view(-1,1,1)).view(adj.size(0)*adj.size(1),-1) #bs, n, k
+
+        # batch flattened indices for the starting points
+ flatten_cur = (cur + batch_offset.view(-1,1,1)).view(-1)
+
+ curves = []
+
+ # one step at a time
+ for step in range(self.curve_length):
+
+ if step == 0:
+                # get starting point features using flattened indices
+ starting_points = flatten_x[flatten_cur, :].contiguous()
+                pre_feature = starting_points.view(bn, self.curve_num, -1, 1).transpose(1,2) # bs, c, n, 1
+ else:
+ # dynamic momentum
+ cat_feature = torch.cat((cur_feature.squeeze(), pre_feature.squeeze()),dim=1)
+ att_feature = F.softmax(self.momentum_mlp(cat_feature),dim=1).view(bn, 1, self.curve_num, 2) # bs, 1, n, 2
+ cat_feature = torch.cat((cur_feature, pre_feature),dim=-1) # bs, c, n, 2
+
+ # update curve descriptor
+                pre_feature = torch.sum(cat_feature * att_feature, dim=-1, keepdim=True) # bs, c, n, 1
+ pre_feature_cos = pre_feature.transpose(1,2).contiguous().view(bn * self.curve_num, -1)
+
+ pick_idx = tmp_adj[flatten_cur] # bs*n, k
+
+ # get the neighbors of current points
+ pick_values = flatten_x[pick_idx.view(-1),:]
+
+            # reshape to fit crossover suppression below
+ pick_values_cos = pick_values.view(bn * self.curve_num, self.k, c)
+ pick_values = pick_values_cos.view(bn, self.curve_num, self.k, c)
+ pick_values_cos = pick_values_cos.transpose(1,2).contiguous()
+
+ pick_values = pick_values.permute(0,3,1,2) # bs, c, n, k
+
+ pre_feature_expand = pre_feature.expand_as(pick_values)
+
+ # concat current point features with curve descriptors
+ pre_feature_expand = torch.cat((pick_values, pre_feature_expand),dim=1)
+
+ # which node to pick next?
+ pre_feature_expand = self.agent_mlp(pre_feature_expand) # bs, 1, n, k
+
+            if step != 0:
+                # crossover suppression
+ d = self.crossover_suppression(cur_feature_cos - pre_feature_cos,
+ pick_values_cos - cur_feature_cos.unsqueeze(-1),
+ bn, self.curve_num, self.k)
+ d = d.view(bn, self.curve_num, self.k).unsqueeze(1) # bs, 1, n, k
+ pre_feature_expand = torch.mul(pre_feature_expand, d)
+
+ pre_feature_expand = gumbel_softmax(pre_feature_expand, -1) #bs, 1, n, k
+
+ cur_feature = torch.sum(pick_values * pre_feature_expand, dim=-1, keepdim=True) # bs, c, n, 1
+
+ cur_feature_cos = cur_feature.transpose(1,2).contiguous().view(bn * self.curve_num, c)
+
+ cur = torch.argmax(pre_feature_expand, dim=-1).view(-1, 1) # bs * n, 1
+
+ flatten_cur = batched_index_select(pick_idx, 1, cur).squeeze() # bs * n
+
+ # collect curve progress
+ curves.append(cur_feature)
+
+ return torch.cat(curves,dim=-1)
diff --git a/zoo/CurveNet/core/start_cls.sh b/zoo/CurveNet/core/start_cls.sh
new file mode 100644
index 0000000..5e9bd95
--- /dev/null
+++ b/zoo/CurveNet/core/start_cls.sh
@@ -0,0 +1 @@
+python3 main_cls.py --exp_name=curvenet_cls_1
diff --git a/zoo/CurveNet/core/start_normal.sh b/zoo/CurveNet/core/start_normal.sh
new file mode 100644
index 0000000..a129541
--- /dev/null
+++ b/zoo/CurveNet/core/start_normal.sh
@@ -0,0 +1 @@
+python3 main_normal.py --exp_name=curvenet_normal_1
diff --git a/zoo/CurveNet/core/start_part.sh b/zoo/CurveNet/core/start_part.sh
new file mode 100644
index 0000000..4d09f5c
--- /dev/null
+++ b/zoo/CurveNet/core/start_part.sh
@@ -0,0 +1 @@
+python3 main_partseg.py --exp_name=curvenet_seg_1
diff --git a/zoo/CurveNet/core/test_cls.sh b/zoo/CurveNet/core/test_cls.sh
new file mode 100644
index 0000000..5bb5d44
--- /dev/null
+++ b/zoo/CurveNet/core/test_cls.sh
@@ -0,0 +1 @@
+python3 main_cls.py --eval=True --model_path=../pretrained/cls/models/model.t7
diff --git a/zoo/CurveNet/core/test_normal.sh b/zoo/CurveNet/core/test_normal.sh
new file mode 100755
index 0000000..7fdc125
--- /dev/null
+++ b/zoo/CurveNet/core/test_normal.sh
@@ -0,0 +1 @@
+python3 main_normal.py --eval=True --model_path=../pretrained/normal/models/model.t7
diff --git a/zoo/CurveNet/core/test_part.sh b/zoo/CurveNet/core/test_part.sh
new file mode 100644
index 0000000..53fb8b1
--- /dev/null
+++ b/zoo/CurveNet/core/test_part.sh
@@ -0,0 +1 @@
+python3 main_partseg.py --eval=True --model_path=../pretrained/seg/models/model.t7
diff --git a/zoo/CurveNet/core/util.py b/zoo/CurveNet/core/util.py
new file mode 100644
index 0000000..f31efb9
--- /dev/null
+++ b/zoo/CurveNet/core/util.py
@@ -0,0 +1,44 @@
+"""
+@Author: Yue Wang
+@Contact: yuewangx@mit.edu
+@File: util
+@Time: 4/5/19 3:47 PM
+"""
+
+
+import numpy as np
+import torch
+import torch.nn.functional as F
+
+
+def cal_loss(pred, gold, smoothing=True):
+ ''' Calculate cross entropy loss, apply label smoothing if needed. '''
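+    # With smoothing, the one-hot target y is softened to
+    # (1 - eps) * y + eps / (n_class - 1) * (1 - y), and the loss is the cross
+    # entropy between this soft target and log_softmax(pred).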
+
+ gold = gold.contiguous().view(-1)
+
+ if smoothing:
+ eps = 0.2
+ n_class = pred.size(1)
+
+ one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)
+ one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)
+ log_prb = F.log_softmax(pred, dim=1)
+
+ loss = -(one_hot * log_prb).sum(dim=1).mean()
+ else:
+ loss = F.cross_entropy(pred, gold, reduction='mean')
+
+ return loss
+
+
+class IOStream():
+ def __init__(self, path):
+ self.f = open(path, 'a')
+
+ def cprint(self, text):
+ print(text)
+ self.f.write(text+'\n')
+ self.f.flush()
+
+ def close(self):
+ self.f.close()
diff --git a/zoo/CurveNet/poster3.png b/zoo/CurveNet/poster3.png
new file mode 100644
index 0000000..7f6d9ae
Binary files /dev/null and b/zoo/CurveNet/poster3.png differ
diff --git a/zoo/CurveNet/teaser.png b/zoo/CurveNet/teaser.png
new file mode 100644
index 0000000..0be71d2
Binary files /dev/null and b/zoo/CurveNet/teaser.png differ
diff --git a/zoo/GDANet/.gitignore b/zoo/GDANet/.gitignore
new file mode 100644
index 0000000..3377dd9
--- /dev/null
+++ b/zoo/GDANet/.gitignore
@@ -0,0 +1,137 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+pip-wheel-metadata/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+.python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+.idea/
+exp/
+kernels/
+lib/**/*.o
+lib/**/*.ninja*
+dataset/
+*.DS_Store
+.vscode/
diff --git a/zoo/GDANet/PointWOLF.py b/zoo/GDANet/PointWOLF.py
new file mode 100644
index 0000000..cc1c415
--- /dev/null
+++ b/zoo/GDANet/PointWOLF.py
@@ -0,0 +1,172 @@
+"""
+@origin : PointWOLF.py by {Sanghyeok Lee, Sihyeon Kim}
+@Contact: {cat0626, sh_bs15}@korea.ac.kr
+@Time: 2021.09.30
+"""
+
+import torch
+import torch.nn as nn
+import numpy as np
+
+class PointWOLF(object):
+ def __init__(self, args):
+ print("="*10 + "Using PointWolf" + "="*10)
+ self.num_anchor = args.w_num_anchor
+ self.sample_type = args.w_sample_type
+ self.sigma = args.w_sigma
+
+ self.R_range = (-abs(args.w_R_range), abs(args.w_R_range))
+ self.S_range = (1., args.w_S_range)
+ self.T_range = (-abs(args.w_T_range), abs(args.w_T_range))
+
+
+ def __call__(self, pos):
+ """
+ input :
+ pos([N,3])
+
+ output :
+ pos([N,3]) : original pointcloud
+ pos_new([N,3]) : point cloud augmented by PointWOLF
+ """
+ M = self.num_anchor # number of anchor points (M)
+ N, _ = pos.shape # number of input points (N)
+
+ if self.sample_type == 'random':
+ idx = np.random.choice(N,M)#(M)
+ elif self.sample_type == 'fps':
+ idx = self.fps(pos, M) #(M)
+
+ pos_anchor = pos[idx] #(M,3), anchor point
+
+ pos_repeat = np.expand_dims(pos,0).repeat(M, axis=0)#(M,N,3)
+ pos_normalize = np.zeros_like(pos_repeat, dtype=pos.dtype) #(M,N,3)
+
+ #Move to canonical space
+ pos_normalize = pos_repeat - pos_anchor.reshape(M,-1,3)
+
+ #Local transformation at anchor point
+ pos_transformed = self.local_transformation(pos_normalize) #(M,N,3)
+
+ #Move to origin space
+ pos_transformed = pos_transformed + pos_anchor.reshape(M,-1,3) #(M,N,3)
+
+ pos_new = self.kernel_regression(pos, pos_anchor, pos_transformed)
+ pos_new = self.normalize(pos_new)
+
+ return pos.astype('float32'), pos_new.astype('float32')
+
+
+ def kernel_regression(self, pos, pos_anchor, pos_transformed):
+ """
+ input :
+ pos([N,3])
+ pos_anchor([M,3])
+ pos_transformed([M,N,3])
+
+ output :
+ pos_new([N,3]) : Pointcloud after weighted local transformation
+ """
+ M, N, _ = pos_transformed.shape
+
+ #Distance between anchor points & entire points
+ sub = np.expand_dims(pos_anchor,1).repeat(N, axis=1) - np.expand_dims(pos,0).repeat(M, axis=0) #(M,N,3), d
+
+ project_axis = self.get_random_axis(1)
+
+ projection = np.expand_dims(project_axis, axis=1)*np.eye(3)#(1,3,3)
+
+ #Project distance
+ sub = sub @ projection # (M,N,3)
+ sub = np.sqrt(((sub) ** 2).sum(2)) #(M,N)
+
+ #Kernel regression
+ weight = np.exp(-0.5 * (sub ** 2) / (self.sigma ** 2)) #(M,N)
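+ # Gaussian (RBF) weighting: anchor m's influence on point n decays with their projected distance; sigma sets the bandwidth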
+ pos_new = (np.expand_dims(weight,2).repeat(3, axis=-1) * pos_transformed).sum(0) #(N,3)
+ pos_new = (pos_new / weight.sum(0, keepdims=True).T) # normalize by weight
+ return pos_new
+
+
+ def fps(self, pos, npoint):
+ """
+ input :
+ pos([N,3])
+ npoint(int)
+
+ output :
+ centroids([npoints]) : index list for fps
+ """
+ N, _ = pos.shape
+ centroids = np.zeros(npoint, dtype=np.int_) #(M)
+ distance = np.ones(N, dtype=np.float64) * 1e10 #(N)
+ farthest = np.random.randint(0, N, (1,), dtype=np.int_)
+ for i in range(npoint):
+ centroids[i] = farthest # record the newly chosen anchor
+ centroid = pos[farthest, :]
+ dist = ((pos - centroid)**2).sum(-1) # squared distance to the new anchor
+ mask = dist < distance
+ distance[mask] = dist[mask] # distance to the nearest anchor chosen so far
+ farthest = distance.argmax() # next anchor: the point farthest from all chosen anchors
+ return centroids
+
+ def local_transformation(self, pos_normalize):
+ """
+ input :
+ pos([N,3])
+ pos_normalize([M,N,3])
+
+ output :
+ pos_normalize([M,N,3]) : Pointclouds after local transformation centered at M anchor points.
+ """
+ M,N,_ = pos_normalize.shape
+ transformation_dropout = np.random.binomial(1, 0.5, (M,3)) #(M,3)
+ transformation_axis = self.get_random_axis(M) #(M,3)
+
+ degree = np.pi * np.random.uniform(*self.R_range, size=(M,3)) / 180.0 * transformation_dropout[:,0:1] #(M,3), sampling from (-R_range, R_range)
+
+ scale = np.random.uniform(*self.S_range, size=(M,3)) * transformation_dropout[:,1:2] #(M,3), sampling from (1, S_range)
+ scale = scale*transformation_axis
+ scale = scale + 1*(scale==0) # unselected axes keep a scale factor of 1; sampled factors are already >= 1
+
+ trl = np.random.uniform(*self.T_range, size=(M,3)) * transformation_dropout[:,2:3] #(M,3), sampling from (-T_range, T_range)
+ trl = trl*transformation_axis
+
+ #Scaling Matrix
+ S = np.expand_dims(scale, axis=1)*np.eye(3) # scaling factors to diagonal matrices, (M,3) -> (M,3,3)
+ #Rotation Matrix
+ sin = np.sin(degree)
+ cos = np.cos(degree)
+ sx, sy, sz = sin[:,0], sin[:,1], sin[:,2]
+ cx, cy, cz = cos[:,0], cos[:,1], cos[:,2]
+ R = np.stack([cz*cy, cz*sy*sx - sz*cx, cz*sy*cx + sz*sx,
+ sz*cy, sz*sy*sx + cz*cy, sz*sy*cx - cz*sx,
+ -sy, cy*sx, cy*cx], axis=1).reshape(M,3,3)
+
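+ # apply the per-anchor rotation, then scaling, then translation in one batched affine transform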
+ pos_normalize = pos_normalize@R@S + trl.reshape(M,1,3)
+ return pos_normalize
+
+ def get_random_axis(self, n_axis):
+ """
+ input :
+ n_axis(int)
+
+ output :
+ axis([n_axis,3]) : projection axis
+ """
+ axis = np.random.randint(1,8, (n_axis)) # 1(001):z, 2(010):y, 3(011):yz, 4(100):x, 5(101):xz, 6(110):xy, 7(111):xyz
+ m = 3
+ axis = (((axis[:,None] & (1 << np.arange(m)))) > 0).astype(int)
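+ # e.g. a draw of 5 (binary 101) expands to the mask [1, 0, 1], i.e. two of the three axes are selected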
+ return axis
+
+ def normalize(self, pos):
+ """
+ input :
+ pos([N,3])
+
+ output :
+ pos([N,3]) : normalized Pointcloud
+ """
+ pos = pos - pos.mean(axis=-2, keepdims=True)
+ scale = (1 / np.sqrt((pos ** 2).sum(1)).max()) * 0.999999
+ pos = scale * pos
+ return pos
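+
+
+if __name__ == '__main__':
+    # Minimal usage sketch (illustrative only, not part of the original file); the w_* values
+    # simply mirror the PointWOLF defaults exposed in main_cls.py.
+    import argparse
+    _args = argparse.Namespace(w_num_anchor=4, w_sample_type='fps', w_sigma=0.5,
+                               w_R_range=10, w_S_range=3, w_T_range=0.25)
+    pw = PointWOLF(_args)
+    pts = np.random.rand(1024, 3).astype('float32')  # dummy (N, 3) point cloud
+    _, pts_aug = pw(pts)
+    print(pts_aug.shape)  # (1024, 3): the augmented copy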
diff --git a/zoo/GDANet/README.md b/zoo/GDANet/README.md
new file mode 100644
index 0000000..154394a
--- /dev/null
+++ b/zoo/GDANet/README.md
@@ -0,0 +1,92 @@
+# Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud.
+This repository is built for the paper:
+
+__Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud (_AAAI2021_)__ [[arXiv](https://arxiv.org/abs/2012.10921)]
+
+by [Mutian Xu*](https://mutianxu.github.io/), [Junhao Zhang*](https://junhaozhang98.github.io/), Zhipeng Zhou, Mingye Xu, Xiaojuan Qi and Yu Qiao.
+
+
+## Overview
+Geometry-Disentangled Attention Network for 3D object point cloud classification and segmentation (GDANet):
+
+
+## Citation
+If you find the code or trained models useful, please consider citing:
+
+ @misc{xu2021learning,
+ title={Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud},
+ author={Mutian Xu and Junhao Zhang and Zhipeng Zhou and Mingye Xu and Xiaojuan Qi and Yu Qiao},
+ year={2021},
+ eprint={2012.10921},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+
+
+## Installation
+
+
+### Requirements
+* Linux (tested on Ubuntu 14.04/16.04)
+* Python 3.5+
+* PyTorch 1.0+
+
+### Dataset
+* Create the folder to symlink the data later:
+
+ `mkdir -p data`
+
+* __Object Classification__:
+
+ Download and unzip [ModelNet40](https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip) (415M), then symlink its path as follows (alternatively, modify the path [here](https://github.com/mutianxu/GDANet/blob/main/util/data_util.py#L12)):
+
+ `ln -s /path/to/modelnet40/modelnet40_ply_hdf5_2048 data`
+
+* __Shape Part Segmentation__:
+
+ Download and unzip [ShapeNet Part](https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0_normal.zip) (674M), then symlink its path as follows (alternatively, modify the path [here](https://github.com/mutianxu/GDANet/blob/main/util/data_util.py#L70)):
+
+ `ln -s /path/to/shapenet_part/shapenetcore_partanno_segmentation_benchmark_v0_normal data`
+
+## Usage
+
+### Object Classification on ModelNet40
+* Train:
+
+ `python main_cls.py --beta 1.0 --epochs 500`
+
+* Test:
+ * Run the voting evaluation script; after voting, you should get an accuracy of 93.8% if everything goes right:
+
+ `python voting_eval_modelnet.py --model_path 'pretrained/GDANet_ModelNet40_93.4.t7'`
+
+ * You can also directly evaluate our pretrained model without voting to get an accuracy of 93.4%:
+
+ `python main_cls.py --eval True --model_path 'pretrained/GDANet_ModelNet40_93.4.t7'`
+
+### Shape Part Segmentation on ShapeNet Part
+* Train:
+ * Training from scratch:
+
+ `python main_ptseg.py`
+
+ * If you want to resume training from a checkpoint, specify `resume` in the args:
+
+ `python main_ptseg.py --resume True`
+
+* Test:
+
+ You can choose to test the model with the best instance mIoU, class mIoU or accuracy, by specifying `model_type` in the args:
+
+ * `python main_ptseg.py --model_type 'insiou'` (best instance mIoU, default)
+
+ * `python main_ptseg.py --model_type 'clsiou'` (best class mIoU)
+
+ * `python main_ptseg.py --model_type 'acc'` (best accuracy)
+
+
+## Other information
+
+Please contact Mutian Xu (mino1018@outlook.com) or Junhao Zhang (junhaozhang98@gmail.com) for further discussion.
+
+## Acknowledgement
+ This code is partially borrowed from [DGCNN](https://github.com/WangYueFt/dgcnn) and [PointNet++](https://github.com/charlesq34/pointnet2).
\ No newline at end of file
diff --git a/zoo/GDANet/imgs/GDANet.jpg b/zoo/GDANet/imgs/GDANet.jpg
new file mode 100644
index 0000000..a4d9ec3
Binary files /dev/null and b/zoo/GDANet/imgs/GDANet.jpg differ
diff --git a/zoo/GDANet/main_cls.py b/zoo/GDANet/main_cls.py
new file mode 100755
index 0000000..142cac9
--- /dev/null
+++ b/zoo/GDANet/main_cls.py
@@ -0,0 +1,336 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.nn as nn
+import torch.optim as optim
+from torch.optim.lr_scheduler import CosineAnnealingLR
+from util.data_util import ModelNet40
+from model.GDANet_cls import GDANET
+import numpy as np
+from torch.utils.data import DataLoader
+from util.util import cal_loss, IOStream
+import sklearn.metrics as metrics
+from datetime import datetime
+import provider
+import rsmix_provider
+from modelnetc_utils import eval_corrupt_wrapper, ModelNetC
+
+
+# weight initialization:
+def weight_init(m):
+ if isinstance(m, torch.nn.Linear):
+ torch.nn.init.xavier_normal_(m.weight)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv2d):
+ torch.nn.init.xavier_normal_(m.weight)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv1d):
+ torch.nn.init.xavier_normal_(m.weight)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm2d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm1d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+
+
+def _init_():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/' + args.exp_name):
+ os.makedirs('checkpoints/' + args.exp_name)
+
+ # backup the running files:
+ if not args.eval:
+ os.system('cp main_cls.py checkpoints' + '/' + args.exp_name + '/' + 'main.py.backup')
+ os.system('cp model/GDANet_cls.py checkpoints' + '/' + args.exp_name + '/' + 'GDANet_cls.py.backup')
+ os.system('cp util/GDANet_util.py checkpoints' + '/' + args.exp_name + '/' + 'GDANet_util.py.backup')
+ os.system('cp util/data_util.py checkpoints' + '/' + args.exp_name + '/' + 'data_util.py.backup')
+
+
+def train(args, io):
+ train_loader = DataLoader(ModelNet40(partition='train', num_points=args.num_points, args=args if args.pw else None),
+ num_workers=8, batch_size=args.batch_size, shuffle=True, drop_last=True)
+ test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points), num_workers=8,
+ batch_size=args.test_batch_size, shuffle=True, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ model = GDANET().to(device)
+ print(str(model))
+
+ model.apply(weight_init)
+ model = nn.DataParallel(model)
+ print("Let's use", torch.cuda.device_count(), "GPUs!")
+
+ if args.use_sgd:
+ print("Use SGD")
+ opt = optim.SGD(model.parameters(), lr=args.lr * 100, momentum=args.momentum, weight_decay=1e-4)
+ scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=args.lr)
+ else:
+ print("Use Adam")
+ opt = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-4)
+ scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=args.lr / 100)
+
+ criterion = cal_loss
+
+ best_test_acc = 0
+
+ for epoch in range(args.epochs):
+ scheduler.step()
+ ####################
+ # Train
+ ####################
+ train_loss = 0.0
+ count = 0.0
+ model.train()
+ train_pred = []
+ train_true = []
+ for data, label in train_loader:
+ '''
+ implement augmentation
+ '''
+ rsmix = False
+ # for the new augmentation code, the squeeze is removed here because it is applied after augmentation.
+ # in the baseline model, scale, shift and shuffle were the default augmentations
+ if args.rot or args.rdscale or args.shift or args.jitter or args.shuffle or args.rddrop or (
+ args.beta != 0.0):
+ data = data.cpu().numpy()
+ if args.rot:
+ data = provider.rotate_point_cloud(data)
+ data = provider.rotate_perturbation_point_cloud(data)
+ if args.rdscale:
+ tmp_data = provider.random_scale_point_cloud(data[:, :, 0:3])
+ data[:, :, 0:3] = tmp_data
+ if args.shift:
+ tmp_data = provider.shift_point_cloud(data[:, :, 0:3])
+ data[:, :, 0:3] = tmp_data
+ if args.jitter:
+ tmp_data = provider.jitter_point_cloud(data[:, :, 0:3])
+ data[:, :, 0:3] = tmp_data
+ if args.rddrop:
+ data = provider.random_point_dropout(data)
+ if args.shuffle:
+ data = provider.shuffle_points(data)
+ r = np.random.rand(1)
+ if args.beta > 0 and r < args.rsmix_prob:
+ rsmix = True
+ data, lam, label, label_b = rsmix_provider.rsmix(data, label, beta=args.beta, n_sample=args.nsample,
+ KNN=args.knn)
+ if args.rot or args.rdscale or args.shift or args.jitter or args.shuffle or args.rddrop or (
+ args.beta != 0.0):
+ data = torch.FloatTensor(data)
+ if rsmix:
+ lam = torch.FloatTensor(lam)
+ lam, label_b = lam.to(device), label_b.to(device).squeeze()
+ data, label = data.to(device), label.to(device).squeeze()
+
+ if rsmix:
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ opt.zero_grad()
+ logits = model(data)
+
+ loss = 0
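+ # RSMix loss: each mixed sample combines the losses w.r.t. its two source labels, weighted by the mixing ratio lam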
+ for i in range(batch_size):
+ loss_tmp = criterion(logits[i].unsqueeze(0), label[i].unsqueeze(0).long()) * (1 - lam[i]) \
+ + criterion(logits[i].unsqueeze(0), label_b[i].unsqueeze(0).long()) * lam[i]
+ loss += loss_tmp
+ loss = loss / batch_size
+
+ else:
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ opt.zero_grad()
+ logits = model(data)
+ loss = criterion(logits, label)
+
+ loss.backward()
+ opt.step()
+ preds = logits.max(dim=1)[1]
+ count += batch_size
+ train_loss += loss.item() * batch_size
+ train_true.append(label.cpu().numpy())
+ train_pred.append(preds.detach().cpu().numpy())
+ train_true = np.concatenate(train_true)
+ train_pred = np.concatenate(train_pred)
+ outstr = 'Train %d, loss: %.6f, train acc: %.6f, train avg acc: %.6f' % (epoch,
+ train_loss * 1.0 / count,
+ metrics.accuracy_score(
+ train_true, train_pred),
+ metrics.balanced_accuracy_score(
+ train_true, train_pred))
+ io.cprint(outstr)
+
+ ####################
+ # Test
+ ####################
+ test_loss = 0.0
+ count = 0.0
+ model.eval()
+ test_pred = []
+ test_true = []
+ for data, label in test_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ logits = model(data)
+ loss = criterion(logits, label)
+ preds = logits.max(dim=1)[1]
+ count += batch_size
+ test_loss += loss.item() * batch_size
+ test_true.append(label.cpu().numpy())
+ test_pred.append(preds.detach().cpu().numpy())
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)
+ outstr = 'Test %d, loss: %.6f, test acc: %.6f, test avg acc: %.6f' % (epoch,
+ test_loss * 1.0 / count,
+ test_acc,
+ avg_per_class_acc)
+ io.cprint(outstr)
+ if test_acc >= best_test_acc:
+ best_test_acc = test_acc
+ io.cprint('Max Acc:%.6f' % best_test_acc)
+ torch.save(model.state_dict(), 'checkpoints/%s/best_model.t7' % args.exp_name)
+
+
+def test(args, io):
+ test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points),
+ batch_size=args.test_batch_size, shuffle=True, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ model = GDANET().to(device)
+ model = nn.DataParallel(model)
+ model.load_state_dict(torch.load(args.model_path))
+ model = model.eval()
+ test_acc = 0.0
+ count = 0.0
+ test_true = []
+ test_pred = []
+ for data, label in test_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ logits = model(data)
+ preds = logits.max(dim=1)[1]
+ test_true.append(label.cpu().numpy())
+ test_pred.append(preds.detach().cpu().numpy())
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)
+ outstr = 'Test :: test acc: %.6f, test avg acc: %.6f' % (test_acc, avg_per_class_acc)
+ io.cprint(outstr)
+
+
+if __name__ == "__main__":
+ # Training settings
+ parser = argparse.ArgumentParser(description='3D Object Classification')
+ parser.add_argument('--exp_name', type=str, default='GDANet', metavar='N',
+ help='Name of the experiment')
+ parser.add_argument('--batch_size', type=int, default=64, metavar='batch_size',
+ help='Size of training batch')
+ parser.add_argument('--test_batch_size', type=int, default=32, metavar='batch_size',
+ help='Size of test batch')
+ parser.add_argument('--epochs', type=int, default=350, metavar='N',
+ help='number of epochs to train')
+ parser.add_argument('--use_sgd', type=bool, default=True,
+ help='Use SGD')
+ parser.add_argument('--lr', type=float, default=0.001, metavar='LR',
+ help='learning rate (default: 0.001, 0.1 if using sgd)')
+ parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
+ help='SGD momentum (default: 0.9)')
+ parser.add_argument('--no_cuda', type=bool, default=False,
+ help='disables CUDA training')
+ parser.add_argument('--seed', type=int, default=1, metavar='S',
+ help='random seed (default: 1)')
+ parser.add_argument('--eval', type=bool, default=False,
+ help='evaluate the model')
+ parser.add_argument('--eval_corrupt', type=bool, default=False,
+ help='evaluate the model under corruption')
+ parser.add_argument('--num_points', type=int, default=1024,
+ help='num of points to use')
+ parser.add_argument('--model_path', type=str, default='', metavar='N',
+ help='Pretrained model path')
+
+ # added arguments
+ parser.add_argument('--rdscale', action='store_true', help='random scaling data augmentation')
+ parser.add_argument('--shift', action='store_true', help='random shift data augmentation')
+ parser.add_argument('--shuffle', action='store_true', help='random shuffle data augmentation')
+ parser.add_argument('--rot', action='store_true', help='random rotation augmentation')
+ parser.add_argument('--jitter', action='store_true', help='jitter augmentation')
+ parser.add_argument('--rddrop', action='store_true', help='random point drop data augmentation')
+ parser.add_argument('--rsmix_prob', type=float, default=0.5, help='rsmix probability')
+ parser.add_argument('--beta', type=float, default=0.0, help='scalar value for beta function')
+ parser.add_argument('--nsample', type=float, default=512,
+ help='default max sample number of the erased or added points in rsmix')
+ parser.add_argument('--modelnet10', action='store_true', help='use modelnet10')
+ parser.add_argument('--normal', action='store_true', help='use normal')
+ parser.add_argument('--knn', action='store_true', help='use kNN instead of the ball-query function')
+ parser.add_argument('--data_path', type=str, default='./data/modelnet40_normal_resampled', help='dataset path')
+
+ # pointwolf
+ parser.add_argument('--pw', action='store_true', help='use PointWOLF')
+ parser.add_argument('--w_num_anchor', type=int, default=4, help='Num of anchor point')
+ parser.add_argument('--w_sample_type', type=str, default='fps',
+ help='Sampling method for anchor point, option : (fps, random)')
+ parser.add_argument('--w_sigma', type=float, default=0.5, help='Kernel bandwidth')
+
+ parser.add_argument('--w_R_range', type=float, default=10, help='Maximum rotation range of local transformation')
+ parser.add_argument('--w_S_range', type=float, default=3, help='Maximum scaling range of local transformation')
+ parser.add_argument('--w_T_range', type=float, default=0.25,
+ help='Maximum translation range of local transformation')
+
+ args = parser.parse_args()
+
+ _init_()
+
+ if not args.eval:
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_train.log' % (args.exp_name))
+ else:
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_test.log' % (args.exp_name))
+ io.cprint(str(args))
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+ torch.manual_seed(args.seed)
+ if args.cuda:
+ io.cprint(
+ 'Using GPU : ' + str(torch.cuda.current_device()) + ' from ' + str(torch.cuda.device_count()) + ' devices')
+ torch.cuda.manual_seed(args.seed)
+ else:
+ io.cprint('Using CPU')
+
+ if not args.eval and not args.eval_corrupt:
+ train(args, io)
+ elif args.eval:
+ test(args, io)
+ elif args.eval_corrupt:
+ device = torch.device("cuda" if args.cuda else "cpu")
+ model = GDANET().to(device)
+ model = nn.DataParallel(model)
+ model.load_state_dict(torch.load(args.model_path))
+ model = model.eval()
+
+ def test_corrupt(args, split, model):
+ test_loader = DataLoader(ModelNetC(split=split),
+ batch_size=args.test_batch_size, shuffle=True, drop_last=False)
+ test_true = []
+ test_pred = []
+ for data, label in test_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ logits = model(data)
+ preds = logits.max(dim=1)[1]
+ test_true.append(label.cpu().numpy())
+ test_pred.append(preds.detach().cpu().numpy())
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)
+ return {'acc': test_acc, 'avg_per_class_acc': avg_per_class_acc}
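+ # eval_corrupt_wrapper sweeps test_corrupt over the clean split and every corruption/severity split of ModelNet-C and aggregates the per-split results into the robustness metrics (e.g. mCE)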
+ eval_corrupt_wrapper(model, test_corrupt, {'args': args})
diff --git a/zoo/GDANet/main_ptseg.py b/zoo/GDANet/main_ptseg.py
new file mode 100644
index 0000000..fcc7070
--- /dev/null
+++ b/zoo/GDANet/main_ptseg.py
@@ -0,0 +1,439 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.optim as optim
+from torch.optim.lr_scheduler import CosineAnnealingLR, StepLR
+from util.data_util import PartNormalDataset, ShapeNetPart
+import torch.nn.functional as F
+import torch.nn as nn
+from model.GDANet_ptseg import GDANet
+import numpy as np
+from torch.utils.data import DataLoader
+from util.util import to_categorical, compute_overall_iou, IOStream
+from tqdm import tqdm
+from collections import defaultdict
+from torch.autograd import Variable
+import random
+
+
+ classes_str = ['aero','bag','cap','car','chair','ear','guitar','knife','lamp','lapt','moto','mug','pistol','rock','skate','table']
+
+
+def _init_():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/' + args.exp_name):
+ os.makedirs('checkpoints/' + args.exp_name)
+
+ if not args.eval: # backup the running files
+ os.system('cp main_ptseg.py checkpoints' + '/' + args.exp_name + '/' + 'main_ptseg.py.backup')
+ os.system('cp model/GDANet_ptseg.py checkpoints' + '/' + args.exp_name + '/' + 'GDANet_ptseg.py.backup')
+ os.system('cp util/GDANet_util.py checkpoints' + '/' + args.exp_name + '/' + 'GDANet_util.py.backup')
+ os.system('cp util/data_util.py checkpoints' + '/' + args.exp_name + '/' + 'data_util.py.backup')
+
+
+def weight_init(m):
+ if isinstance(m, torch.nn.Linear):
+ torch.nn.init.xavier_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv2d):
+ torch.nn.init.xavier_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv1d):
+ torch.nn.init.xavier_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm2d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm1d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+
+
+def train(args, io):
+
+ # ============= Model ===================
+ num_part = 50
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ torch.backends.cudnn.enabled = False
+
+ model = GDANet(num_part).to(device)
+ io.cprint(str(model))
+
+ model.apply(weight_init)
+ model = nn.DataParallel(model)
+ print("Let's use", torch.cuda.device_count(), "GPUs!")
+
+ '''Resume or not'''
+ if args.resume:
+ state_dict = torch.load("checkpoints/%s/best_insiou_model.pth" % args.exp_name,
+ map_location=torch.device('cpu'))['model']
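+ # checkpoints saved without DataParallel lack the 'module.' prefix; add it so the wrapped model can load the weights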
+ for k in state_dict.keys():
+ if 'module' not in k:
+ from collections import OrderedDict
+ new_state_dict = OrderedDict()
+ for k in state_dict:
+ new_state_dict['module.' + k] = state_dict[k]
+ state_dict = new_state_dict
+ break
+ model.load_state_dict(state_dict)
+
+ print("Resume training model...")
+ print(torch.load("checkpoints/%s/best_insiou_model.pth" % args.exp_name).keys())
+ else:
+ print("Training from scratch...")
+
+ # =========== Dataloader =================
+ # train_data = PartNormalDataset(npoints=2048, split='trainval', normalize=False)
+ train_data = ShapeNetPart(partition='trainval', num_points=2048, class_choice=None)
+ print("The number of training data is:%d", len(train_data))
+
+ # test_data = PartNormalDataset(npoints=2048, split='test', normalize=False)
+ test_data = ShapeNetPart(partition='test', num_points=2048, class_choice=None)
+ print("The number of test data is:%d", len(test_data))
+
+ train_loader = DataLoader(train_data, batch_size=args.batch_size, shuffle=True, num_workers=6, drop_last=True)
+
+ test_loader = DataLoader(test_data, batch_size=args.test_batch_size, shuffle=False, num_workers=6, drop_last=False)
+
+ # ============= Optimizer ================
+ if args.use_sgd:
+ print("Use SGD")
+ opt = optim.SGD(model.parameters(), lr=args.lr*100, momentum=args.momentum, weight_decay=0)
+ else:
+ print("Use Adam")
+ opt = optim.Adam(model.parameters(), lr=args.lr, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)
+
+ if args.scheduler == 'cos':
+ print("Use CosLR")
+ scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=args.lr if args.use_sgd else args.lr / 100)
+ else:
+ print("Use StepLR")
+ scheduler = StepLR(opt, step_size=args.step, gamma=0.5)
+
+ # ============= Training =================
+ best_acc = 0
+ best_class_iou = 0
+ best_instance_iou = 0
+ num_part = 50
+ num_classes = 16
+
+ for epoch in range(args.epochs):
+
+ train_epoch(train_loader, model, opt, scheduler, epoch, num_part, num_classes, io)
+
+ test_metrics, total_per_cat_iou = test_epoch(test_loader, model, epoch, num_part, num_classes, io)
+
+ # 1. when get the best accuracy, save the model:
+ if test_metrics['accuracy'] > best_acc:
+ best_acc = test_metrics['accuracy']
+ io.cprint('Max Acc:%.5f' % best_acc)
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': epoch, 'test_acc': best_acc}
+ torch.save(state, 'checkpoints/%s/best_acc_model.pth' % args.exp_name)
+
+ # 2. when get the best instance_iou, save the model:
+ if test_metrics['shape_avg_iou'] > best_instance_iou:
+ best_instance_iou = test_metrics['shape_avg_iou']
+ io.cprint('Max instance iou:%.5f' % best_instance_iou)
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': epoch, 'test_instance_iou': best_instance_iou}
+ torch.save(state, 'checkpoints/%s/best_insiou_model.pth' % args.exp_name)
+
+ # 3. when get the best class_iou, save the model:
+ # first we need to calculate the average per-class iou
+ class_iou = 0
+ for cat_idx in range(16):
+ class_iou += total_per_cat_iou[cat_idx]
+ avg_class_iou = class_iou / 16
+ if avg_class_iou > best_class_iou:
+ best_class_iou = avg_class_iou
+ # print the iou of each class:
+ for cat_idx in range(16):
+ io.cprint(classes_str[cat_idx] + ' iou: ' + str(total_per_cat_iou[cat_idx]))
+ io.cprint('Max class iou:%.5f' % best_class_iou)
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': epoch, 'test_class_iou': best_class_iou}
+ torch.save(state, 'checkpoints/%s/best_clsiou_model.pth' % args.exp_name)
+
+ # report best acc, ins_iou, cls_iou
+ io.cprint('Final Max Acc:%.5f' % best_acc)
+ io.cprint('Final Max instance iou:%.5f' % best_instance_iou)
+ io.cprint('Final Max class iou:%.5f' % best_class_iou)
+ # save last model
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': args.epochs - 1, 'test_iou': best_instance_iou}
+ torch.save(state, 'checkpoints/%s/model_ep%d.pth' % (args.exp_name, args.epochs))
+
+
+def train_epoch(train_loader, model, opt, scheduler, epoch, num_part, num_classes, io):
+ train_loss = 0.0
+ count = 0.0
+ accuracy = []
+ shape_ious = 0.0
+ metrics = defaultdict(lambda: list())
+ model.train()
+
+ for batch_id, (points, label, target) in tqdm(enumerate(train_loader), total=len(train_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.transpose(2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze(1).cuda(non_blocking=True), target.cuda(non_blocking=True)
+ # target: b,n
+ seg_pred = model(points, to_categorical(label, num_classes)) # seg_pred: b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # list of the current batch IoUs: [iou1, iou2, ..., iou_batch_size]
+ # total iou of current batch in each process:
+ batch_shapeious = seg_pred.new_tensor([np.sum(batch_shapeious)], dtype=torch.float64) # same device with seg_pred!!!
+
+ # loss
+ seg_pred = seg_pred.contiguous().view(-1, num_part) # b*n,50
+ target = target.view(-1, 1)[:, 0] # b*n
+ loss = F.nll_loss(seg_pred, target)
+
+ # loss backward
+ loss = torch.mean(loss)
+ opt.zero_grad()
+ loss.backward()
+ opt.step()
+
+ # accuracy
+ pred_choice = seg_pred.contiguous().data.max(1)[1] # b*n
+ correct = pred_choice.eq(target.contiguous().data).sum() # torch.int64: total number of correct-predict pts
+
+ # sum
+ shape_ious += batch_shapeious.item() # count the sum of ious in each iteration
+ count += batch_size # count the total number of samples in each iteration
+ train_loss += loss.item() * batch_size
+ accuracy.append(correct.item()/(batch_size * num_point)) # append the accuracy of each iteration
+
+ # Note: We do not need to calculate per_class iou during training
+
+ if args.scheduler == 'cos':
+ scheduler.step()
+ elif args.scheduler == 'step':
+ if opt.param_groups[0]['lr'] > 0.9e-5:
+ scheduler.step()
+ if opt.param_groups[0]['lr'] < 0.9e-5:
+ for param_group in opt.param_groups:
+ param_group['lr'] = 0.9e-5
+ io.cprint('Learning rate: %f' % opt.param_groups[0]['lr'])
+
+ metrics['accuracy'] = np.mean(accuracy)
+ metrics['shape_avg_iou'] = shape_ious * 1.0 / count
+
+ outstr = 'Train %d, loss: %f, train acc: %f, train ins_iou: %f' % (epoch+1, train_loss * 1.0 / count, metrics['accuracy'], metrics['shape_avg_iou'])
+ io.cprint(outstr)
+
+
+def test_epoch(test_loader, model, epoch, num_part, num_classes, io):
+ test_loss = 0.0
+ count = 0.0
+ accuracy = []
+ shape_ious = 0.0
+ final_total_per_cat_iou = np.zeros(16).astype(np.float32)
+ final_total_per_cat_seen = np.zeros(16).astype(np.int32)
+ metrics = defaultdict(lambda: list())
+ model.eval()
+
+ # label_size: b, means each sample has one corresponding class
+ for batch_id, (points, label, target) in tqdm(enumerate(test_loader), total=len(test_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.transpose(2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze(1).cuda(non_blocking=True), target.cuda(non_blocking=True)
+ seg_pred = model(points, to_categorical(label, num_classes)) # b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+ # per category iou at each batch_size:
+
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx], denotes current sample belongs to which cat
+ final_total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx] # add the iou belongs to this cat
+ final_total_per_cat_seen[cur_gt_label] += 1 # count the number of this cat is chosen
+
+ # total iou of current batch in each process:
+ batch_ious = seg_pred.new_tensor([np.sum(batch_shapeious)], dtype=torch.float64) # same device with seg_pred!!!
+
+ # prepare seg_pred and target for later calculating loss and acc:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ # Loss
+ loss = F.nll_loss(seg_pred.contiguous(), target.contiguous())
+
+ # accuracy:
+ pred_choice = seg_pred.data.max(1)[1] # b*n
+ correct = pred_choice.eq(target.data).sum() # torch.int64: total number of correct-predict pts
+
+ loss = torch.mean(loss)
+ shape_ious += batch_ious.item() # count the sum of ious in each iteration
+ count += batch_size # count the total number of samples in each iteration
+ test_loss += loss.item() * batch_size
+ accuracy.append(correct.item() / (batch_size * num_point)) # append the accuracy of each iteration
+
+ for cat_idx in range(16):
+ if final_total_per_cat_seen[cat_idx] > 0: # indicating this cat is included during previous iou appending
+ final_total_per_cat_iou[cat_idx] = final_total_per_cat_iou[cat_idx] / final_total_per_cat_seen[cat_idx] # avg class iou across all samples
+
+ metrics['accuracy'] = np.mean(accuracy)
+ metrics['shape_avg_iou'] = shape_ious * 1.0 / count
+
+ outstr = 'Test %d, loss: %f, test acc: %f, test ins_iou: %f' % (epoch + 1, test_loss * 1.0 / count,
+ metrics['accuracy'], metrics['shape_avg_iou'])
+
+ io.cprint(outstr)
+
+ return metrics, final_total_per_cat_iou
+
+
+def test(args, io):
+ # Dataloader
+ # test_data = PartNormalDataset(npoints=2048, split='test', normalize=False)
+ test_data = ShapeNetPart(partition='test', num_points=2048, class_choice=None)
+ print("The number of test data is:%d", len(test_data))
+
+ test_loader = DataLoader(test_data, batch_size=args.test_batch_size, shuffle=False, num_workers=6, drop_last=False)
+
+ # Try to load models
+ num_part = 50
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ model = GDANet(num_part).to(device)
+ io.cprint(str(model))
+
+ from collections import OrderedDict
+ state_dict = torch.load("checkpoints/%s/best_%s_model.pth" % (args.exp_name, args.model_type),
+ map_location=torch.device('cpu'))['model']
+
+ new_state_dict = OrderedDict()
+ for layer in state_dict:
+ new_state_dict[layer.replace('module.', '')] = state_dict[layer]
+ model.load_state_dict(new_state_dict)
+
+ model.eval()
+ num_part = 50
+ num_classes = 16
+ metrics = defaultdict(lambda: list())
+ hist_acc = []
+ shape_ious = []
+ total_per_cat_iou = np.zeros((16)).astype(np.float32)
+ total_per_cat_seen = np.zeros((16)).astype(np.int32)
+
+ for batch_id, (points, label, target) in tqdm(enumerate(test_loader), total=len(test_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.transpose(2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze().cuda(non_blocking=True), target.cuda(non_blocking=True)
+
+ with torch.no_grad():
+ seg_pred = model(points, to_categorical(label, num_classes)) # b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+ shape_ious += batch_shapeious # list concatenation: extends shape_ious with this batch's per-sample IoUs
+
+ # per category iou at each batch_size:
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx]
+ total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx]
+ total_per_cat_seen[cur_gt_label] += 1
+
+ # accuracy:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ pred_choice = seg_pred.data.max(1)[1]
+ correct = pred_choice.eq(target.data).cpu().sum()
+ metrics['accuracy'].append(correct.item() / (batch_size * num_point))
+
+ hist_acc += metrics['accuracy']
+ metrics['accuracy'] = np.mean(hist_acc)
+ metrics['shape_avg_iou'] = np.mean(shape_ious)
+ for cat_idx in range(16):
+ if total_per_cat_seen[cat_idx] > 0:
+ total_per_cat_iou[cat_idx] = total_per_cat_iou[cat_idx] / total_per_cat_seen[cat_idx]
+
+ # First we need to calculate the iou of each class and the avg class iou:
+ class_iou = 0
+ for cat_idx in range(16):
+ class_iou += total_per_cat_iou[cat_idx]
+ io.cprint(classes_str[cat_idx] + ' iou: ' + str(total_per_cat_iou[cat_idx])) # print the iou of each class
+ avg_class_iou = class_iou / 16
+ outstr = 'Test :: test acc: %f, test class mIOU: %f, test instance mIOU: %f' % (metrics['accuracy'], avg_class_iou, metrics['shape_avg_iou'])
+ io.cprint(outstr)
+
+
+if __name__ == "__main__":
+ # Training settings
+ parser = argparse.ArgumentParser(description='3D Shape Part Segmentation')
+ parser.add_argument('--exp_name', type=str, default='GDANet', metavar='N',
+ help='Name of the experiment')
+ parser.add_argument('--batch_size', type=int, default=64, metavar='batch_size',
+ help='Size of training batch')
+ parser.add_argument('--test_batch_size', type=int, default=32, metavar='batch_size',
+ help='Size of test batch')
+ parser.add_argument('--epochs', type=int, default=350, metavar='N',
+ help='number of epochs to train')
+ parser.add_argument('--use_sgd', type=bool, default=False,
+ help='Use SGD')
+ parser.add_argument('--scheduler', type=str, default='step',
+ help='lr scheduler')
+ parser.add_argument('--step', type=int, default=40,
+ help='lr decay step')
+ parser.add_argument('--lr', type=float, default=0.003, metavar='LR',
+ help='learning rate')
+ parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
+ help='SGD momentum (default: 0.9)')
+ parser.add_argument('--no_cuda', type=bool, default=False,
+ help='disables CUDA training')
+ parser.add_argument('--manual_seed', type=int, default=1, metavar='S',
+ help='random seed (default: 1)')
+ parser.add_argument('--eval', type=bool, default=False,
+ help='evaluate the model')
+ parser.add_argument('--num_points', type=int, default=1024,
+ help='num of points to use')
+ parser.add_argument('--resume', type=bool, default=False,
+ help='Resume training or not')
+ parser.add_argument('--model_type', type=str, default='insiou',
+ help='choose to test the best insiou/clsiou/acc model (options: insiou, clsiou, acc)')
+
+ args = parser.parse_args()
+
+ _init_()
+
+ if not args.eval:
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_train.log' % (args.exp_name))
+ else:
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_test.log' % (args.exp_name))
+ io.cprint(str(args))
+
+ if args.manual_seed is not None:
+ random.seed(args.manual_seed)
+ np.random.seed(args.manual_seed)
+ torch.manual_seed(args.manual_seed)
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+
+ if args.cuda:
+ io.cprint('Using GPU')
+ if args.manual_seed is not None:
+ torch.cuda.manual_seed(args.manual_seed)
+ torch.cuda.manual_seed_all(args.manual_seed)
+ else:
+ io.cprint('Using CPU')
+
+ if not args.eval:
+ train(args, io)
+ else:
+ test(args, io)
+
diff --git a/zoo/GDANet/model/GDANet_cls.py b/zoo/GDANet/model/GDANet_cls.py
new file mode 100644
index 0000000..9e70725
--- /dev/null
+++ b/zoo/GDANet/model/GDANet_cls.py
@@ -0,0 +1,113 @@
+import torch.nn as nn
+import torch
+import torch.nn.functional as F
+from util.GDANet_util import local_operator, GDM, SGCAM
+
+
+class GDANET(nn.Module):
+ def __init__(self):
+ super(GDANET, self).__init__()
+
+ self.bn1 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn11 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn12 = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bn2 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn21 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn22 = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bn3 = nn.BatchNorm2d(128, momentum=0.1)
+ self.bn31 = nn.BatchNorm2d(128, momentum=0.1)
+ self.bn32 = nn.BatchNorm1d(128, momentum=0.1)
+
+ self.bn4 = nn.BatchNorm1d(512, momentum=0.1)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=True),
+ self.bn1)
+ self.conv11 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=True),
+ self.bn11)
+ self.conv12 = nn.Sequential(nn.Conv1d(64 * 2, 64, kernel_size=1, bias=True),
+ self.bn12)
+
+ self.conv2 = nn.Sequential(nn.Conv2d(67 * 2, 64, kernel_size=1, bias=True),
+ self.bn2)
+ self.conv21 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=True),
+ self.bn21)
+ self.conv22 = nn.Sequential(nn.Conv1d(64 * 2, 64, kernel_size=1, bias=True),
+ self.bn22)
+
+ self.conv3 = nn.Sequential(nn.Conv2d(131 * 2, 128, kernel_size=1, bias=True),
+ self.bn3)
+ self.conv31 = nn.Sequential(nn.Conv2d(128, 128, kernel_size=1, bias=True),
+ self.bn31)
+ self.conv32 = nn.Sequential(nn.Conv1d(128, 128, kernel_size=1, bias=True),
+ self.bn32)
+
+ self.conv4 = nn.Sequential(nn.Conv1d(256, 512, kernel_size=1, bias=True),
+ self.bn4)
+
+ self.SGCAM_1s = SGCAM(64)
+ self.SGCAM_1g = SGCAM(64)
+ self.SGCAM_2s = SGCAM(64)
+ self.SGCAM_2g = SGCAM(64)
+
+ self.linear1 = nn.Linear(1024, 512, bias=True)
+ self.bn6 = nn.BatchNorm1d(512)
+ self.dp1 = nn.Dropout(p=0.4)
+ self.linear2 = nn.Linear(512, 256, bias=True)
+ self.bn7 = nn.BatchNorm1d(256)
+ self.dp2 = nn.Dropout(p=0.4)
+ self.linear3 = nn.Linear(256, 40, bias=True)
+
+ def forward(self, x):
+ B, C, N = x.size()
+ ###############
+ """block 1"""
+ # Local operator:
+ x1 = local_operator(x, k=30)
+ x1 = F.relu(self.conv1(x1))
+ x1 = F.relu(self.conv11(x1))
+ x1 = x1.max(dim=-1, keepdim=False)[0]
+
+ # Geometry-Disentangle Module:
+ x1s, x1g = GDM(x1, M=256)
+
+ # Sharp-Gentle Complementary Attention Module:
+ y1s = self.SGCAM_1s(x1, x1s.transpose(2, 1))
+ y1g = self.SGCAM_1g(x1, x1g.transpose(2, 1))
+ z1 = torch.cat([y1s, y1g], 1)
+ z1 = F.relu(self.conv12(z1))
+ ###############
+ """block 2"""
+ x1t = torch.cat((x, z1), dim=1)
+ x2 = local_operator(x1t, k=30)
+ x2 = F.relu(self.conv2(x2))
+ x2 = F.relu(self.conv21(x2))
+ x2 = x2.max(dim=-1, keepdim=False)[0]
+
+ x2s, x2g = GDM(x2, M=256)
+
+ y2s = self.SGCAM_2s(x2, x2s.transpose(2, 1))
+ y2g = self.SGCAM_2g(x2, x2g.transpose(2, 1))
+ z2 = torch.cat([y2s, y2g], 1)
+ z2 = F.relu(self.conv22(z2))
+ ###############
+ x2t = torch.cat((x1t, z2), dim=1)
+ x3 = local_operator(x2t, k=30)
+ x3 = F.relu(self.conv3(x3))
+ x3 = F.relu(self.conv31(x3))
+ x3 = x3.max(dim=-1, keepdim=False)[0]
+ z3 = F.relu(self.conv32(x3))
+ ###############
+ x = torch.cat((z1, z2, z3), dim=1)
+ x = F.relu(self.conv4(x))
+ x11 = F.adaptive_max_pool1d(x, 1).view(B, -1)
+ x22 = F.adaptive_avg_pool1d(x, 1).view(B, -1)
+ x = torch.cat((x11, x22), 1)
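+ # concatenate global max- and average-pooled descriptors (512 each) into the 1024-d input of the classifier head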
+
+ x = F.relu(self.bn6(self.linear1(x)))
+ x = self.dp1(x)
+ x = F.relu(self.bn7(self.linear2(x)))
+ x = self.dp2(x)
+ x = self.linear3(x)
+ return x
diff --git a/zoo/GDANet/model/GDANet_ptseg.py b/zoo/GDANet/model/GDANet_ptseg.py
new file mode 100644
index 0000000..e1bfe8c
--- /dev/null
+++ b/zoo/GDANet/model/GDANet_ptseg.py
@@ -0,0 +1,127 @@
+import torch.nn as nn
+import torch
+import torch.nn.functional as F
+from util.GDANet_util import local_operator_withnorm, local_operator, GDM, SGCAM
+
+
+class GDANet(nn.Module):
+ def __init__(self, num_classes):
+ super(GDANet, self).__init__()
+
+ self.bn1 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn11 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn12 = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bn2 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn21 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn22 = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bn3 = nn.BatchNorm2d(128, momentum=0.1)
+ self.bn31 = nn.BatchNorm2d(128, momentum=0.1)
+ self.bn32 = nn.BatchNorm1d(128, momentum=0.1)
+
+ self.bn4 = nn.BatchNorm1d(512, momentum=0.1)
+ self.bnc = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bn5 = nn.BatchNorm1d(256, momentum=0.1)
+ self.bn6 = nn.BatchNorm1d(256, momentum=0.1)
+ self.bn7 = nn.BatchNorm1d(128, momentum=0.1)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=True),
+ self.bn1)
+ self.conv11 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=True),
+ self.bn11)
+ self.conv12 = nn.Sequential(nn.Conv1d(64*2, 64, kernel_size=1, bias=True),
+ self.bn12)
+
+ self.conv2 = nn.Sequential(nn.Conv2d(67 * 2, 64, kernel_size=1, bias=True),
+ self.bn2)
+ self.conv21 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=True),
+ self.bn21)
+ self.conv22 = nn.Sequential(nn.Conv1d(64*2, 64, kernel_size=1, bias=True),
+ self.bn22)
+
+ self.conv3 = nn.Sequential(nn.Conv2d(131 * 2, 128, kernel_size=1, bias=True),
+ self.bn3)
+ self.conv31 = nn.Sequential(nn.Conv2d(128, 128, kernel_size=1, bias=True),
+ self.bn31)
+ self.conv32 = nn.Sequential(nn.Conv1d(128, 128, kernel_size=1, bias=True),
+ self.bn32)
+
+ self.conv4 = nn.Sequential(nn.Conv1d(256, 512, kernel_size=1, bias=True),
+ self.bn4)
+ self.convc = nn.Sequential(nn.Conv1d(16, 64, kernel_size=1, bias=True),
+ self.bnc)
+
+ self.conv5 = nn.Sequential(nn.Conv1d(256 + 512 + 64, 256, kernel_size=1, bias=True),
+ self.bn5)
+ self.dp1 = nn.Dropout(0.4)
+ self.conv6 = nn.Sequential(nn.Conv1d(256, 256, kernel_size=1, bias=True),
+ self.bn6)
+ self.dp2 = nn.Dropout(0.4)
+ self.conv7 = nn.Sequential(nn.Conv1d(256, 128, kernel_size=1, bias=True),
+ self.bn7)
+ self.conv8 = nn.Conv1d(128, num_classes, kernel_size=1, bias=True)
+
+ self.SGCAM_1s = SGCAM(64)
+ self.SGCAM_1g = SGCAM(64)
+ self.SGCAM_2s = SGCAM(64)
+ self.SGCAM_2g = SGCAM(64)
+
+ def forward(self, x, cls_label):
+ B, C, N = x.size()
+ ###############
+ """block 1"""
+ x1 = local_operator(x, k=30)
+ x1 = F.relu(self.conv1(x1))
+ x1 = F.relu(self.conv11(x1))
+ x1 = x1.max(dim=-1, keepdim=False)[0]
+ x1h, x1l = GDM(x1, M=512)
+
+ x1h = self.SGCAM_1s(x1, x1h.transpose(2, 1))
+ x1l = self.SGCAM_1g(x1, x1l.transpose(2, 1))
+ x1 = torch.cat([x1h, x1l], 1)
+ x1 = F.relu(self.conv12(x1))
+ ###############
+ """block 1"""
+ x1t = torch.cat((x, x1), dim=1)
+ x2 = local_operator(x1t, k=30)
+ x2 = F.relu(self.conv2(x2))
+ x2 = F.relu(self.conv21(x2))
+ x2 = x2.max(dim=-1, keepdim=False)[0]
+ x2h, x2l = GDM(x2, M=512)
+
+ x2h = self.SGCAM_2s(x2, x2h.transpose(2, 1))
+ x2l = self.SGCAM_2g(x2, x2l.transpose(2, 1))
+ x2 = torch.cat([x2h, x2l], 1)
+ x2 = F.relu(self.conv22(x2))
+ ###############
+ x2t = torch.cat((x1t, x2), dim=1)
+ x3 = local_operator(x2t, k=30)
+ x3 = F.relu(self.conv3(x3))
+ x3 = F.relu(self.conv31(x3))
+ x3 = x3.max(dim=-1, keepdim=False)[0]
+ x3 = F.relu(self.conv32(x3))
+ ###############
+ xx = torch.cat((x1, x2, x3), dim=1)
+
+ xc = F.relu(self.conv4(xx))
+ xc = F.adaptive_max_pool1d(xc, 1).view(B, -1)
+
+ cls_label = cls_label.view(B, 16, 1)
+ cls_label = F.relu(self.convc(cls_label))
+ cls = torch.cat((xc.view(B, 512, 1), cls_label), dim=1)
+ cls = cls.repeat(1, 1, N)
+
+ x = torch.cat((xx, cls), dim=1)
+ x = F.relu(self.conv5(x))
+ x = self.dp1(x)
+ x = F.relu(self.conv6(x))
+ x = self.dp2(x)
+ x = F.relu(self.conv7(x))
+ x = self.conv8(x)
+ x = F.log_softmax(x, dim=1)
+ x = x.permute(0, 2, 1) # b,n,50
+
+ return x
+
diff --git a/zoo/GDANet/model/__init__.py b/zoo/GDANet/model/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/zoo/GDANet/part_seg/main_ptseg.py b/zoo/GDANet/part_seg/main_ptseg.py
new file mode 100644
index 0000000..fcc7070
--- /dev/null
+++ b/zoo/GDANet/part_seg/main_ptseg.py
@@ -0,0 +1,439 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.optim as optim
+from torch.optim.lr_scheduler import CosineAnnealingLR, StepLR
+from util.data_util import PartNormalDataset, ShapeNetPart
+import torch.nn.functional as F
+import torch.nn as nn
+from model.GDANet_ptseg import GDANet
+import numpy as np
+from torch.utils.data import DataLoader
+from util.util import to_categorical, compute_overall_iou, IOStream
+from tqdm import tqdm
+from collections import defaultdict
+from torch.autograd import Variable
+import random
+
+
+ classes_str = ['aero','bag','cap','car','chair','ear','guitar','knife','lamp','lapt','moto','mug','pistol','rock','skate','table']
+
+
+def _init_():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/' + args.exp_name):
+ os.makedirs('checkpoints/' + args.exp_name)
+
+ if not args.eval: # backup the running files
+ os.system('cp main_ptseg.py checkpoints' + '/' + args.exp_name + '/' + 'main_ptseg.py.backup')
+ os.system('cp model/GDANet_ptseg.py checkpoints' + '/' + args.exp_name + '/' + 'GDANet_ptseg.py.backup')
+ os.system('cp util/GDANet_util.py checkpoints' + '/' + args.exp_name + '/' + 'GDANet_util.py.backup')
+ os.system('cp util/data_util.py checkpoints' + '/' + args.exp_name + '/' + 'data_util.py.backup')
+
+
+def weight_init(m):
+ if isinstance(m, torch.nn.Linear):
+ torch.nn.init.xavier_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv2d):
+ torch.nn.init.xavier_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv1d):
+ torch.nn.init.xavier_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm2d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm1d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+
+
+def train(args, io):
+
+ # ============= Model ===================
+ num_part = 50
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ torch.backends.cudnn.enabled = False
+
+ model = GDANet(num_part).to(device)
+ io.cprint(str(model))
+
+ model.apply(weight_init)
+ model = nn.DataParallel(model)
+ print("Let's use", torch.cuda.device_count(), "GPUs!")
+
+ '''Resume or not'''
+ if args.resume:
+ state_dict = torch.load("checkpoints/%s/best_insiou_model.pth" % args.exp_name,
+ map_location=torch.device('cpu'))['model']
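+ # checkpoints saved without DataParallel lack the 'module.' prefix; add it so the wrapped model can load the weights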
+ for k in state_dict.keys():
+ if 'module' not in k:
+ from collections import OrderedDict
+ new_state_dict = OrderedDict()
+ for k in state_dict:
+ new_state_dict['module.' + k] = state_dict[k]
+ state_dict = new_state_dict
+ break
+ model.load_state_dict(state_dict)
+
+ print("Resume training model...")
+ print(torch.load("checkpoints/%s/best_insiou_model.pth" % args.exp_name).keys())
+ else:
+ print("Training from scratch...")
+
+ # =========== Dataloader =================
+ # train_data = PartNormalDataset(npoints=2048, split='trainval', normalize=False)
+ train_data = ShapeNetPart(partition='trainval', num_points=2048, class_choice=None)
+ print("The number of training data is:%d", len(train_data))
+
+ # test_data = PartNormalDataset(npoints=2048, split='test', normalize=False)
+ test_data = ShapeNetPart(partition='test', num_points=2048, class_choice=None)
+ print("The number of test data is:%d", len(test_data))
+
+ train_loader = DataLoader(train_data, batch_size=args.batch_size, shuffle=True, num_workers=6, drop_last=True)
+
+ test_loader = DataLoader(test_data, batch_size=args.test_batch_size, shuffle=False, num_workers=6, drop_last=False)
+
+ # ============= Optimizer ================
+ if args.use_sgd:
+ print("Use SGD")
+ opt = optim.SGD(model.parameters(), lr=args.lr*100, momentum=args.momentum, weight_decay=0)
+ else:
+ print("Use Adam")
+ opt = optim.Adam(model.parameters(), lr=args.lr, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)
+
+ if args.scheduler == 'cos':
+ print("Use CosLR")
+ scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=args.lr if args.use_sgd else args.lr / 100)
+ else:
+ print("Use StepLR")
+ scheduler = StepLR(opt, step_size=args.step, gamma=0.5)
+
+ # ============= Training =================
+ best_acc = 0
+ best_class_iou = 0
+ best_instance_iou = 0
+ num_part = 50
+ num_classes = 16
+
+ for epoch in range(args.epochs):
+
+ train_epoch(train_loader, model, opt, scheduler, epoch, num_part, num_classes, io)
+
+ test_metrics, total_per_cat_iou = test_epoch(test_loader, model, epoch, num_part, num_classes, io)
+
+ # 1. when get the best accuracy, save the model:
+ if test_metrics['accuracy'] > best_acc:
+ best_acc = test_metrics['accuracy']
+ io.cprint('Max Acc:%.5f' % best_acc)
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': epoch, 'test_acc': best_acc}
+ torch.save(state, 'checkpoints/%s/best_acc_model.pth' % args.exp_name)
+
+ # 2. when get the best instance_iou, save the model:
+ if test_metrics['shape_avg_iou'] > best_instance_iou:
+ best_instance_iou = test_metrics['shape_avg_iou']
+ io.cprint('Max instance iou:%.5f' % best_instance_iou)
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': epoch, 'test_instance_iou': best_instance_iou}
+ torch.save(state, 'checkpoints/%s/best_insiou_model.pth' % args.exp_name)
+
+ # 3. when get the best class_iou, save the model:
+ # first we need to calculate the average per-class iou
+ class_iou = 0
+ for cat_idx in range(16):
+ class_iou += total_per_cat_iou[cat_idx]
+ avg_class_iou = class_iou / 16
+ if avg_class_iou > best_class_iou:
+ best_class_iou = avg_class_iou
+ # print the iou of each class:
+ for cat_idx in range(16):
+ io.cprint(classes_str[cat_idx] + ' iou: ' + str(total_per_cat_iou[cat_idx]))
+ io.cprint('Max class iou:%.5f' % best_class_iou)
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': epoch, 'test_class_iou': best_class_iou}
+ torch.save(state, 'checkpoints/%s/best_clsiou_model.pth' % args.exp_name)
+
+ # report best acc, ins_iou, cls_iou
+ io.cprint('Final Max Acc:%.5f' % best_acc)
+ io.cprint('Final Max instance iou:%.5f' % best_instance_iou)
+ io.cprint('Final Max class iou:%.5f' % best_class_iou)
+ # save last model
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': args.epochs - 1, 'test_iou': best_instance_iou}
+ torch.save(state, 'checkpoints/%s/model_ep%d.pth' % (args.exp_name, args.epochs))
+
+
+def train_epoch(train_loader, model, opt, scheduler, epoch, num_part, num_classes, io):
+ train_loss = 0.0
+ count = 0.0
+ accuracy = []
+ shape_ious = 0.0
+ metrics = defaultdict(lambda: list())
+ model.train()
+
+ for batch_id, (points, label, target) in tqdm(enumerate(train_loader), total=len(train_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.transpose(2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze(1).cuda(non_blocking=True), target.cuda(non_blocking=True)
+ # target: b,n
+ seg_pred = model(points, to_categorical(label, num_classes)) # seg_pred: b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # list of the current batch IoUs: [iou1, iou2, ..., iou_batch_size]
+ # total iou of current batch in each process:
+ batch_shapeious = seg_pred.new_tensor([np.sum(batch_shapeious)], dtype=torch.float64) # same device with seg_pred!!!
+
+ # loss
+ seg_pred = seg_pred.contiguous().view(-1, num_part) # b*n,50
+ target = target.view(-1, 1)[:, 0] # b*n
+ loss = F.nll_loss(seg_pred, target)
+
+ # loss backward
+ loss = torch.mean(loss)
+ opt.zero_grad()
+ loss.backward()
+ opt.step()
+
+ # accuracy
+ pred_choice = seg_pred.contiguous().data.max(1)[1] # b*n
+ correct = pred_choice.eq(target.contiguous().data).sum() # torch.int64: total number of correct-predict pts
+
+ # sum
+ shape_ious += batch_shapeious.item() # count the sum of ious in each iteration
+ count += batch_size # count the total number of samples in each iteration
+ train_loss += loss.item() * batch_size
+ accuracy.append(correct.item()/(batch_size * num_point)) # append the accuracy of each iteration
+
+ # Note: We do not need to calculate per_class iou during training
+
+ if args.scheduler == 'cos':
+ scheduler.step()
+ elif args.scheduler == 'step':
+ if opt.param_groups[0]['lr'] > 0.9e-5:
+ scheduler.step()
+ if opt.param_groups[0]['lr'] < 0.9e-5:
+ for param_group in opt.param_groups:
+ param_group['lr'] = 0.9e-5
+ io.cprint('Learning rate: %f' % opt.param_groups[0]['lr'])
+
+ metrics['accuracy'] = np.mean(accuracy)
+ metrics['shape_avg_iou'] = shape_ious * 1.0 / count
+
+ outstr = 'Train %d, loss: %f, train acc: %f, train ins_iou: %f' % (epoch+1, train_loss * 1.0 / count, metrics['accuracy'], metrics['shape_avg_iou'])
+ io.cprint(outstr)
+
+
+def test_epoch(test_loader, model, epoch, num_part, num_classes, io):
+ test_loss = 0.0
+ count = 0.0
+ accuracy = []
+ shape_ious = 0.0
+ final_total_per_cat_iou = np.zeros(16).astype(np.float32)
+ final_total_per_cat_seen = np.zeros(16).astype(np.int32)
+ metrics = defaultdict(lambda: list())
+ model.eval()
+
+    # label has shape (b,): each sample carries a single object-category label
+ for batch_id, (points, label, target) in tqdm(enumerate(test_loader), total=len(test_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.transpose(2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze(1).cuda(non_blocking=True), target.cuda(non_blocking=True)
+ seg_pred = model(points, to_categorical(label, num_classes)) # b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+ # per category iou at each batch_size:
+
+        for shape_idx in range(seg_pred.size(0)):  # sample_idx
+            cur_gt_label = label[shape_idx]  # category index of the current sample
+            final_total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx]  # accumulate the iou of this category
+            final_total_per_cat_seen[cur_gt_label] += 1  # count how many samples of this category are seen
+
+ # total iou of current batch in each process:
+        batch_ious = seg_pred.new_tensor([np.sum(batch_shapeious)], dtype=torch.float64)  # keep on the same device as seg_pred
+
+ # prepare seg_pred and target for later calculating loss and acc:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ # Loss
+ loss = F.nll_loss(seg_pred.contiguous(), target.contiguous())
+
+ # accuracy:
+        pred_choice = seg_pred.data.max(1)[1]  # b*n
+        correct = pred_choice.eq(target.data).sum()  # torch.int64: total number of correctly predicted points
+
+ loss = torch.mean(loss)
+ shape_ious += batch_ious.item() # count the sum of ious in each iteration
+ count += batch_size # count the total number of samples in each iteration
+ test_loss += loss.item() * batch_size
+ accuracy.append(correct.item() / (batch_size * num_point)) # append the accuracy of each iteration
+
+ for cat_idx in range(16):
+        if final_total_per_cat_seen[cat_idx] > 0:  # this category appeared in the test set
+            final_total_per_cat_iou[cat_idx] = final_total_per_cat_iou[cat_idx] / final_total_per_cat_seen[cat_idx]  # average iou over all samples of this category
+
+ metrics['accuracy'] = np.mean(accuracy)
+ metrics['shape_avg_iou'] = shape_ious * 1.0 / count
+
+    outstr = 'Test %d, loss: %f, test acc: %f, test ins_iou: %f' % (epoch + 1, test_loss * 1.0 / count,
+                                                                    metrics['accuracy'], metrics['shape_avg_iou'])
+
+ io.cprint(outstr)
+
+ return metrics, final_total_per_cat_iou
+
+
+def test(args, io):
+ # Dataloader
+ # test_data = PartNormalDataset(npoints=2048, split='test', normalize=False)
+ test_data = ShapeNetPart(partition='test', num_points=2048, class_choice=None)
+ print("The number of test data is:%d", len(test_data))
+
+ test_loader = DataLoader(test_data, batch_size=args.test_batch_size, shuffle=False, num_workers=6, drop_last=False)
+
+ # Try to load models
+ num_part = 50
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ model = GDANet(num_part).to(device)
+ io.cprint(str(model))
+
+ from collections import OrderedDict
+ state_dict = torch.load("checkpoints/%s/best_%s_model.pth" % (args.exp_name, args.model_type),
+ map_location=torch.device('cpu'))['model']
+
+ new_state_dict = OrderedDict()
+ for layer in state_dict:
+ new_state_dict[layer.replace('module.', '')] = state_dict[layer]
+ model.load_state_dict(new_state_dict)
+
+ model.eval()
+ num_part = 50
+ num_classes = 16
+ metrics = defaultdict(lambda: list())
+ hist_acc = []
+ shape_ious = []
+ total_per_cat_iou = np.zeros((16)).astype(np.float32)
+ total_per_cat_seen = np.zeros((16)).astype(np.int32)
+
+ for batch_id, (points, label, target) in tqdm(enumerate(test_loader), total=len(test_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.transpose(2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze().cuda(non_blocking=True), target.cuda(non_blocking=True)
+
+ with torch.no_grad():
+ seg_pred = model(points, to_categorical(label, num_classes)) # b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+            shape_ious += batch_shapeious  # list +=, equivalent to .extend
+
+ # per category iou at each batch_size:
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx]
+ total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx]
+ total_per_cat_seen[cur_gt_label] += 1
+
+ # accuracy:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ pred_choice = seg_pred.data.max(1)[1]
+ correct = pred_choice.eq(target.data).cpu().sum()
+ metrics['accuracy'].append(correct.item() / (batch_size * num_point))
+
+ hist_acc += metrics['accuracy']
+ metrics['accuracy'] = np.mean(hist_acc)
+ metrics['shape_avg_iou'] = np.mean(shape_ious)
+ for cat_idx in range(16):
+ if total_per_cat_seen[cat_idx] > 0:
+ total_per_cat_iou[cat_idx] = total_per_cat_iou[cat_idx] / total_per_cat_seen[cat_idx]
+
+ # First we need to calculate the iou of each class and the avg class iou:
+ class_iou = 0
+ for cat_idx in range(16):
+ class_iou += total_per_cat_iou[cat_idx]
+ io.cprint(classes_str[cat_idx] + ' iou: ' + str(total_per_cat_iou[cat_idx])) # print the iou of each class
+ avg_class_iou = class_iou / 16
+    outstr = 'Test :: test acc: %f, test class mIOU: %f, test instance mIOU: %f' % (metrics['accuracy'], avg_class_iou, metrics['shape_avg_iou'])
+ io.cprint(outstr)
+
+
+if __name__ == "__main__":
+ # Training settings
+ parser = argparse.ArgumentParser(description='3D Shape Part Segmentation')
+ parser.add_argument('--exp_name', type=str, default='GDANet', metavar='N',
+ help='Name of the experiment')
+    parser.add_argument('--batch_size', type=int, default=64, metavar='batch_size',
+                        help='Size of batch')
+    parser.add_argument('--test_batch_size', type=int, default=32, metavar='batch_size',
+                        help='Size of batch')
+    parser.add_argument('--epochs', type=int, default=350, metavar='N',
+                        help='number of epochs to train')
+    parser.add_argument('--use_sgd', type=bool, default=False,
+                        help='Use SGD (note: type=bool treats any non-empty string, even "False", as True)')
+ parser.add_argument('--scheduler', type=str, default='step',
+ help='lr scheduler')
+ parser.add_argument('--step', type=int, default=40,
+ help='lr decay step')
+ parser.add_argument('--lr', type=float, default=0.003, metavar='LR',
+ help='learning rate')
+ parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
+ help='SGD momentum (default: 0.9)')
+    parser.add_argument('--no_cuda', type=bool, default=False,
+                        help='disables CUDA training')
+ parser.add_argument('--manual_seed', type=int, default=1, metavar='S',
+ help='random seed (default: 1)')
+ parser.add_argument('--eval', type=bool, default=False,
+ help='evaluate the model')
+ parser.add_argument('--num_points', type=int, default=1024,
+ help='num of points to use')
+ parser.add_argument('--resume', type=bool, default=False,
+ help='Resume training or not')
+ parser.add_argument('--model_type', type=str, default='insiou',
+ help='choose to test the best insiou/clsiou/acc model (options: insiou, clsiou, acc)')
+
+ args = parser.parse_args()
+
+ _init_()
+
+ if not args.eval:
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_train.log' % (args.exp_name))
+ else:
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_test.log' % (args.exp_name))
+ io.cprint(str(args))
+
+ if args.manual_seed is not None:
+ random.seed(args.manual_seed)
+ np.random.seed(args.manual_seed)
+ torch.manual_seed(args.manual_seed)
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+
+ if args.cuda:
+ io.cprint('Using GPU')
+ if args.manual_seed is not None:
+ torch.cuda.manual_seed(args.manual_seed)
+ torch.cuda.manual_seed_all(args.manual_seed)
+ else:
+ io.cprint('Using CPU')
+
+ if not args.eval:
+ train(args, io)
+ else:
+ test(args, io)
+
diff --git a/zoo/GDANet/part_seg/test.py b/zoo/GDANet/part_seg/test.py
new file mode 100644
index 0000000..d9ea335
--- /dev/null
+++ b/zoo/GDANet/part_seg/test.py
@@ -0,0 +1,258 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.optim as optim
+from torch.optim.lr_scheduler import CosineAnnealingLR, StepLR
+from util.data_util import PartNormalDataset, ShapeNetC
+import torch.nn.functional as F
+import torch.nn as nn
+from model.GDANet_ptseg import GDANet
+import numpy as np
+from torch.utils.data import DataLoader
+from util.util import to_categorical, compute_overall_iou, IOStream
+from tqdm import tqdm
+from collections import defaultdict
+from torch.autograd import Variable
+import random
+
+
+classes_str = ['aero','bag','cap','car','chair','ear','guitar','knife','lamp','lapt','moto','mug','pistol','rock','skate','table']
+
+
+def _init_():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/' + args.exp_name):
+ os.makedirs('checkpoints/' + args.exp_name)
+
+ if not args.eval: # backup the running files
+ os.system('cp main_cls.py checkpoints' + '/' + args.exp_name + '/' + 'main.py.backup')
+ os.system('cp model/GDANet_ptseg.py checkpoints' + '/' + args.exp_name + '/' + 'GDANet_ptseg.py.backup')
+        os.system('cp util/GDANet_util.py checkpoints' + '/' + args.exp_name + '/' + 'GDANet_util.py.backup')
+        os.system('cp util/data_util.py checkpoints' + '/' + args.exp_name + '/' + 'data_util.py.backup')
+
+
+def weight_init(m):
+ if isinstance(m, torch.nn.Linear):
+ torch.nn.init.xavier_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv2d):
+ torch.nn.init.xavier_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv1d):
+ torch.nn.init.xavier_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm2d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm1d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+
+
+def test_epoch(test_loader, model, epoch, num_part, num_classes, io):
+ test_loss = 0.0
+ count = 0.0
+ accuracy = []
+ shape_ious = 0.0
+ final_total_per_cat_iou = np.zeros(16).astype(np.float32)
+ final_total_per_cat_seen = np.zeros(16).astype(np.int32)
+ metrics = defaultdict(lambda: list())
+ model.eval()
+
+    # label has shape (b,): each sample carries a single object-category label
+ for batch_id, (points, label, target) in tqdm(enumerate(test_loader), total=len(test_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.transpose(2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze(1).cuda(non_blocking=True), target.cuda(non_blocking=True)
+ seg_pred = model(points, to_categorical(label, num_classes)) # b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+ # per category iou at each batch_size:
+
+        for shape_idx in range(seg_pred.size(0)):  # sample_idx
+            cur_gt_label = label[shape_idx]  # category index of the current sample
+            final_total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx]  # accumulate the iou of this category
+            final_total_per_cat_seen[cur_gt_label] += 1  # count how many samples of this category are seen
+
+ # total iou of current batch in each process:
+        batch_ious = seg_pred.new_tensor([np.sum(batch_shapeious)], dtype=torch.float64)  # keep on the same device as seg_pred
+
+ # prepare seg_pred and target for later calculating loss and acc:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ # Loss
+ loss = F.nll_loss(seg_pred.contiguous(), target.contiguous())
+
+ # accuracy:
+        pred_choice = seg_pred.data.max(1)[1]  # b*n
+        correct = pred_choice.eq(target.data).sum()  # torch.int64: total number of correctly predicted points
+
+ loss = torch.mean(loss)
+ shape_ious += batch_ious.item() # count the sum of ious in each iteration
+ count += batch_size # count the total number of samples in each iteration
+ test_loss += loss.item() * batch_size
+ accuracy.append(correct.item() / (batch_size * num_point)) # append the accuracy of each iteration
+
+ for cat_idx in range(16):
+        if final_total_per_cat_seen[cat_idx] > 0:  # this category appeared in the test set
+            final_total_per_cat_iou[cat_idx] = final_total_per_cat_iou[cat_idx] / final_total_per_cat_seen[cat_idx]  # average iou over all samples of this category
+
+ metrics['accuracy'] = np.mean(accuracy)
+ metrics['shape_avg_iou'] = shape_ious * 1.0 / count
+
+    outstr = 'Test %d, loss: %f, test acc: %f, test ins_iou: %f' % (epoch + 1, test_loss * 1.0 / count,
+                                                                    metrics['accuracy'], metrics['shape_avg_iou'])
+
+ io.cprint(outstr)
+
+ return metrics, final_total_per_cat_iou
+
+
+def test(args, io):
+ # Dataloader
+ # test_data = PartNormalDataset(npoints=2048, split='test', normalize=False)
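+    # NOTE: `sub` selects a single ShapeNet-C corruption split; 'rotate_4' presumably
+    # denotes the rotation corruption at severity level 4. Swap it to evaluate others.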
+ test_data = ShapeNetC(partition='shapenet-c', sub='rotate_4', class_choice=None)
+ print("The number of test data is:%d", len(test_data))
+
+ test_loader = DataLoader(test_data, batch_size=args.test_batch_size, shuffle=False, num_workers=6, drop_last=False)
+
+ # Try to load models
+ num_part = 50
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ model = GDANet(num_part).to(device)
+ # io.cprint(str(model))
+
+ from collections import OrderedDict
+ state_dict = torch.load("/mnt/lustre/ldkong/models/GDANet/checkpoints/%s/best_%s_model.pth" % (args.exp_name, args.model_type),
+ map_location=torch.device('cpu'))['model']
+
+ new_state_dict = OrderedDict()
+ for layer in state_dict:
+ new_state_dict[layer.replace('module.', '')] = state_dict[layer]
+ model.load_state_dict(new_state_dict)
+
+ model.eval()
+ num_part = 50
+ num_classes = 16
+ metrics = defaultdict(lambda: list())
+ hist_acc = []
+ shape_ious = []
+ total_per_cat_iou = np.zeros((16)).astype(np.float32)
+ total_per_cat_seen = np.zeros((16)).astype(np.int32)
+
+ for batch_id, (points, label, target) in tqdm(enumerate(test_loader), total=len(test_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.transpose(2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze().cuda(non_blocking=True), target.cuda(non_blocking=True)
+
+ with torch.no_grad():
+ seg_pred = model(points, to_categorical(label, num_classes)) # b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+            shape_ious += batch_shapeious  # list +=, equivalent to .extend
+
+ # per category iou at each batch_size:
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx]
+ total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx]
+ total_per_cat_seen[cur_gt_label] += 1
+
+ # accuracy:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ pred_choice = seg_pred.data.max(1)[1]
+ correct = pred_choice.eq(target.data).cpu().sum()
+ metrics['accuracy'].append(correct.item() / (batch_size * num_point))
+
+ hist_acc += metrics['accuracy']
+ metrics['accuracy'] = np.mean(hist_acc)
+ metrics['shape_avg_iou'] = np.mean(shape_ious)
+ for cat_idx in range(16):
+ if total_per_cat_seen[cat_idx] > 0:
+ total_per_cat_iou[cat_idx] = total_per_cat_iou[cat_idx] / total_per_cat_seen[cat_idx]
+
+ # First we need to calculate the iou of each class and the avg class iou:
+ class_iou = 0
+ for cat_idx in range(16):
+ class_iou += total_per_cat_iou[cat_idx]
+ io.cprint(classes_str[cat_idx] + ' iou: ' + str(total_per_cat_iou[cat_idx])) # print the iou of each class
+ avg_class_iou = class_iou / 16
+    outstr = 'Test :: test acc: %f, test class mIOU: %f, test instance mIOU: %f' % (metrics['accuracy'], avg_class_iou, metrics['shape_avg_iou'])
+ io.cprint(outstr)
+
+
+if __name__ == "__main__":
+ # Training settings
+ parser = argparse.ArgumentParser(description='3D Shape Part Segmentation')
+ parser.add_argument('--exp_name', type=str, default='GDANet', metavar='N',
+ help='Name of the experiment')
+    parser.add_argument('--batch_size', type=int, default=64, metavar='batch_size',
+                        help='Size of batch')
+    parser.add_argument('--test_batch_size', type=int, default=16, metavar='batch_size',
+                        help='Size of batch')
+    parser.add_argument('--epochs', type=int, default=350, metavar='N',
+                        help='number of epochs to train')
+    parser.add_argument('--use_sgd', type=bool, default=False,
+                        help='Use SGD (note: type=bool treats any non-empty string, even "False", as True)')
+ parser.add_argument('--scheduler', type=str, default='step',
+ help='lr scheduler')
+ parser.add_argument('--step', type=int, default=40,
+ help='lr decay step')
+ parser.add_argument('--lr', type=float, default=0.003, metavar='LR',
+ help='learning rate')
+ parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
+ help='SGD momentum (default: 0.9)')
+    parser.add_argument('--no_cuda', type=bool, default=False,
+                        help='disables CUDA training')
+ parser.add_argument('--manual_seed', type=int, default=1, metavar='S',
+ help='random seed (default: 1)')
+ parser.add_argument('--eval', type=bool, default=False,
+ help='evaluate the model')
+ parser.add_argument('--num_points', type=int, default=2048,
+ help='num of points to use')
+ parser.add_argument('--resume', type=bool, default=False,
+ help='Resume training or not')
+ parser.add_argument('--model_type', type=str, default='insiou',
+ help='choose to test the best insiou/clsiou/acc model (options: insiou, clsiou, acc)')
+
+ args = parser.parse_args()
+
+ _init_()
+
+ if not args.eval:
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_train.log' % (args.exp_name))
+ else:
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_test.log' % (args.exp_name))
+ io.cprint(str(args))
+
+ if args.manual_seed is not None:
+ random.seed(args.manual_seed)
+ np.random.seed(args.manual_seed)
+ torch.manual_seed(args.manual_seed)
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+
+ if args.cuda:
+ io.cprint('Using GPU')
+ if args.manual_seed is not None:
+ torch.cuda.manual_seed(args.manual_seed)
+ torch.cuda.manual_seed_all(args.manual_seed)
+ else:
+ io.cprint('Using CPU')
+
+ if not args.eval:
+ # train(args, io)
+ pass
+ else:
+ test(args, io)
+
diff --git a/zoo/GDANet/part_seg/test.sh b/zoo/GDANet/part_seg/test.sh
new file mode 100644
index 0000000..511ee85
--- /dev/null
+++ b/zoo/GDANet/part_seg/test.sh
@@ -0,0 +1,5 @@
+CUDA_VISIBLE_DEVICES=3 python test.py \
+ --eval True \
+ --exp_name robustnesstest_GDANet \
+ --model_type insiou \
+ --test_batch_size 16
\ No newline at end of file
diff --git a/zoo/GDANet/part_seg/train.sh b/zoo/GDANet/part_seg/train.sh
new file mode 100644
index 0000000..b72b60e
--- /dev/null
+++ b/zoo/GDANet/part_seg/train.sh
@@ -0,0 +1,2 @@
+CUDA_VISIBLE_DEVICES=0,2 python main_ptseg.py \
+ --exp_name robustnesstest_GDANet
\ No newline at end of file
diff --git a/zoo/GDANet/provider.py b/zoo/GDANet/provider.py
new file mode 100644
index 0000000..e51c73f
--- /dev/null
+++ b/zoo/GDANet/provider.py
@@ -0,0 +1,466 @@
+'''
+RSMix:
+@Author: Dogyoon Lee
+@Contact: dogyoonlee@gmail.com
+@File: provider.py
+@Time: 2020/11/23 13:46 PM
+'''
+
+
+import os
+import sys
+import numpy as np
+import h5py
+# import tensorflow as tf
+import random
+
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(BASE_DIR)
+
+# def set_random_seed(seed=1):
+# # set random_seed
+# random.seed(seed)
+# np.random.seed(seed)
+# tf.set_random_seed(seed)
+
+def shuffle_data(data, labels):
+ """ Shuffle data and labels.
+ Input:
+ data: B,N,... numpy array
+ label: B,... numpy array
+ Return:
+ shuffled data, label and shuffle indices
+ """
+ idx = np.arange(len(labels))
+ np.random.shuffle(idx)
+ return data[idx, ...], labels[idx], idx
+
+def shuffle_points(batch_data):
+ """ Shuffle orders of points in each point cloud -- changes FPS behavior.
+ Use the same shuffling idx for the entire batch.
+ Input:
+ BxNxC array
+ Output:
+ BxNxC array
+ """
+ idx = np.arange(batch_data.shape[1])
+ np.random.shuffle(idx)
+ return batch_data[:,idx,:]
+
+def rotate_point_cloud(batch_data):
+ """ Randomly rotate the point clouds to augument the dataset
+ rotation is per shape based along up direction
+ Input:
+ BxNx3 array, original batch of point clouds
+ Return:
+ BxNx3 array, rotated batch of point clouds
+ """
+ rotated_data = np.zeros(batch_data.shape, dtype=np.float32)
+ for k in range(batch_data.shape[0]):
+ rotation_angle = np.random.uniform() * 2 * np.pi
+ cosval = np.cos(rotation_angle)
+ sinval = np.sin(rotation_angle)
+ rotation_matrix = np.array([[cosval, 0, sinval],
+ [0, 1, 0],
+ [-sinval, 0, cosval]])
+ shape_pc = batch_data[k, ...]
+ rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)
+ return rotated_data
+
+def rotate_point_cloud_z(batch_data):
+ """ Randomly rotate the point clouds to augument the dataset
+ rotation is per shape based along up direction
+ Input:
+ BxNx3 array, original batch of point clouds
+ Return:
+ BxNx3 array, rotated batch of point clouds
+ """
+ rotated_data = np.zeros(batch_data.shape, dtype=np.float32)
+ for k in range(batch_data.shape[0]):
+ rotation_angle = np.random.uniform() * 2 * np.pi
+ cosval = np.cos(rotation_angle)
+ sinval = np.sin(rotation_angle)
+ rotation_matrix = np.array([[cosval, sinval, 0],
+ [-sinval, cosval, 0],
+ [0, 0, 1]])
+ shape_pc = batch_data[k, ...]
+ rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)
+ return rotated_data
+
+def rotate_point_cloud_with_normal(batch_xyz_normal):
+ ''' Randomly rotate XYZ, normal point cloud.
+ Input:
+ batch_xyz_normal: B,N,6, first three channels are XYZ, last 3 all normal
+ Output:
+ B,N,6, rotated XYZ, normal point cloud
+ '''
+ for k in range(batch_xyz_normal.shape[0]):
+ rotation_angle = np.random.uniform() * 2 * np.pi
+ cosval = np.cos(rotation_angle)
+ sinval = np.sin(rotation_angle)
+ rotation_matrix = np.array([[cosval, 0, sinval],
+ [0, 1, 0],
+ [-sinval, 0, cosval]])
+ shape_pc = batch_xyz_normal[k,:,0:3]
+ shape_normal = batch_xyz_normal[k,:,3:6]
+ batch_xyz_normal[k,:,0:3] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)
+ batch_xyz_normal[k,:,3:6] = np.dot(shape_normal.reshape((-1, 3)), rotation_matrix)
+ return batch_xyz_normal
+
+def rotate_perturbation_point_cloud_with_normal(batch_data, angle_sigma=0.06, angle_clip=0.18):
+ """ Randomly perturb the point clouds by small rotations
+ Input:
+ BxNx6 array, original batch of point clouds and point normals
+ Return:
+ BxNx3 array, rotated batch of point clouds
+ """
+ rotated_data = np.zeros(batch_data.shape, dtype=np.float32)
+ for k in list(range(batch_data.shape[0])):
+ angles = np.clip(angle_sigma*np.random.randn(3), -angle_clip, angle_clip)
+ Rx = np.array([[1,0,0],
+ [0,np.cos(angles[0]),-np.sin(angles[0])],
+ [0,np.sin(angles[0]),np.cos(angles[0])]])
+ Ry = np.array([[np.cos(angles[1]),0,np.sin(angles[1])],
+ [0,1,0],
+ [-np.sin(angles[1]),0,np.cos(angles[1])]])
+ Rz = np.array([[np.cos(angles[2]),-np.sin(angles[2]),0],
+ [np.sin(angles[2]),np.cos(angles[2]),0],
+ [0,0,1]])
+ R = np.dot(Rz, np.dot(Ry,Rx))
+ shape_pc = batch_data[k,:,0:3]
+ shape_normal = batch_data[k,:,3:6]
+ rotated_data[k,:,0:3] = np.dot(shape_pc.reshape((-1, 3)), R)
+ rotated_data[k,:,3:6] = np.dot(shape_normal.reshape((-1, 3)), R)
+ return rotated_data
+
+
+def rotate_point_cloud_by_angle(batch_data, rotation_angle):
+ """ Rotate the point cloud along up direction with certain angle.
+ Input:
+ BxNx3 array, original batch of point clouds
+ Return:
+ BxNx3 array, rotated batch of point clouds
+ """
+ rotated_data = np.zeros(batch_data.shape, dtype=np.float32)
+ for k in list(range(batch_data.shape[0])):
+ #rotation_angle = np.random.uniform() * 2 * np.pi
+ cosval = np.cos(rotation_angle)
+ sinval = np.sin(rotation_angle)
+ rotation_matrix = np.array([[cosval, 0, sinval],
+ [0, 1, 0],
+ [-sinval, 0, cosval]])
+ shape_pc = batch_data[k,:,0:3]
+ rotated_data[k,:,0:3] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)
+ return rotated_data
+
+def rotate_point_cloud_by_angle_with_normal(batch_data, rotation_angle):
+ """ Rotate the point cloud along up direction with certain angle.
+ Input:
+ BxNx6 array, original batch of point clouds with normal
+ scalar, angle of rotation
+ Return:
+        BxNx6 array, rotated batch of point clouds with normals
+ """
+ rotated_data = np.zeros(batch_data.shape, dtype=np.float32)
+ for k in range(batch_data.shape[0]):
+ #rotation_angle = np.random.uniform() * 2 * np.pi
+ cosval = np.cos(rotation_angle)
+ sinval = np.sin(rotation_angle)
+ rotation_matrix = np.array([[cosval, 0, sinval],
+ [0, 1, 0],
+ [-sinval, 0, cosval]])
+ shape_pc = batch_data[k,:,0:3]
+ shape_normal = batch_data[k,:,3:6]
+ rotated_data[k,:,0:3] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)
+ rotated_data[k,:,3:6] = np.dot(shape_normal.reshape((-1,3)), rotation_matrix)
+ return rotated_data
+
+
+
+def rotate_perturbation_point_cloud(batch_data, angle_sigma=0.06, angle_clip=0.18):
+ """ Randomly perturb the point clouds by small rotations
+ Input:
+ BxNx3 array, original batch of point clouds
+ Return:
+ BxNx3 array, rotated batch of point clouds
+ """
+ rotated_data = np.zeros(batch_data.shape, dtype=np.float32)
+ for k in range(batch_data.shape[0]):
+ angles = np.clip(angle_sigma*np.random.randn(3), -angle_clip, angle_clip)
+ Rx = np.array([[1,0,0],
+ [0,np.cos(angles[0]),-np.sin(angles[0])],
+ [0,np.sin(angles[0]),np.cos(angles[0])]])
+ Ry = np.array([[np.cos(angles[1]),0,np.sin(angles[1])],
+ [0,1,0],
+ [-np.sin(angles[1]),0,np.cos(angles[1])]])
+ Rz = np.array([[np.cos(angles[2]),-np.sin(angles[2]),0],
+ [np.sin(angles[2]),np.cos(angles[2]),0],
+ [0,0,1]])
+ R = np.dot(Rz, np.dot(Ry,Rx))
+ shape_pc = batch_data[k, ...]
+ rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), R)
+ return rotated_data
+
+
+# def jitter_point_cloud(batch_data, sigma=0.01, clip=0.05):
+def jitter_point_cloud(batch_data, sigma=0.01, clip=0.02):
+ """ Randomly jitter points. jittering is per point.
+ Input:
+ BxNx3 array, original batch of point clouds
+ Return:
+ BxNx3 array, jittered batch of point clouds
+ """
+ B, N, C = batch_data.shape
+ assert(clip > 0)
+ jittered_data = np.clip(sigma * np.random.randn(B, N, C), -1*clip, clip)
+ jittered_data += batch_data
+ return jittered_data
+
+# def shift_point_cloud(batch_data, shift_range=0.1):
+def shift_point_cloud(batch_data, shift_range=0.2):
+ """ Randomly shift point cloud. Shift is per point cloud.
+ Input:
+ BxNx3 array, original batch of point clouds
+ Return:
+ BxNx3 array, shifted batch of point clouds
+ """
+ B, N, C = batch_data.shape
+ shifts = np.random.uniform(-shift_range, shift_range, (B,3))
+ for batch_index in range(B):
+ batch_data[batch_index,:,:] += shifts[batch_index,:]
+ return batch_data
+
+
+# def random_scale_point_cloud(batch_data, scale_low=0.8, scale_high=1.25):
+def random_scale_point_cloud(batch_data, scale_low=2./3., scale_high=3./2.):
+ """ Randomly scale the point cloud. Scale is per point cloud.
+ Input:
+ BxNx3 array, original batch of point clouds
+ Return:
+ BxNx3 array, scaled batch of point clouds
+ """
+ B, N, C = batch_data.shape
+ scales = np.random.uniform(scale_low, scale_high, B)
+ for batch_index in range(B):
+ batch_data[batch_index,:,:] *= scales[batch_index]
+ return batch_data
+
+def random_point_dropout(batch_pc, max_dropout_ratio=0.875):
+ ''' batch_pc: BxNx3 '''
+ for b in range(batch_pc.shape[0]):
+ dropout_ratio = np.random.random()*max_dropout_ratio # 0~0.875
+ drop_idx = np.where(np.random.random((batch_pc.shape[1]))<=dropout_ratio)[0]
+ if len(drop_idx)>0:
+ batch_pc[b,drop_idx,:] = batch_pc[b,0,:] # set to the first point
+ return batch_pc
+
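+# Note: dropped points are overwritten with the first point instead of being removed,
+# so each cloud keeps a fixed size of N points (convenient for fixed-shape batching).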
+
+def getDataFiles(list_filename):
+ return [line.rstrip() for line in open(list_filename)]
+
+def load_h5(h5_filename):
+    f = h5py.File(h5_filename, 'r')
+ data = f['data'][:]
+ label = f['label'][:]
+ return (data, label)
+
+def loadDataFile(filename):
+ return load_h5(filename)
+
+
+# for rsmix @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
+def knn_points(k, xyz, query, nsample=512):
+ B, N, C = xyz.shape
+ _, S, _ = query.shape # S=1
+
+ tmp_idx = np.arange(N)
+ group_idx = np.repeat(tmp_idx[np.newaxis,np.newaxis,:], B, axis=0)
+    sqrdists = square_distance(query, xyz)  # B,1,N  # squared distances
+ tmp = np.sort(sqrdists, axis=2)
+ knn_dist = np.zeros((B,1))
+ for i in range(B):
+ knn_dist[i][0] = tmp[i][0][k]
+ group_idx[i][sqrdists[i]>knn_dist[i][0]]=N
+ # group_idx[sqrdists > radius ** 2] = N
+ # print("group idx : \n",group_idx)
+ # group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample] # for torch.tensor
+ group_idx = np.sort(group_idx, axis=2)[:, :, :nsample]
+ # group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])
+ tmp_idx = group_idx[:,:,0]
+ group_first = np.repeat(tmp_idx[:,np.newaxis,:], nsample, axis=2)
+ # repeat the first value of the idx in each batch
+ mask = group_idx == N
+ group_idx[mask] = group_first[mask]
+ return group_idx
+
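+# Indices beyond the k-th neighbour are first set to the sentinel value N, sorted to the
+# end, then replaced by each row's nearest index, so every row holds nsample valid ids.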
+def cut_points_knn(data_batch, idx, radius, nsample=512, k=512):
+ """
+ input
+ points : BxNx3(=6 with normal)
+ idx : Bx1 one scalar(int) between 0~len(points)
+
+ output
+ idx : Bxn_sample
+ """
+ B, N, C = data_batch.shape
+ B, S = idx.shape
+ query_points = np.zeros((B,1,C))
+ # print("idx : \n",idx)
+ for i in range(B):
+ query_points[i][0]=data_batch[i][idx[i][0]] # Bx1x3(=6 with normal)
+ # B x n_sample
+ group_idx = knn_points(k=k, xyz=data_batch[:,:,:3], query=query_points[:,:,:3], nsample=nsample)
+ return group_idx, query_points # group_idx: 16x?x6, query_points: 16x1x6
+
+def cut_points(data_batch, idx, radius, nsample=512):
+ """
+ input
+ points : BxNx3(=6 with normal)
+ idx : Bx1 one scalar(int) between 0~len(points)
+
+ output
+ idx : Bxn_sample
+ """
+ B, N, C = data_batch.shape
+ B, S = idx.shape
+ query_points = np.zeros((B,1,C))
+ # print("idx : \n",idx)
+ for i in range(B):
+ query_points[i][0]=data_batch[i][idx[i][0]] # Bx1x3(=6 with normal)
+ # B x n_sample
+ group_idx = query_ball_point_for_rsmix(radius, nsample, data_batch[:,:,:3], query_points[:,:,:3])
+ return group_idx, query_points # group_idx: 16x?x6, query_points: 16x1x6
+
+
+def query_ball_point_for_rsmix(radius, nsample, xyz, new_xyz):
+ """
+ Input:
+ radius: local region radius
+ nsample: max sample number in local region
+ xyz: all points, [B, N, 3]
+ new_xyz: query points, [B, S, 3]
+ Return:
+ group_idx: grouped points index, [B, S, nsample], S=1
+ """
+ # device = xyz.device
+ B, N, C = xyz.shape
+ _, S, _ = new_xyz.shape
+ # group_idx = torch.arange(N, dtype=torch.long).to(device).view(1, 1, N).repeat([B, S, 1])
+ tmp_idx = np.arange(N)
+ group_idx = np.repeat(tmp_idx[np.newaxis,np.newaxis,:], B, axis=0)
+
+ sqrdists = square_distance(new_xyz, xyz)
+ group_idx[sqrdists > radius ** 2] = N
+
+ # group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample] # for torch.tensor
+ group_idx = np.sort(group_idx, axis=2)[:, :, :nsample]
+ # group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])
+ tmp_idx = group_idx[:,:,0]
+ group_first = np.repeat(tmp_idx[:,np.newaxis,:], nsample, axis=2)
+ # repeat the first value of the idx in each batch
+ mask = group_idx == N
+ group_idx[mask] = group_first[mask]
+ return group_idx
+
+def square_distance(src, dst):
+ """
+    Calculate squared Euclidean distance between each pair of points.
+
+    src^T * dst = xn * xm + yn * ym + zn * zm;
+ sum(src^2, dim=-1) = xn*xn + yn*yn + zn*zn;
+ sum(dst^2, dim=-1) = xm*xm + ym*ym + zm*zm;
+ dist = (xn - xm)^2 + (yn - ym)^2 + (zn - zm)^2
+ = sum(src**2,dim=-1)+sum(dst**2,dim=-1)-2*src^T*dst
+
+ Input:
+ src: source points, [B, N, C]
+ dst: target points, [B, M, C]
+ Output:
+ dist: per-point square distance, [B, N, M]
+ """
+ B, N, _ = src.shape
+ _, M, _ = dst.shape
+ # dist = -2 * torch.matmul(src, dst.permute(0, 2, 1))
+ # dist += torch.sum(src ** 2, -1).view(B, N, 1)
+ # dist += torch.sum(dst ** 2, -1).view(B, 1, M)
+
+ dist = -2 * np.matmul(src, dst.transpose(0, 2, 1))
+ dist += np.sum(src ** 2, -1).reshape(B, N, 1)
+ dist += np.sum(dst ** 2, -1).reshape(B, 1, M)
+
+ return dist
+
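+# Sanity-check sketch (assumed shapes): for src = dst of shape (1, N, 3), the diagonal
+# of square_distance(src, dst)[0] is ~0, and entry (i, j) equals ||src_i - dst_j||^2.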
+
+def pts_num_ctrl(pts_erase_idx, pts_add_idx):
+ '''
+ input : pts - to erase
+ pts - to add
+    output: pts - to add (number controlled)
+ '''
+ if len(pts_erase_idx)>=len(pts_add_idx):
+ num_diff = len(pts_erase_idx)-len(pts_add_idx)
+ if num_diff == 0:
+ pts_add_idx_ctrled = pts_add_idx
+ else:
+ pts_add_idx_ctrled = np.append(pts_add_idx, pts_add_idx[np.random.randint(0, len(pts_add_idx), size=num_diff)])
+ else:
+ pts_add_idx_ctrled = np.sort(np.random.choice(pts_add_idx, size=len(pts_erase_idx), replace=False))
+ return pts_add_idx_ctrled
+
+def rsmix(data_batch, label_batch, beta=1.0, n_sample=512, KNN=False):
+ cut_rad = np.random.beta(beta, beta)
+ rand_index = np.random.choice(data_batch.shape[0],data_batch.shape[0], replace=False) # label dim : (16,) for model
+
+    if len(label_batch.shape) == 1:
+ label_batch = np.expand_dims(label_batch, axis=1)
+
+ label_a = label_batch[:,0]
+ label_b = label_batch[rand_index][:,0]
+
+ data_batch_rand = data_batch[rand_index] # BxNx3(with normal=6)
+ rand_idx_1 = np.random.randint(0,data_batch.shape[1], (data_batch.shape[0],1))
+ rand_idx_2 = np.random.randint(0,data_batch.shape[1], (data_batch.shape[0],1))
+ if KNN:
+ knn_para = min(int(np.ceil(cut_rad*n_sample)),n_sample)
+ pts_erase_idx, query_point_1 = cut_points_knn(data_batch, rand_idx_1, cut_rad, nsample=n_sample, k=knn_para) # B x num_points_in_radius_1 x 3(or 6)
+ pts_add_idx, query_point_2 = cut_points_knn(data_batch_rand, rand_idx_2, cut_rad, nsample=n_sample, k=knn_para) # B x num_points_in_radius_2 x 3(or 6)
+ else:
+ pts_erase_idx, query_point_1 = cut_points(data_batch, rand_idx_1, cut_rad, nsample=n_sample) # B x num_points_in_radius_1 x 3(or 6)
+ pts_add_idx, query_point_2 = cut_points(data_batch_rand, rand_idx_2, cut_rad, nsample=n_sample) # B x num_points_in_radius_2 x 3(or 6)
+
+ query_dist = query_point_1[:,:,:3] - query_point_2[:,:,:3]
+
+ pts_replaced = np.zeros((1,data_batch.shape[1],data_batch.shape[2]))
+ lam = np.zeros(data_batch.shape[0],dtype=float)
+
+ for i in range(data_batch.shape[0]):
+ if pts_erase_idx[i][0][0]==data_batch.shape[1]:
+ tmp_pts_replaced = np.expand_dims(data_batch[i], axis=0)
+ lam_tmp = 0
+ elif pts_add_idx[i][0][0]==data_batch.shape[1]:
+ pts_erase_idx_tmp = np.unique(pts_erase_idx[i].reshape(n_sample,),axis=0)
+ tmp_pts_erased = np.delete(data_batch[i], pts_erase_idx_tmp, axis=0) # B x N-num_rad_1 x 3(or 6)
+ dup_points_idx = np.random.randint(0,len(tmp_pts_erased), size=len(pts_erase_idx_tmp))
+ tmp_pts_replaced = np.expand_dims(np.concatenate((tmp_pts_erased, data_batch[i][dup_points_idx]), axis=0), axis=0)
+ lam_tmp = 0
+ else:
+ pts_erase_idx_tmp = np.unique(pts_erase_idx[i].reshape(n_sample,),axis=0)
+ pts_add_idx_tmp = np.unique(pts_add_idx[i].reshape(n_sample,),axis=0)
+ pts_add_idx_ctrled_tmp = pts_num_ctrl(pts_erase_idx_tmp,pts_add_idx_tmp)
+ tmp_pts_erased = np.delete(data_batch[i], pts_erase_idx_tmp, axis=0) # B x N-num_rad_1 x 3(or 6)
+ # input("INPUT : ")
+ tmp_pts_to_add = np.take(data_batch_rand[i], pts_add_idx_ctrled_tmp, axis=0)
+ tmp_pts_to_add[:,:3] = query_dist[i]+tmp_pts_to_add[:,:3]
+
+ tmp_pts_replaced = np.expand_dims(np.vstack((tmp_pts_erased,tmp_pts_to_add)), axis=0)
+
+ lam_tmp = len(pts_add_idx_ctrled_tmp)/(len(pts_add_idx_ctrled_tmp)+len(tmp_pts_erased))
+
+ pts_replaced = np.concatenate((pts_replaced, tmp_pts_replaced),axis=0)
+ lam[i] = lam_tmp
+
+ data_batch_mixed = np.delete(pts_replaced, [0], axis=0)
+
+
+ return data_batch_mixed, lam, label_a, label_b
+
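+# Minimal usage sketch (`points`, `labels`, `pred`, and `criterion` are hypothetical
+# names; assumes a (B, N, 3) numpy batch and (B,) integer labels). rsmix returns a
+# per-sample mixing ratio `lam`, so the mixed loss is weighted per sample:
+#   mixed, lam, label_a, label_b = rsmix(points, labels, beta=1.0, n_sample=512, KNN=True)
+#   loss_i = (1 - lam[i]) * criterion(pred[i], label_a[i]) + lam[i] * criterion(pred[i], label_b[i])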
diff --git a/zoo/GDANet/rsmix_provider.py b/zoo/GDANet/rsmix_provider.py
new file mode 100644
index 0000000..1e0091b
--- /dev/null
+++ b/zoo/GDANet/rsmix_provider.py
@@ -0,0 +1,208 @@
+
+import os
+import sys
+import numpy as np
+import h5py
+# import tensorflow as tf
+import random
+
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(BASE_DIR)
+
+
+# for rsmix @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
+def knn_points(k, xyz, query, nsample=512):
+ B, N, C = xyz.shape
+ _, S, _ = query.shape # S=1
+
+ tmp_idx = np.arange(N)
+ group_idx = np.repeat(tmp_idx[np.newaxis,np.newaxis,:], B, axis=0)
+    sqrdists = square_distance(query, xyz)  # B,1,N  # squared distances
+ tmp = np.sort(sqrdists, axis=2)
+ knn_dist = np.zeros((B,1))
+ for i in range(B):
+ knn_dist[i][0] = tmp[i][0][k]
+ group_idx[i][sqrdists[i]>knn_dist[i][0]]=N
+ # group_idx[sqrdists > radius ** 2] = N
+ # print("group idx : \n",group_idx)
+ # group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample] # for torch.tensor
+ group_idx = np.sort(group_idx, axis=2)[:, :, :nsample]
+ # group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])
+ tmp_idx = group_idx[:,:,0]
+ group_first = np.repeat(tmp_idx[:,np.newaxis,:], nsample, axis=2)
+ # repeat the first value of the idx in each batch
+ mask = group_idx == N
+ group_idx[mask] = group_first[mask]
+ return group_idx
+
+def cut_points_knn(data_batch, idx, radius, nsample=512, k=512):
+ """
+ input
+ points : BxNx3(=6 with normal)
+ idx : Bx1 one scalar(int) between 0~len(points)
+
+ output
+ idx : Bxn_sample
+ """
+ B, N, C = data_batch.shape
+ B, S = idx.shape
+ query_points = np.zeros((B,1,C))
+ # print("idx : \n",idx)
+ for i in range(B):
+ query_points[i][0]=data_batch[i][idx[i][0]] # Bx1x3(=6 with normal)
+ # B x n_sample
+ group_idx = knn_points(k=k, xyz=data_batch[:,:,:3], query=query_points[:,:,:3], nsample=nsample)
+ return group_idx, query_points # group_idx: 16x?x6, query_points: 16x1x6
+
+def cut_points(data_batch, idx, radius, nsample=512):
+ """
+ input
+ points : BxNx3(=6 with normal)
+ idx : Bx1 one scalar(int) between 0~len(points)
+
+ output
+ idx : Bxn_sample
+ """
+ B, N, C = data_batch.shape
+ B, S = idx.shape
+ query_points = np.zeros((B,1,C))
+ # print("idx : \n",idx)
+ for i in range(B):
+ query_points[i][0]=data_batch[i][idx[i][0]] # Bx1x3(=6 with normal)
+ # B x n_sample
+ group_idx = query_ball_point_for_rsmix(radius, nsample, data_batch[:,:,:3], query_points[:,:,:3])
+ return group_idx, query_points # group_idx: 16x?x6, query_points: 16x1x6
+
+
+def query_ball_point_for_rsmix(radius, nsample, xyz, new_xyz):
+ """
+ Input:
+ radius: local region radius
+ nsample: max sample number in local region
+ xyz: all points, [B, N, 3]
+ new_xyz: query points, [B, S, 3]
+ Return:
+ group_idx: grouped points index, [B, S, nsample], S=1
+ """
+ # device = xyz.device
+ B, N, C = xyz.shape
+ _, S, _ = new_xyz.shape
+ # group_idx = torch.arange(N, dtype=torch.long).to(device).view(1, 1, N).repeat([B, S, 1])
+ tmp_idx = np.arange(N)
+ group_idx = np.repeat(tmp_idx[np.newaxis,np.newaxis,:], B, axis=0)
+
+ sqrdists = square_distance(new_xyz, xyz)
+ group_idx[sqrdists > radius ** 2] = N
+
+ # group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample] # for torch.tensor
+ group_idx = np.sort(group_idx, axis=2)[:, :, :nsample]
+ # group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])
+ tmp_idx = group_idx[:,:,0]
+ group_first = np.repeat(tmp_idx[:,np.newaxis,:], nsample, axis=2)
+ # repeat the first value of the idx in each batch
+ mask = group_idx == N
+ group_idx[mask] = group_first[mask]
+ return group_idx
+
+def square_distance(src, dst):
+ """
+    Calculate squared Euclidean distance between each pair of points.
+
+    src^T * dst = xn * xm + yn * ym + zn * zm;
+ sum(src^2, dim=-1) = xn*xn + yn*yn + zn*zn;
+ sum(dst^2, dim=-1) = xm*xm + ym*ym + zm*zm;
+ dist = (xn - xm)^2 + (yn - ym)^2 + (zn - zm)^2
+ = sum(src**2,dim=-1)+sum(dst**2,dim=-1)-2*src^T*dst
+
+ Input:
+ src: source points, [B, N, C]
+ dst: target points, [B, M, C]
+ Output:
+ dist: per-point square distance, [B, N, M]
+ """
+ B, N, _ = src.shape
+ _, M, _ = dst.shape
+ # dist = -2 * torch.matmul(src, dst.permute(0, 2, 1))
+ # dist += torch.sum(src ** 2, -1).view(B, N, 1)
+ # dist += torch.sum(dst ** 2, -1).view(B, 1, M)
+
+ dist = -2 * np.matmul(src, dst.transpose(0, 2, 1))
+ dist += np.sum(src ** 2, -1).reshape(B, N, 1)
+ dist += np.sum(dst ** 2, -1).reshape(B, 1, M)
+
+ return dist
+
+
+def pts_num_ctrl(pts_erase_idx, pts_add_idx):
+ '''
+ input : pts - to erase
+ pts - to add
+    output: pts - to add (number controlled)
+ '''
+ if len(pts_erase_idx)>=len(pts_add_idx):
+ num_diff = len(pts_erase_idx)-len(pts_add_idx)
+ if num_diff == 0:
+ pts_add_idx_ctrled = pts_add_idx
+ else:
+ pts_add_idx_ctrled = np.append(pts_add_idx, pts_add_idx[np.random.randint(0, len(pts_add_idx), size=num_diff)])
+ else:
+ pts_add_idx_ctrled = np.sort(np.random.choice(pts_add_idx, size=len(pts_erase_idx), replace=False))
+ return pts_add_idx_ctrled
+
+def rsmix(data_batch, label_batch, beta=1.0, n_sample=512, KNN=False):
+ cut_rad = np.random.beta(beta, beta)
+ rand_index = np.random.choice(data_batch.shape[0],data_batch.shape[0], replace=False) # label dim : (16,) for model
+
+    if len(label_batch.shape) == 1:
+ label_batch = np.expand_dims(label_batch, axis=1)
+
+ label_a = label_batch[:,0]
+ label_b = label_batch[rand_index][:,0]
+
+ data_batch_rand = data_batch[rand_index] # BxNx3(with normal=6)
+ rand_idx_1 = np.random.randint(0,data_batch.shape[1], (data_batch.shape[0],1))
+ rand_idx_2 = np.random.randint(0,data_batch.shape[1], (data_batch.shape[0],1))
+ if KNN:
+ knn_para = min(int(np.ceil(cut_rad*n_sample)),n_sample)
+ pts_erase_idx, query_point_1 = cut_points_knn(data_batch, rand_idx_1, cut_rad, nsample=n_sample, k=knn_para) # B x num_points_in_radius_1 x 3(or 6)
+ pts_add_idx, query_point_2 = cut_points_knn(data_batch_rand, rand_idx_2, cut_rad, nsample=n_sample, k=knn_para) # B x num_points_in_radius_2 x 3(or 6)
+ else:
+ pts_erase_idx, query_point_1 = cut_points(data_batch, rand_idx_1, cut_rad, nsample=n_sample) # B x num_points_in_radius_1 x 3(or 6)
+ pts_add_idx, query_point_2 = cut_points(data_batch_rand, rand_idx_2, cut_rad, nsample=n_sample) # B x num_points_in_radius_2 x 3(or 6)
+
+ query_dist = query_point_1[:,:,:3] - query_point_2[:,:,:3]
+
+ pts_replaced = np.zeros((1,data_batch.shape[1],data_batch.shape[2]))
+ lam = np.zeros(data_batch.shape[0],dtype=float)
+
+ for i in range(data_batch.shape[0]):
+ if pts_erase_idx[i][0][0]==data_batch.shape[1]:
+ tmp_pts_replaced = np.expand_dims(data_batch[i], axis=0)
+ lam_tmp = 0
+ elif pts_add_idx[i][0][0]==data_batch.shape[1]:
+ pts_erase_idx_tmp = np.unique(pts_erase_idx[i].reshape(n_sample,),axis=0)
+ tmp_pts_erased = np.delete(data_batch[i], pts_erase_idx_tmp, axis=0) # B x N-num_rad_1 x 3(or 6)
+ dup_points_idx = np.random.randint(0,len(tmp_pts_erased), size=len(pts_erase_idx_tmp))
+ tmp_pts_replaced = np.expand_dims(np.concatenate((tmp_pts_erased, data_batch[i][dup_points_idx]), axis=0), axis=0)
+ lam_tmp = 0
+ else:
+ pts_erase_idx_tmp = np.unique(pts_erase_idx[i].reshape(n_sample,),axis=0)
+ pts_add_idx_tmp = np.unique(pts_add_idx[i].reshape(n_sample,),axis=0)
+ pts_add_idx_ctrled_tmp = pts_num_ctrl(pts_erase_idx_tmp,pts_add_idx_tmp)
+ tmp_pts_erased = np.delete(data_batch[i], pts_erase_idx_tmp, axis=0) # B x N-num_rad_1 x 3(or 6)
+ # input("INPUT : ")
+ tmp_pts_to_add = np.take(data_batch_rand[i], pts_add_idx_ctrled_tmp, axis=0)
+ tmp_pts_to_add[:,:3] = query_dist[i]+tmp_pts_to_add[:,:3]
+
+ tmp_pts_replaced = np.expand_dims(np.vstack((tmp_pts_erased,tmp_pts_to_add)), axis=0)
+
+ lam_tmp = len(pts_add_idx_ctrled_tmp)/(len(pts_add_idx_ctrled_tmp)+len(tmp_pts_erased))
+
+ pts_replaced = np.concatenate((pts_replaced, tmp_pts_replaced),axis=0)
+ lam[i] = lam_tmp
+
+ data_batch_mixed = np.delete(pts_replaced, [0], axis=0)
+
+
+ return data_batch_mixed, lam, label_a, label_b
+
diff --git a/zoo/GDANet/util/GDANet_util.py b/zoo/GDANet/util/GDANet_util.py
new file mode 100755
index 0000000..5b8688e
--- /dev/null
+++ b/zoo/GDANet/util/GDANet_util.py
@@ -0,0 +1,212 @@
+import torch
+from torch import nn
+
+
+def knn(x, k):
+ inner = -2*torch.matmul(x.transpose(2, 1), x)
+ xx = torch.sum(x**2, dim=1, keepdim=True)
+ pairwise_distance = -xx - inner - xx.transpose(2, 1)
+
+ idx = pairwise_distance.topk(k=k, dim=-1)[1] # (batch_size, num_points, k)
+ return idx, pairwise_distance
+
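+# Note: `pairwise_distance` stores *negative* squared distances,
+# -||x_i - x_j||^2 = 2*x_i.x_j - ||x_i||^2 - ||x_j||^2, so topk picks the k nearest neighbours.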
+
+def local_operator(x, k):
+ batch_size = x.size(0)
+ num_points = x.size(2)
+ x = x.view(batch_size, -1, num_points)
+ idx, _ = knn(x, k=k)
+ device = torch.device('cuda')
+
+ idx_base = torch.arange(0, batch_size, device=device).view(-1, 1, 1) * num_points
+
+ idx = idx + idx_base
+
+ idx = idx.view(-1)
+
+ _, num_dims, _ = x.size()
+
+ x = x.transpose(2, 1).contiguous()
+
+ neighbor = x.view(batch_size * num_points, -1)[idx, :]
+
+ neighbor = neighbor.view(batch_size, num_points, k, num_dims)
+
+ x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)
+
+ feature = torch.cat((neighbor-x, neighbor), dim=3).permute(0, 3, 1, 2) # local and global all in
+
+ return feature
+
+
+def local_operator_withnorm(x, norm_plt, k):
+ batch_size = x.size(0)
+ num_points = x.size(2)
+ x = x.view(batch_size, -1, num_points)
+ norm_plt = norm_plt.view(batch_size, -1, num_points)
+ idx, _ = knn(x, k=k) # (batch_size, num_points, k)
+ device = torch.device('cuda')
+
+ idx_base = torch.arange(0, batch_size, device=device).view(-1, 1, 1) * num_points
+
+ idx = idx + idx_base
+
+ idx = idx.view(-1)
+
+ _, num_dims, _ = x.size()
+
+ x = x.transpose(2, 1).contiguous()
+ norm_plt = norm_plt.transpose(2, 1).contiguous()
+
+ neighbor = x.view(batch_size * num_points, -1)[idx, :]
+ neighbor_norm = norm_plt.view(batch_size * num_points, -1)[idx, :]
+
+ neighbor = neighbor.view(batch_size, num_points, k, num_dims)
+ neighbor_norm = neighbor_norm.view(batch_size, num_points, k, num_dims)
+
+ x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)
+
+ feature = torch.cat((neighbor-x, neighbor, neighbor_norm), dim=3).permute(0, 3, 1, 2) # 3c
+
+ return feature
+
+
+def GDM(x, M):
+ """
+ Geometry-Disentangle Module
+ M: number of disentangled points in both sharp and gentle variation components
+ """
+ k = 64 # number of neighbors to decide the range of j in Eq.(5)
+ tau = 0.2 # threshold in Eq.(2)
+ sigma = 2 # parameters of f (Gaussian function in Eq.(2))
+ ###############
+ """Graph Construction:"""
+ device = torch.device('cuda')
+ batch_size = x.size(0)
+ num_points = x.size(2)
+ x = x.view(batch_size, -1, num_points)
+
+ idx, p = knn(x, k=k) # p: -[(x1-x2)^2+...]
+
+ # here we add a tau
+ p1 = torch.abs(p)
+ p1 = torch.sqrt(p1)
+ mask = p1 < tau
+
+ # here we add a sigma
+ p = p / (sigma * sigma)
+ w = torch.exp(p) # b,n,n
+ w = torch.mul(mask.float(), w)
+
+ b = 1/torch.sum(w, dim=1)
+ b = b.reshape(batch_size, num_points, 1).repeat(1, 1, num_points)
+ c = torch.eye(num_points, num_points, device=device)
+ c = c.expand(batch_size, num_points, num_points)
+ D = b * c # b,n,n
+
+ A = torch.matmul(D, w) # normalized adjacency matrix A_hat
+
+ # Get Aij in a local area:
+ idx2 = idx.view(batch_size * num_points, -1)
+ idx_base2 = torch.arange(0, batch_size * num_points, device=device).view(-1, 1) * num_points
+ idx2 = idx2 + idx_base2
+
+ idx2 = idx2.reshape(batch_size * num_points, k)[:, 1:k]
+ idx2 = idx2.reshape(batch_size * num_points * (k - 1))
+ idx2 = idx2.view(-1)
+
+ A = A.view(-1)
+ A = A[idx2].reshape(batch_size, num_points, k - 1) # Aij: b,n,k
+ ###############
+ """Disentangling Point Clouds into Sharp(xs) and Gentle(xg) Variation Components:"""
+ idx_base = torch.arange(0, batch_size, device=device).view(-1, 1, 1) * num_points
+ idx = idx + idx_base
+ idx = idx.reshape(batch_size * num_points, k)[:, 1:k]
+ idx = idx.reshape(batch_size * num_points * (k - 1))
+
+ _, num_dims, _ = x.size()
+
+ x = x.transpose(2, 1).contiguous() # b,n,c
+ neighbor = x.view(batch_size * num_points, -1)[idx, :]
+ neighbor = neighbor.view(batch_size, num_points, k - 1, num_dims) # b,n,k,c
+ A = A.reshape(batch_size, num_points, k - 1, 1) # b,n,k,1
+ n = A.mul(neighbor) # b,n,k,c
+ n = torch.sum(n, dim=2) # b,n,c
+
+ pai = torch.norm(x - n, dim=-1).pow(2) # Eq.(5)
+ pais = pai.topk(k=M, dim=-1)[1] # first M points as the sharp variation component
+ paig = (-pai).topk(k=M, dim=-1)[1] # last M points as the gentle variation component
+
+ pai_base = torch.arange(0, batch_size, device=device).view(-1, 1) * num_points
+ indices = (pais + pai_base).view(-1)
+ indiceg = (paig + pai_base).view(-1)
+
+ xs = x.view(batch_size * num_points, -1)[indices, :]
+ xg = x.view(batch_size * num_points, -1)[indiceg, :]
+
+ xs = xs.view(batch_size, M, -1) # b,M,c
+ xg = xg.view(batch_size, M, -1) # b,M,c
+
+ return xs, xg
+
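+# Summary: GDM builds a kNN graph with Gaussian edge weights (Eq.(2)), row-normalizes it
+# as D^-1 * W, smooths every point by its weighted neighbours, and then uses the residual
+# norm ||x - n||^2 (Eq.(5)) to split off the M sharpest and M gentlest variation points.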
+
+class SGCAM(nn.Module):
+ """Sharp-Gentle Complementary Attention Module:"""
+ def __init__(self, in_channels, inter_channels=None, bn_layer=True):
+ super(SGCAM, self).__init__()
+
+ self.in_channels = in_channels
+ self.inter_channels = inter_channels
+
+ if self.inter_channels is None:
+ self.inter_channels = in_channels // 2
+ if self.inter_channels == 0:
+ self.inter_channels = 1
+
+ conv_nd = nn.Conv1d
+ bn = nn.BatchNorm1d
+
+ self.g = conv_nd(in_channels=self.in_channels, out_channels=self.inter_channels,
+ kernel_size=1, stride=1, padding=0)
+
+ if bn_layer:
+ self.W = nn.Sequential(
+ conv_nd(in_channels=self.inter_channels, out_channels=self.in_channels,
+ kernel_size=1, stride=1, padding=0),
+ bn(self.in_channels)
+ )
+            nn.init.constant_(self.W[1].weight, 0)
+            nn.init.constant_(self.W[1].bias, 0)
+ else:
+ self.W = conv_nd(in_channels=self.inter_channels, out_channels=self.in_channels,
+ kernel_size=1, stride=1, padding=0)
+            nn.init.constant_(self.W.weight, 0)
+            nn.init.constant_(self.W.bias, 0)
+
+ self.theta = conv_nd(in_channels=self.in_channels, out_channels=self.inter_channels,
+ kernel_size=1, stride=1, padding=0)
+
+ self.phi = conv_nd(in_channels=self.in_channels, out_channels=self.inter_channels,
+ kernel_size=1, stride=1, padding=0)
+
+ def forward(self, x, x_2):
+ batch_size = x.size(0)
+
+ g_x = self.g(x_2).view(batch_size, self.inter_channels, -1)
+ g_x = g_x.permute(0, 2, 1)
+
+ theta_x = self.theta(x).view(batch_size, self.inter_channels, -1)
+ theta_x = theta_x.permute(0, 2, 1)
+ phi_x = self.phi(x_2).view(batch_size, self.inter_channels, -1)
+ W = torch.matmul(theta_x, phi_x) # Attention Matrix
+ N = W.size(-1)
+ W_div_C = W / N
+
+ y = torch.matmul(W_div_C, g_x)
+ y = y.permute(0, 2, 1).contiguous()
+ y = y.view(batch_size, self.inter_channels, *x.size()[2:])
+ W_y = self.W(y)
+ y = W_y + x
+
+ return y
+
diff --git a/zoo/GDANet/util/__init__.py b/zoo/GDANet/util/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/zoo/GDANet/util/data_util.py b/zoo/GDANet/util/data_util.py
new file mode 100755
index 0000000..24734cf
--- /dev/null
+++ b/zoo/GDANet/util/data_util.py
@@ -0,0 +1,165 @@
+import glob
+import h5py
+import numpy as np
+from torch.utils.data import Dataset
+import os
+import json
+from PointWOLF import PointWOLF
+
+
+def load_data(partition):
+ all_data = []
+ all_label = []
+ for h5_name in glob.glob('./data/modelnet40_ply_hdf5_2048/ply_data_%s*.h5' % partition):
+        f = h5py.File(h5_name, 'r')
+ data = f['data'][:].astype('float32')
+ label = f['label'][:].astype('int64')
+ f.close()
+ all_data.append(data)
+ all_label.append(label)
+ all_data = np.concatenate(all_data, axis=0)
+ all_label = np.concatenate(all_label, axis=0)
+ return all_data, all_label
+
+
+def pc_normalize(pc):
+ centroid = np.mean(pc, axis=0)
+ pc = pc - centroid
+ m = np.max(np.sqrt(np.sum(pc ** 2, axis=1)))
+ pc = pc / m
+ return pc
+
+
+def translate_pointcloud(pointcloud):
+ xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])
+ xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])
+
+ translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')
+ return translated_pointcloud
+
+
+def jitter_pointcloud(pointcloud, sigma=0.01, clip=0.02):
+ N, C = pointcloud.shape
+ pointcloud += np.clip(sigma * np.random.randn(N, C), -1*clip, clip)
+ return pointcloud
+
+
+# =========== ModelNet40 =================
+class ModelNet40(Dataset):
+ def __init__(self, num_points, partition='train', args=None):
+ self.data, self.label = load_data(partition)
+ self.num_points = num_points
+ self.partition = partition
+ self.PointWOLF = PointWOLF(args) if args is not None else None
+
+
+ def __getitem__(self, item):
+ pointcloud = self.data[item][:self.num_points]
+ label = self.label[item]
+ if self.partition == 'train':
+ np.random.shuffle(pointcloud)
+ if self.PointWOLF is not None:
+ _, pointcloud = self.PointWOLF(pointcloud)
+ return pointcloud, label
+
+ def __len__(self):
+ return self.data.shape[0]
+
+# =========== ShapeNet Part =================
+class PartNormalDataset(Dataset):
+ def __init__(self, npoints=2500, split='train', normalize=False):
+ self.npoints = npoints
+ self.root = './data/shapenetcore_partanno_segmentation_benchmark_v0_normal'
+ self.catfile = os.path.join(self.root, 'synsetoffset2category.txt')
+ self.cat = {}
+ self.normalize = normalize
+
+ with open(self.catfile, 'r') as f:
+ for line in f:
+ ls = line.strip().split()
+ self.cat[ls[0]] = ls[1]
+ self.cat = {k: v for k, v in self.cat.items()}
+
+ self.meta = {}
+ with open(os.path.join(self.root, 'train_test_split', 'shuffled_train_file_list.json'), 'r') as f:
+ train_ids = set([str(d.split('/')[2]) for d in json.load(f)])
+ with open(os.path.join(self.root, 'train_test_split', 'shuffled_val_file_list.json'), 'r') as f:
+ val_ids = set([str(d.split('/')[2]) for d in json.load(f)])
+ with open(os.path.join(self.root, 'train_test_split', 'shuffled_test_file_list.json'), 'r') as f:
+ test_ids = set([str(d.split('/')[2]) for d in json.load(f)])
+ for item in self.cat:
+ self.meta[item] = []
+ dir_point = os.path.join(self.root, self.cat[item])
+ fns = sorted(os.listdir(dir_point))
+
+ if split == 'trainval':
+ fns = [fn for fn in fns if ((fn[0:-4] in train_ids) or (fn[0:-4] in val_ids))]
+ elif split == 'train':
+ fns = [fn for fn in fns if fn[0:-4] in train_ids]
+ elif split == 'val':
+ fns = [fn for fn in fns if fn[0:-4] in val_ids]
+ elif split == 'test':
+ fns = [fn for fn in fns if fn[0:-4] in test_ids]
+ else:
+ print('Unknown split: %s. Exiting..' % (split))
+ exit(-1)
+
+ for fn in fns:
+ token = (os.path.splitext(os.path.basename(fn))[0])
+ self.meta[item].append(os.path.join(dir_point, token + '.txt'))
+
+ self.datapath = []
+ for item in self.cat:
+ for fn in self.meta[item]:
+ self.datapath.append((item, fn))
+
+ self.classes = dict(zip(self.cat, range(len(self.cat))))
+ # Mapping from category ('Chair') to a list of int [10,11,12,13] as segmentation labels
+ self.seg_classes = {'Earphone': [16, 17, 18], 'Motorbike': [30, 31, 32, 33, 34, 35], 'Rocket': [41, 42, 43],
+ 'Car': [8, 9, 10, 11], 'Laptop': [28, 29], 'Cap': [6, 7], 'Skateboard': [44, 45, 46],
+ 'Mug': [36, 37], 'Guitar': [19, 20, 21], 'Bag': [4, 5], 'Lamp': [24, 25, 26, 27],
+ 'Table': [47, 48, 49], 'Airplane': [0, 1, 2, 3], 'Pistol': [38, 39, 40],
+ 'Chair': [12, 13, 14, 15], 'Knife': [22, 23]}
+
+ self.cache = {} # from index to (point_set, cls, seg) tuple
+ self.cache_size = 20000
+
+ def __getitem__(self, index):
+ if index in self.cache:
+ point_set, normal, seg, cls = self.cache[index]
+ else:
+ fn = self.datapath[index]
+ cat = self.datapath[index][0]
+ cls = self.classes[cat]
+ cls = np.array([cls]).astype(np.int32)
+ data = np.loadtxt(fn[1]).astype(np.float32)
+ point_set = data[:, 0:3]
+ normal = data[:, 3:6]
+ seg = data[:, -1].astype(np.int32)
+ if len(self.cache) < self.cache_size:
+ self.cache[index] = (point_set, normal, seg, cls)
+
+ if self.normalize:
+ point_set = pc_normalize(point_set)
+
+ choice = np.random.choice(len(seg), self.npoints, replace=True)
+
+ # resample
+        # note that some point clouds contain fewer than 2048 points, hence sampling with replacement
+        # use the same seed during train and test to get a stable result
+ point_set = point_set[choice, :]
+ seg = seg[choice]
+ normal = normal[choice, :]
+
+ return point_set, cls, seg, normal
+
+ def __len__(self):
+ return len(self.datapath)
+
+
+if __name__ == '__main__':
+ train = ModelNet40(1024)
+ test = ModelNet40(1024, 'test')
+ for data, label in train:
+ print(data.shape)
+ print(label.shape)
diff --git a/zoo/GDANet/util/util.py b/zoo/GDANet/util/util.py
new file mode 100755
index 0000000..00afdd8
--- /dev/null
+++ b/zoo/GDANet/util/util.py
@@ -0,0 +1,69 @@
+import numpy as np
+import torch
+import torch.nn.functional as F
+
+
+def cal_loss(pred, gold, smoothing=True):
+ ''' Calculate cross entropy loss, apply label smoothing if needed. '''
+
+    gold = gold.contiguous().view(-1)  # gold is the ground-truth label from the dataloader
+
+ if smoothing:
+ eps = 0.2
+        n_class = pred.size(1)  # number of classes, i.e. the output channel dimension
+
+ one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)
+ one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)
+ log_prb = F.log_softmax(pred, dim=1)
+
+ loss = -(one_hot * log_prb).sum(dim=1).mean()
+ else:
+ loss = F.cross_entropy(pred, gold, reduction='mean')
+
+ return loss
+
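+# Worked example (illustrative, not from the original code): with eps = 0.2
+# and n_class = 40, the true class receives probability 0.8 and each of the
+# 39 wrong classes 0.2 / 39 ~ 0.005, so the smoothed targets still sum to one
+# while penalising over-confident predictions.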
+
+# Simple logger: prints to stdout and appends the same text to a file.
+class IOStream():
+ def __init__(self, path):
+ self.f = open(path, 'a')
+
+ def cprint(self, text):
+ print(text)
+ self.f.write(text+'\n')
+ self.f.flush()
+
+ def close(self):
+ self.f.close()
+
+
+def to_categorical(y, num_classes):
+ """ 1-hot encodes a tensor """
+ new_y = torch.eye(num_classes)[y.cpu().data.numpy(),]
+ if (y.is_cuda):
+ return new_y.cuda(non_blocking=True)
+ return new_y
+
+
+def compute_overall_iou(pred, target, num_classes):
+ shape_ious = []
+ pred = pred.max(dim=2)[1] # (batch_size, num_points) the pred_class_idx of each point in each sample
+ pred_np = pred.cpu().data.numpy()
+
+ target_np = target.cpu().data.numpy()
+    for shape_idx in range(pred.size(0)):  # iterate over the samples in the batch
+        part_ious = []
+        for part in range(num_classes):  # iterate over all 50 part classes, regardless of which object category owns them
+            # intersection: a point counts only when prediction and target both assign it to this part class
+            I = np.sum(np.logical_and(pred_np[shape_idx] == part, target_np[shape_idx] == part))
+            # union: a point counts when either prediction or target assigns it to this part class
+            U = np.sum(np.logical_or(pred_np[shape_idx] == part, target_np[shape_idx] == part))
+
+            F = np.sum(target_np[shape_idx] == part)  # number of ground-truth points of this class
+
+            if F != 0:  # skip part classes absent from the ground truth of this sample
+                iou = I / float(U)  # IoU over all points for this class
+                part_ious.append(iou)
+        shape_ious.append(np.mean(part_ious))  # sample-level IoU: mean over the part classes present in this sample
+    return shape_ious  # list of length batch_size
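+
+
+# Usage sketch (illustrative, not part of the original file): the per-sample
+# IoUs returned above are typically accumulated over the whole test set and
+# averaged to report instance-level mIoU, e.g.
+#   all_ious += compute_overall_iou(pred, target, num_classes=50)
+#   miou = np.mean(all_ious)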
diff --git a/zoo/GDANet/voting_eval_modelnet.py b/zoo/GDANet/voting_eval_modelnet.py
new file mode 100755
index 0000000..c058e30
--- /dev/null
+++ b/zoo/GDANet/voting_eval_modelnet.py
@@ -0,0 +1,121 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from util.data_util import ModelNet40
+from model.GDANet_cls import GDANET
+import numpy as np
+from torch.utils.data import DataLoader
+from util.util import cal_loss, IOStream
+import sklearn.metrics as metrics
+
+
+class PointcloudScale(object):
+ def __init__(self, scale_low=2. / 3., scale_high=3. / 2., trans_low=-0.2, trans_high=0.2, trans_open=True):
+ self.scale_low = scale_low
+ self.scale_high = scale_high
+ self.trans_low = trans_low
+ self.trans_high = trans_high
+        self.trans_open = trans_open  # whether to add a random translation during voting
+
+ def __call__(self, pc):
+ bsize = pc.size()[0]
+ for i in range(bsize):
+ xyz1 = np.random.uniform(low=self.scale_low, high=self.scale_high, size=[3])
+ xyz2 = np.random.uniform(low=self.trans_low, high=self.trans_high, size=[3])
+ scales = torch.from_numpy(xyz1).float().cuda()
+ trans = torch.from_numpy(xyz2).float().cuda() if self.trans_open else 0
+            pc[i, :, 0:3] = torch.mul(pc[i, :, 0:3], scales) + trans
+ return pc
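+
+# Note (added for clarity): each vote applies an independent per-axis random
+# scale in [2/3, 3/2] and, when trans_open is set, a random translation in
+# [-0.2, 0.2]; averaging softmax outputs over such augmented copies is the
+# standard test-time voting trick.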
+
+
+def test(args, io):
+ test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points), num_workers=5,
+ batch_size=args.test_batch_size, shuffle=False, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+    NUM_REPEAT = 300
+ NUM_VOTE = 10
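+    # Note (added for clarity): each of the NUM_REPEAT rounds re-evaluates the
+    # whole test set, averaging softmax scores over NUM_VOTE randomly augmented
+    # copies of every cloud; only the best round is kept, so the reported
+    # number depends on the random seed.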
+ # Try to load models
+ model = GDANET().to(device)
+ model = nn.DataParallel(model)
+ model.load_state_dict(torch.load(args.model_path))
+ model = model.eval()
+ best_acc = 0
+
+    pointscale = PointcloudScale(scale_low=2. / 3., scale_high=3. / 2., trans_low=-0.2,
+                                 trans_high=0.2, trans_open=args.trans_open)  # honour the --trans_open flag instead of hard-coding True
+    for i in range(NUM_REPEAT):
+ test_true = []
+ test_pred = []
+
+ for data, label in test_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ pred = 0
+ for v in range(NUM_VOTE):
+ new_data = data
+ batch_size = data.size()[0]
+ if v > 0:
+ new_data.data = pointscale(new_data.data)
+ with torch.no_grad():
+ pred += F.softmax(model(new_data.permute(0, 2, 1)), dim=1)
+ pred /= NUM_VOTE
+ label = label.view(-1)
+ pred_choice = pred.max(dim=1)[1]
+ test_true.append(label.cpu().numpy())
+ test_pred.append(pred_choice.detach().cpu().numpy())
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ if test_acc > best_acc:
+ best_acc = test_acc
+            outstr = 'Voting %d, test acc: %.6f' % (i, test_acc * 100)
+ io.cprint(outstr)
+
+    final_outstr = 'Final voting result test acc: %.6f' % (best_acc * 100)
+ io.cprint(final_outstr)
+
+
+def _init_():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/'+args.exp_name):
+ os.makedirs('checkpoints/'+args.exp_name)
+
+ os.system('cp voting_eval_modelnet.py checkpoints'+'/'+args.exp_name+'/'+'voting_eval_modelnet.py.backup')
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(description='3D Object Classification')
+ parser.add_argument('--exp_name', type=str, default='GDANet', metavar='N',
+ help='Name of the experiment')
+    parser.add_argument('--test_batch_size', type=int, default=16, metavar='batch_size',
+                        help='Size of batch')
+    parser.add_argument('--no_cuda', type=bool, default=False,
+                        help='disables CUDA training')
+ parser.add_argument('--seed', type=int, default=1, metavar='S',
+ help='random seed (default: 1)')
+ parser.add_argument('--num_points', type=int, default=1024,
+ help='num of points to use')
+ parser.add_argument('--model_path', type=str, default='', metavar='N',
+ help='Pretrained model path')
+    parser.add_argument('--trans_open', type=bool, default=True, metavar='N',
+                        help='enables input translation during voting '
+                             '(note: argparse with type=bool treats any non-empty string as True)')
+ args = parser.parse_args()
+
+ _init_()
+
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_voting.log' % (args.exp_name))
+
+ io.cprint(str(args))
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+ torch.manual_seed(args.seed)
+ if args.cuda:
+ io.cprint('Using GPU')
+ torch.cuda.manual_seed(args.seed)
+ else:
+ io.cprint('Using CPU')
+
+ test(args, io)
diff --git a/zoo/OcCo/LICENSE b/zoo/OcCo/LICENSE
new file mode 100644
index 0000000..d3559dd
--- /dev/null
+++ b/zoo/OcCo/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2020 Hanchen Wang
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/zoo/OcCo/OcCo_TF/.gitignore b/zoo/OcCo/OcCo_TF/.gitignore
new file mode 100644
index 0000000..f53688f
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/.gitignore
@@ -0,0 +1,143 @@
+# project-specific files
+results/*/plots
+log/
+demo/
+demo_data/
+para_restored.txt
+pc_distance/__pycache__
+
+# Byte-compiled / optimized / DLL files
+.idea/
+.DS_Store
+__pycache__/
+*.py[cod]
+*$py.class
+*.sh
+*/*.sh
+data/*
+/render/dump*
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+pip-wheel-metadata/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+.python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
diff --git a/zoo/OcCo/OcCo_TF/Requirements_TF.txt b/zoo/OcCo/OcCo_TF/Requirements_TF.txt
new file mode 100644
index 0000000..f580d1e
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/Requirements_TF.txt
@@ -0,0 +1,12 @@
+# Originally Designed for Docker Environment, TensorFlow 1.12.0/1.15.0, Python 3.7, CUDA 10.0
+
+lmdb>=0.9
+numpy>=1.14.0
+h5py>=2.10.0
+msgpack==0.5.6
+pyarrow>=0.10.0
+open3d>=0.9.0.0
+tensorpack>=0.8.9
+matplotlib>=2.1.0
+tensorflow==1.15.0  # the code uses TF1-only APIs (tf.placeholder, tf.variable_scope); 2.x will not work, see header
+open3d-python==0.7.0.0
diff --git a/zoo/OcCo/OcCo_TF/cls_models/__init__.py b/zoo/OcCo/OcCo_TF/cls_models/__init__.py
new file mode 100644
index 0000000..c7c2541
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/cls_models/__init__.py
@@ -0,0 +1,2 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
diff --git a/zoo/OcCo/OcCo_TF/cls_models/dgcnn_cls.py b/zoo/OcCo/OcCo_TF/cls_models/dgcnn_cls.py
new file mode 100644
index 0000000..dae5440
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/cls_models/dgcnn_cls.py
@@ -0,0 +1,164 @@
+# Author: Hanchen Wang (hw501@cam.ac.uk)
+# Ref: https://github.com/WangYueFt/dgcnn/blob/master/tensorflow/models/dgcnn.py
+
+import sys, pdb, tensorflow as tf
+sys.path.append('../')
+from utils import tf_util
+from train_cls_dgcnn_torchloader import NUM_CLASSES, BATCH_SIZE, NUM_POINT
+
+
+class Model:
+ def __init__(self, inputs, npts, labels, is_training, **kwargs):
+ self.__dict__.update(kwargs) # have self.bn_decay
+ self.knn = 20
+ self.is_training = is_training
+ self.features = self.create_encoder(inputs)
+ self.pred = self.create_decoder(self.features)
+ self.loss = self.create_loss(self.pred, labels)
+
+ @staticmethod
+ def get_graph_feature(x, k):
+ """Torch: get_graph_feature = TF: adj_matrix + nn_idx + edge_feature"""
+ adj_matrix = tf_util.pairwise_distance(x)
+ nn_idx = tf_util.knn(adj_matrix, k=k)
+ x = tf_util.get_edge_feature(x, nn_idx=nn_idx, k=k)
+ return x
+
+ def create_encoder(self, point_cloud):
+ point_cloud = tf.reshape(point_cloud, (BATCH_SIZE, NUM_POINT, 3))
+
+        ''' Previous solution provided by the original author: '''
+ # point_cloud_transformed = point_cloud
+ # adj_matrix = tf_util.pairwise_distance(point_cloud_transformed)
+ # nn_idx = tf_util.knn(adj_matrix, k=self.knn)
+ # x = tf_util.get_edge_feature(point_cloud_transformed, nn_idx=nn_idx, k=self.knn)
+
+ x = self.get_graph_feature(point_cloud, self.knn)
+ x = tf_util.conv2d(x, 64, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, bias=False, is_training=self.is_training,
+ activation_fn=tf.nn.leaky_relu, scope='conv1', bn_decay=self.bn_decay)
+ x1 = tf.reduce_max(x, axis=-2, keep_dims=True)
+
+ x = self.get_graph_feature(x1, self.knn)
+ x = tf_util.conv2d(x, 64, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, bias=False, is_training=self.is_training,
+ activation_fn=tf.nn.leaky_relu, scope='conv2', bn_decay=self.bn_decay)
+ x2 = tf.reduce_max(x, axis=-2, keep_dims=True)
+
+ x = self.get_graph_feature(x2, self.knn)
+ x = tf_util.conv2d(x, 128, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, bias=False, is_training=self.is_training,
+ activation_fn=tf.nn.leaky_relu, scope='conv3', bn_decay=self.bn_decay)
+ x3 = tf.reduce_max(x, axis=-2, keep_dims=True)
+
+ x = self.get_graph_feature(x3, self.knn)
+ x = tf_util.conv2d(x, 256, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, bias=False, is_training=self.is_training,
+ activation_fn=tf.nn.leaky_relu, scope='conv4', bn_decay=self.bn_decay)
+ x4 = tf.reduce_max(x, axis=-2, keep_dims=True)
+
+ x = tf_util.conv2d(tf.concat([x1, x2, x3, x4], axis=-1), 1024, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, bias=False, is_training=self.is_training,
+ activation_fn=tf.nn.leaky_relu, scope='agg', bn_decay=self.bn_decay)
+
+ x1 = tf.reduce_max(x, axis=1, keep_dims=True)
+ x2 = tf.reduce_mean(x, axis=1, keep_dims=True)
+ # pdb.set_trace()
+ features = tf.reshape(tf.concat([x1, x2], axis=-1), [BATCH_SIZE, -1])
+ return features
+
+ def create_decoder(self, features):
+ """fully connected layers for classification with dropout"""
+
+ with tf.variable_scope('decoder_cls', reuse=tf.AUTO_REUSE):
+ # self.linear1 = nn.Linear(args.emb_dims*2, 512, bias=False)
+ features = tf_util.fully_connected(features, 512, bn=True, bias=False,
+ activation_fn=tf.nn.leaky_relu,
+ scope='linear1', is_training=self.is_training)
+ features = tf_util.dropout(features, keep_prob=0.5, scope='dp1', is_training=self.is_training)
+
+ # self.linear2 = nn.Linear(512, 256)
+ features = tf_util.fully_connected(features, 256, bn=True, bias=True,
+ activation_fn=tf.nn.leaky_relu,
+ scope='linear2', is_training=self.is_training)
+ features = tf_util.dropout(features, keep_prob=0.5, scope='dp2', is_training=self.is_training)
+
+ # self.linear3 = nn.Linear(256, output_channels)
+ pred = tf_util.fully_connected(features, NUM_CLASSES, bn=False, bias=True,
+ activation_fn=None,
+ scope='linear3', is_training=self.is_training)
+ return pred
+
+ @staticmethod
+ def create_loss(pred, label, smoothing=True):
+ # if smoothing:
+ # eps = 0.2
+ # n_class = pred.size(1)
+ #
+ # one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)
+ # one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)
+ # log_prb = F.log_softmax(pred, dim=1)
+ #
+ # loss = -(one_hot * log_prb).sum(dim=1).mean()
+
+ if smoothing:
+ eps = 0.2
+ # pdb.set_trace()
+ one_hot = tf.one_hot(indices=label, depth=NUM_CLASSES)
+ # tf.print(one_hot, output_stream=sys.stderr) # not working
+ # pdb.set_trace()
+ one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (NUM_CLASSES - 1)
+ log_prb = tf.nn.log_softmax(logits=pred, axis=1)
+ # pdb.set_trace()
+ cls_loss = -tf.reduce_mean(tf.reduce_sum(one_hot * log_prb, axis=1))
+ else:
+ loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)
+ cls_loss = tf.reduce_mean(loss)
+
+ tf.summary.scalar('classification loss', cls_loss)
+
+ return cls_loss
+
+
+if __name__ == '__main__':
+
+ batch_size, num_cls = 16, NUM_CLASSES
+ lr_clip, base_lr, lr_decay_steps, lr_decay_rate = 1e-6, 1e-4, 50000, .7
+
+ is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
+ global_step = tf.Variable(0, trainable=False, name='global_step')
+
+ inputs_pl = tf.placeholder(tf.float32, (1, None, 3), 'inputs')
+ npts_pl = tf.placeholder(tf.int32, (batch_size,), 'num_points')
+ labels_pl = tf.placeholder(tf.int32, (batch_size,), 'ground_truths')
+ learning_rate = tf.train.exponential_decay(base_lr, global_step, lr_decay_steps, lr_decay_rate,
+ staircase=True, name='lr')
+ learning_rate = tf.maximum(learning_rate, lr_clip)
+
+ model = Model(inputs_pl, npts_pl, labels_pl, is_training_pl)
+ trainer = tf.train.AdamOptimizer(learning_rate)
+ train_op = trainer.minimize(model.loss, global_step)
+
+ print('\n\n\n==========')
+ print('pred', model.pred)
+ print('loss', model.loss)
+    # NB: this seems to differ from what the paper claims:
+ saver = tf.train.Saver()
+
+ config = tf.ConfigProto()
+ config.gpu_options.allow_growth = True
+ config.allow_soft_placement = True
+ config.log_device_placement = True
+ sess = tf.Session(config=config)
+
+ # Init Weights
+ init = tf.global_variables_initializer()
+    sess.run(init, {is_training_pl: True})  # restoring a checkpoint would overwrite these randomly initialised parameters
+
+ for idx, var in enumerate(tf.trainable_variables()):
+ print(idx, var)
diff --git a/zoo/OcCo/OcCo_TF/cls_models/pcn_cls.py b/zoo/OcCo/OcCo_TF/cls_models/pcn_cls.py
new file mode 100644
index 0000000..8827bf1
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/cls_models/pcn_cls.py
@@ -0,0 +1,92 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import sys, tensorflow as tf
+sys.path.append('../')
+from utils.tf_util import mlp_conv, point_maxpool, point_unpool, fully_connected, dropout
+from train_cls import NUM_CLASSES
+# NUM_CLASSES = 40
+
+
+class Model:
+ def __init__(self, inputs, npts, labels, is_training, **kwargs):
+ self.is_training = is_training
+ self.features = self.create_encoder(inputs, npts)
+ self.pred = self.create_decoder(self.features)
+ self.loss = self.create_loss(self.pred, labels)
+
+ def create_encoder(self, inputs, npts):
+ """mini-PointNet encoder"""
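+        # Note (added for clarity): point_maxpool / point_unpool aggregate per
+        # cloud according to npts, so the encoder consumes a whole batch of
+        # variable-length clouds packed into one (1, sum(npts), 3) tensor.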
+
+ with tf.variable_scope('encoder_0', reuse=tf.AUTO_REUSE):
+ features = mlp_conv(inputs, [128, 256])
+ features_global = point_unpool(point_maxpool(features, npts, keepdims=True), npts)
+ features = tf.concat([features, features_global], axis=2)
+ with tf.variable_scope('encoder_1', reuse=tf.AUTO_REUSE):
+ features = mlp_conv(features, [512, 1024])
+ features = point_maxpool(features, npts)
+ return features
+
+ def create_decoder(self, features):
+ """fully connected layers for classification with dropout"""
+
+ with tf.variable_scope('decoder_cls', reuse=tf.AUTO_REUSE):
+
+ features = fully_connected(features, 512, bn=True, scope='fc1', is_training=self.is_training)
+ features = dropout(features, keep_prob=0.7, scope='dp1', is_training=self.is_training)
+ features = fully_connected(features, 256, bn=True, scope='fc2', is_training=self.is_training)
+ features = dropout(features, keep_prob=0.7, scope='dp2', is_training=self.is_training)
+ pred = fully_connected(features, NUM_CLASSES, activation_fn=None, scope='fc3',
+ is_training=self.is_training)
+
+ return pred
+
+ def create_loss(self, pred, label):
+ """ pred: B * NUM_CLASSES,
+ label: B, """
+ loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)
+ cls_loss = tf.reduce_mean(loss)
+ tf.summary.scalar('classification loss', cls_loss)
+
+ return cls_loss
+
+
+if __name__ == '__main__':
+
+ batch_size, num_cls = 16, NUM_CLASSES
+ lr_clip, base_lr, lr_decay_steps, lr_decay_rate = 1e-6, 1e-4, 50000, .7
+
+ is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
+ global_step = tf.Variable(0, trainable=False, name='global_step')
+
+ inputs_pl = tf.placeholder(tf.float32, (1, None, 3), 'inputs')
+ npts_pl = tf.placeholder(tf.int32, (batch_size,), 'num_points')
+ labels_pl = tf.placeholder(tf.int32, (batch_size,), 'ground_truths')
+ learning_rate = tf.train.exponential_decay(base_lr, global_step,
+ lr_decay_steps, lr_decay_rate,
+ staircase=True, name='lr')
+ learning_rate = tf.maximum(learning_rate, lr_clip)
+
+ # model_module = importlib.import_module('./pcn_cls', './')
+ model = Model(inputs_pl, npts_pl, labels_pl, is_training_pl)
+ trainer = tf.train.AdamOptimizer(learning_rate)
+ train_op = trainer.minimize(model.loss, global_step)
+
+ print('\n\n\n==========')
+ print('pred', model.pred)
+ print('loss', model.loss)
+    # NB: this seems to differ from what the paper claims:
+ saver = tf.train.Saver()
+
+ config = tf.ConfigProto()
+ config.gpu_options.allow_growth = True
+ config.allow_soft_placement = True
+ config.log_device_placement = True
+ sess = tf.Session(config=config)
+
+ # Init variables
+ init = tf.global_variables_initializer()
+    sess.run(init, {is_training_pl: True})  # restoring a checkpoint would overwrite these randomly initialised parameters
+
+ for idx, var in enumerate(tf.trainable_variables()):
+ print(idx, var)
+
diff --git a/zoo/OcCo/OcCo_TF/cls_models/pointnet_cls.py b/zoo/OcCo/OcCo_TF/cls_models/pointnet_cls.py
new file mode 100644
index 0000000..92a2a79
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/cls_models/pointnet_cls.py
@@ -0,0 +1,128 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import sys, os
+import tensorflow as tf
+BASE_DIR = os.path.dirname(__file__)
+sys.path.append(BASE_DIR)
+sys.path.append(os.path.join(BASE_DIR, '../utils'))
+from utils.tf_util import fully_connected, dropout, conv2d, max_pool2d
+from train_cls import NUM_CLASSES, BATCH_SIZE, NUM_POINT
+from utils.transform_nets import input_transform_net, feature_transform_net
+
+
+class Model:
+ def __init__(self, inputs, npts, labels, is_training, **kwargs):
+ self.__dict__.update(kwargs) # batch_decay and is_training
+ self.is_training = is_training
+ self.features = self.create_encoder(inputs, npts)
+ self.pred = self.create_decoder(self.features)
+ self.loss = self.create_loss(self.pred, labels)
+
+ def create_encoder(self, inputs, npts):
+ """PointNet encoder"""
+
+ inputs = tf.reshape(inputs, (BATCH_SIZE, NUM_POINT, 3))
+ with tf.variable_scope('transform_net1') as sc:
+ transform = input_transform_net(inputs, self.is_training, self.bn_decay, K=3)
+
+ point_cloud_transformed = tf.matmul(inputs, transform)
+ input_image = tf.expand_dims(point_cloud_transformed, -1)
+
+ net = conv2d(inputs=input_image, num_output_channels=64, kernel_size=[1, 3],
+ scope='conv1', padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
+ net = conv2d(inputs=net, num_output_channels=64, kernel_size=[1, 1],
+ scope='conv2', padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
+
+ with tf.variable_scope('transform_net2') as sc:
+ transform = feature_transform_net(net, self.is_training, self.bn_decay, K=64)
+ net_transformed = tf.matmul(tf.squeeze(net, axis=[2]), transform)
+ net_transformed = tf.expand_dims(net_transformed, [2])
+
+        '''A conv2d with a 1x1 kernel and unit stride applies the same weight
+        matrix to every point independently, i.e. it acts as a shared per-point MLP.'''
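+        # e.g. a [1, 1] conv applied to a (B, N, 1, 64) tensor multiplies every
+        # point's 64-d feature by the same weight matrix - a per-point shared
+        # fully connected layer.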
+
+ # use_xavier=True, stddev=1e-3, weight_decay=0.0, activation_fn=tf.nn.relu,
+ net = conv2d(net_transformed, 64, [1, 1],
+ scope='conv3', padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
+ net = conv2d(net, 128, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='conv4', bn_decay=self.bn_decay)
+ net = conv2d(net, 1024, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='conv5', bn_decay=self.bn_decay)
+
+ net = max_pool2d(net, [NUM_POINT, 1],
+ padding='VALID', scope='maxpool')
+
+ features = tf.reshape(net, [BATCH_SIZE, -1])
+ return features
+
+ def create_decoder(self, features):
+ """fully connected layers for classification with dropout"""
+
+ with tf.variable_scope('decoder_cls', reuse=tf.AUTO_REUSE):
+
+ features = fully_connected(features, 512, bn=True, scope='fc1', is_training=self.is_training)
+ features = dropout(features, keep_prob=0.7, scope='dp1', is_training=self.is_training)
+ features = fully_connected(features, 256, bn=True, scope='fc2', is_training=self.is_training)
+ features = dropout(features, keep_prob=0.7, scope='dp2', is_training=self.is_training)
+ pred = fully_connected(features, NUM_CLASSES, activation_fn=None, scope='fc3',
+ is_training=self.is_training)
+
+ return pred
+
+ def create_loss(self, pred, label):
+ """ pred: B * NUM_CLASSES,
+ label: B, """
+ loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)
+ cls_loss = tf.reduce_mean(loss)
+ tf.summary.scalar('classification loss', cls_loss)
+
+ return cls_loss
+
+
+if __name__ == '__main__':
+
+ batch_size, num_cls = BATCH_SIZE, NUM_CLASSES
+ lr_clip, base_lr, lr_decay_steps, lr_decay_rate = 1e-6, 1e-4, 50000, .7
+
+ is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
+ global_step = tf.Variable(0, trainable=False, name='global_step')
+
+ inputs_pl = tf.placeholder(tf.float32, (1, None, 3), 'inputs')
+ npts_pl = tf.placeholder(tf.int32, (batch_size,), 'num_points')
+ labels_pl = tf.placeholder(tf.int32, (batch_size,), 'ground_truths')
+ learning_rate = tf.train.exponential_decay(base_lr, global_step,
+ lr_decay_steps, lr_decay_rate,
+ staircase=True, name='lr')
+ learning_rate = tf.maximum(learning_rate, lr_clip)
+
+ # model_module = importlib.import_module('./pcn_cls', './')
+ model = Model(inputs_pl, npts_pl, labels_pl, is_training_pl)
+ trainer = tf.train.AdamOptimizer(learning_rate)
+ train_op = trainer.minimize(model.loss, global_step)
+
+ print('\n\n\n==========')
+ print('pred', model.pred)
+ print('loss', model.loss)
+    # NB: this seems to differ from what the paper claims:
+ saver = tf.train.Saver()
+
+ config = tf.ConfigProto()
+ config.gpu_options.allow_growth = True
+ config.allow_soft_placement = True
+ config.log_device_placement = True
+ sess = tf.Session(config=config)
+
+ # Init variables
+ init = tf.global_variables_initializer()
+    sess.run(init, {is_training_pl: True})  # restoring a checkpoint would overwrite these randomly initialised parameters
+
+ for idx, var in enumerate(tf.trainable_variables()):
+ print(idx, var)
+
diff --git a/zoo/OcCo/OcCo_TF/completion_models/__init__.py b/zoo/OcCo/OcCo_TF/completion_models/__init__.py
new file mode 100644
index 0000000..c7c2541
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/completion_models/__init__.py
@@ -0,0 +1,2 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
diff --git a/zoo/OcCo/OcCo_TF/completion_models/dgcnn_cd.py b/zoo/OcCo/OcCo_TF/completion_models/dgcnn_cd.py
new file mode 100644
index 0000000..4ebd3a4
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/completion_models/dgcnn_cd.py
@@ -0,0 +1,137 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import os, sys, tensorflow as tf
+BASE_DIR = os.path.dirname(__file__)
+sys.path.append(BASE_DIR)
+sys.path.append('../')
+sys.path.append(os.path.join(BASE_DIR, '../utils'))
+from utils import tf_util
+from utils.transform_nets import input_transform_net_dgcnn
+from train_completion import BATCH_SIZE, NUM_POINT
+
+# BATCH_SIZE = 8 # otherwise set to 8
+# NUM_POINT = 2048 # 3000
+
+
+class Model:
+ def __init__(self, inputs, npts, gt, alpha, **kwargs):
+ self.knn = 20
+ self.__dict__.update(kwargs) # batch_decay and is_training
+ self.num_output_points = 16384 # 1024 * 16
+ self.num_coarse = 1024
+ self.grid_size = 4
+ self.grid_scale = 0.05
+ self.num_fine = self.grid_size ** 2 * self.num_coarse
+ self.features = self.create_encoder(inputs, npts)
+ self.coarse, self.fine = self.create_decoder(self.features)
+ self.loss, self.update = self.create_loss(gt, alpha)
+ self.outputs = self.fine
+ self.visualize_ops = [tf.split(inputs[0], npts, axis=0), self.coarse, self.fine, gt]
+ self.visualize_titles = ['input', 'coarse output', 'fine output', 'ground truth']
+
+ def create_encoder(self, point_cloud, npts):
+
+ point_cloud = tf.reshape(point_cloud, (BATCH_SIZE, NUM_POINT, 3))
+
+ adj_matrix = tf_util.pairwise_distance(point_cloud)
+ nn_idx = tf_util.knn(adj_matrix, k=self.knn)
+ edge_feature = tf_util.get_edge_feature(point_cloud, nn_idx=nn_idx, k=self.knn)
+
+ with tf.variable_scope('transform_net1') as sc:
+ transform = input_transform_net_dgcnn(edge_feature, self.is_training, self.bn_decay, K=3)
+
+ point_cloud_transformed = tf.matmul(point_cloud, transform)
+ adj_matrix = tf_util.pairwise_distance(point_cloud_transformed)
+ nn_idx = tf_util.knn(adj_matrix, k=self.knn)
+ edge_feature = tf_util.get_edge_feature(point_cloud_transformed, nn_idx=nn_idx, k=self.knn)
+
+ net = tf_util.conv2d(edge_feature, 64, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='dgcnn1', bn_decay=self.bn_decay)
+ net = tf.reduce_max(net, axis=-2, keep_dims=True)
+ net1 = net
+
+ adj_matrix = tf_util.pairwise_distance(net)
+ nn_idx = tf_util.knn(adj_matrix, k=self.knn)
+ edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=self.knn)
+
+ net = tf_util.conv2d(edge_feature, 64, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='dgcnn2', bn_decay=self.bn_decay)
+ net = tf.reduce_max(net, axis=-2, keep_dims=True)
+ net2 = net
+
+ adj_matrix = tf_util.pairwise_distance(net)
+ nn_idx = tf_util.knn(adj_matrix, k=self.knn)
+ edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=self.knn)
+
+ net = tf_util.conv2d(edge_feature, 64, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='dgcnn3', bn_decay=self.bn_decay)
+ net = tf.reduce_max(net, axis=-2, keep_dims=True)
+ net3 = net
+
+ adj_matrix = tf_util.pairwise_distance(net)
+ nn_idx = tf_util.knn(adj_matrix, k=self.knn)
+ edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=self.knn)
+
+ net = tf_util.conv2d(edge_feature, 128, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='dgcnn4', bn_decay=self.bn_decay)
+ net = tf.reduce_max(net, axis=-2, keep_dims=True)
+ net4 = net
+
+ net = tf_util.conv2d(tf.concat([net1, net2, net3, net4], axis=-1), 1024, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='agg', bn_decay=self.bn_decay)
+
+ net = tf.reduce_max(net, axis=1, keep_dims=True)
+
+ features = tf.reshape(net, [BATCH_SIZE, -1])
+ return features
+
+ def create_decoder(self, features):
+ with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
+ coarse = tf_util.mlp(features, [1024, 1024, self.num_coarse * 3])
+ coarse = tf.reshape(coarse, [-1, self.num_coarse, 3])
+
+ with tf.variable_scope('folding', reuse=tf.AUTO_REUSE):
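+            # Note (added for clarity): this is the PCN folding step - a 4x4
+            # 2-D grid is attached to each of the 1024 coarse points and
+            # refined by a shared MLP, producing 16 * 1024 = 16384 fine
+            # points offset from their coarse centres.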
+ grid = tf.meshgrid(tf.linspace(-0.05, 0.05, self.grid_size), tf.linspace(-0.05, 0.05, self.grid_size))
+ grid = tf.expand_dims(tf.reshape(tf.stack(grid, axis=2), [-1, 2]), 0)
+ grid_feat = tf.tile(grid, [features.shape[0], self.num_coarse, 1])
+
+ point_feat = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ point_feat = tf.reshape(point_feat, [-1, self.num_fine, 3])
+
+ global_feat = tf.tile(tf.expand_dims(features, 1), [1, self.num_fine, 1])
+
+ feat = tf.concat([grid_feat, point_feat, global_feat], axis=2)
+
+ center = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ center = tf.reshape(center, [-1, self.num_fine, 3])
+
+ fine = tf_util.mlp_conv(feat, [512, 512, 3]) + center
+ return coarse, fine
+
+ def create_loss(self, gt, alpha):
+
+ loss_coarse = tf_util.chamfer(self.coarse, gt)
+ tf_util.add_train_summary('train/coarse_loss', loss_coarse)
+ update_coarse = tf_util.add_valid_summary('valid/coarse_loss', loss_coarse)
+
+ loss_fine = tf_util.chamfer(self.fine, gt)
+ tf_util.add_train_summary('train/fine_loss', loss_fine)
+ update_fine = tf_util.add_valid_summary('valid/fine_loss', loss_fine)
+
+ loss = loss_coarse + alpha * loss_fine
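+        # Note (added for clarity): alpha balances coarse vs. fine supervision;
+        # in PCN-style training it is typically ramped up (e.g. 0.01 -> 1.0)
+        # over the course of training - an assumption from the PCN paper, not
+        # a value fixed in this file.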
+ tf_util.add_train_summary('train/loss', loss)
+ update_loss = tf_util.add_valid_summary('valid/loss', loss)
+
+ return loss, [update_coarse, update_fine, update_loss]
diff --git a/zoo/OcCo/OcCo_TF/completion_models/dgcnn_emd.py b/zoo/OcCo/OcCo_TF/completion_models/dgcnn_emd.py
new file mode 100644
index 0000000..0fa1120
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/completion_models/dgcnn_emd.py
@@ -0,0 +1,138 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import os, sys, tensorflow as tf
+BASE_DIR = os.path.dirname(__file__)
+sys.path.append(BASE_DIR)
+sys.path.append('../')
+sys.path.append(os.path.join(BASE_DIR, '../utils'))
+from utils import tf_util
+from utils.transform_nets import input_transform_net_dgcnn
+from train_completion import BATCH_SIZE, NUM_POINT
+
+# BATCH_SIZE = 8 # otherwise set to 8
+# NUM_POINT = 2048 # 3000
+
+
+class Model:
+ def __init__(self, inputs, npts, gt, alpha, **kwargs):
+ self.knn = 20
+ self.__dict__.update(kwargs) # batch_decay and is_training
+ self.num_output_points = 16384 # 1024 * 16
+ self.num_coarse = 1024
+ self.grid_size = 4
+ self.grid_scale = 0.05
+ self.num_fine = self.grid_size ** 2 * self.num_coarse
+ self.features = self.create_encoder(inputs, npts)
+ self.coarse, self.fine = self.create_decoder(self.features)
+ self.loss, self.update = self.create_loss(gt, alpha)
+ self.outputs = self.fine
+ self.visualize_ops = [tf.split(inputs[0], npts, axis=0), self.coarse, self.fine, gt]
+ self.visualize_titles = ['input', 'coarse output', 'fine output', 'ground truth']
+
+ def create_encoder(self, point_cloud, npts):
+
+ point_cloud = tf.reshape(point_cloud, (BATCH_SIZE, NUM_POINT, 3))
+
+ adj_matrix = tf_util.pairwise_distance(point_cloud)
+ nn_idx = tf_util.knn(adj_matrix, k=self.knn)
+ edge_feature = tf_util.get_edge_feature(point_cloud, nn_idx=nn_idx, k=self.knn)
+
+ with tf.variable_scope('transform_net1') as sc:
+ transform = input_transform_net_dgcnn(edge_feature, self.is_training, self.bn_decay, K=3)
+
+ point_cloud_transformed = tf.matmul(point_cloud, transform)
+ adj_matrix = tf_util.pairwise_distance(point_cloud_transformed)
+ nn_idx = tf_util.knn(adj_matrix, k=self.knn)
+ edge_feature = tf_util.get_edge_feature(point_cloud_transformed, nn_idx=nn_idx, k=self.knn)
+
+ net = tf_util.conv2d(edge_feature, 64, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='dgcnn1', bn_decay=self.bn_decay)
+ net = tf.reduce_max(net, axis=-2, keep_dims=True)
+ net1 = net
+
+ adj_matrix = tf_util.pairwise_distance(net)
+ nn_idx = tf_util.knn(adj_matrix, k=self.knn)
+ edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=self.knn)
+
+ net = tf_util.conv2d(edge_feature, 64, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='dgcnn2', bn_decay=self.bn_decay)
+ net = tf.reduce_max(net, axis=-2, keep_dims=True)
+ net2 = net
+
+ adj_matrix = tf_util.pairwise_distance(net)
+ nn_idx = tf_util.knn(adj_matrix, k=self.knn)
+ edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=self.knn)
+
+ net = tf_util.conv2d(edge_feature, 64, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='dgcnn3', bn_decay=self.bn_decay)
+ net = tf.reduce_max(net, axis=-2, keep_dims=True)
+ net3 = net
+
+ adj_matrix = tf_util.pairwise_distance(net)
+ nn_idx = tf_util.knn(adj_matrix, k=self.knn)
+ edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=self.knn)
+
+ net = tf_util.conv2d(edge_feature, 128, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='dgcnn4', bn_decay=self.bn_decay)
+ net = tf.reduce_max(net, axis=-2, keep_dims=True)
+ net4 = net
+
+ net = tf_util.conv2d(tf.concat([net1, net2, net3, net4], axis=-1), 1024, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='agg', bn_decay=self.bn_decay)
+
+ net = tf.reduce_max(net, axis=1, keep_dims=True)
+
+ features = tf.reshape(net, [BATCH_SIZE, -1])
+ return features
+
+ def create_decoder(self, features):
+ with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
+ coarse = tf_util.mlp(features, [1024, 1024, self.num_coarse * 3])
+ coarse = tf.reshape(coarse, [-1, self.num_coarse, 3])
+
+ with tf.variable_scope('folding', reuse=tf.AUTO_REUSE):
+ grid = tf.meshgrid(tf.linspace(-0.05, 0.05, self.grid_size), tf.linspace(-0.05, 0.05, self.grid_size))
+ grid = tf.expand_dims(tf.reshape(tf.stack(grid, axis=2), [-1, 2]), 0)
+ grid_feat = tf.tile(grid, [features.shape[0], self.num_coarse, 1])
+
+ point_feat = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ point_feat = tf.reshape(point_feat, [-1, self.num_fine, 3])
+
+ global_feat = tf.tile(tf.expand_dims(features, 1), [1, self.num_fine, 1])
+
+ feat = tf.concat([grid_feat, point_feat, global_feat], axis=2)
+
+ center = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ center = tf.reshape(center, [-1, self.num_fine, 3])
+
+ fine = tf_util.mlp_conv(feat, [512, 512, 3]) + center
+ return coarse, fine
+
+ def create_loss(self, gt, alpha):
+
+ gt_ds = gt[:, :self.coarse.shape[1], :]
+ loss_coarse = tf_util.earth_mover(self.coarse, gt_ds)
+ tf_util.add_train_summary('train/coarse_loss', loss_coarse)
+ update_coarse = tf_util.add_valid_summary('valid/coarse_loss', loss_coarse)
+
+ loss_fine = tf_util.chamfer(self.fine, gt)
+ tf_util.add_train_summary('train/fine_loss', loss_fine)
+ update_fine = tf_util.add_valid_summary('valid/fine_loss', loss_fine)
+
+ loss = loss_coarse + alpha * loss_fine
+ tf_util.add_train_summary('train/loss', loss)
+ update_loss = tf_util.add_valid_summary('valid/loss', loss)
+
+ return loss, [update_coarse, update_fine, update_loss]
diff --git a/zoo/OcCo/OcCo_TF/completion_models/pcn_cd.py b/zoo/OcCo/OcCo_TF/completion_models/pcn_cd.py
new file mode 100644
index 0000000..9395e26
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/completion_models/pcn_cd.py
@@ -0,0 +1,74 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/wentaoyuan/pcn/blob/master/models/pcn_cd.py
+
+import pdb, tensorflow as tf
+from utils.tf_util import mlp, mlp_conv, point_maxpool, point_unpool, chamfer, \
+ add_train_summary, add_valid_summary
+
+
+class Model:
+ def __init__(self, inputs, npts, gt, alpha, **kwargs):
+ self.__dict__.update(kwargs) # batch_decay and is_training
+ self.num_coarse = 1024
+ self.grid_size = 4
+ self.grid_scale = 0.05
+ self.num_fine = self.grid_size ** 2 * self.num_coarse
+ self.features = self.create_encoder(inputs, npts)
+ self.coarse, self.fine = self.create_decoder(self.features)
+ self.loss, self.update = self.create_loss(self.coarse, self.fine, gt, alpha)
+ self.outputs = self.fine
+ self.visualize_ops = [tf.split(inputs[0], npts, axis=0), self.coarse, self.fine, gt]
+ self.visualize_titles = ['input', 'coarse output', 'fine output', 'ground truth']
+
+ def create_encoder(self, inputs, npts):
+ with tf.variable_scope('encoder_0', reuse=tf.AUTO_REUSE):
+ features = mlp_conv(inputs, [128, 256])
+ features_global = point_unpool(point_maxpool(features, npts, keepdims=True), npts)
+ features = tf.concat([features, features_global], axis=2)
+ with tf.variable_scope('encoder_1', reuse=tf.AUTO_REUSE):
+ features = mlp_conv(features, [512, 1024])
+ features = point_maxpool(features, npts)
+ return features
+
+ def create_decoder(self, features):
+ with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
+ coarse = mlp(features, [1024, 1024, self.num_coarse * 3])
+ coarse = tf.reshape(coarse, [-1, self.num_coarse, 3])
+
+ with tf.variable_scope('folding', reuse=tf.AUTO_REUSE):
+ grid = tf.meshgrid(tf.linspace(-0.05, 0.05, self.grid_size), tf.linspace(-0.05, 0.05, self.grid_size))
+ grid = tf.expand_dims(tf.reshape(tf.stack(grid, axis=2), [-1, 2]), 0)
+ grid_feat = tf.tile(grid, [features.shape[0], self.num_coarse, 1])
+
+ point_feat = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ point_feat = tf.reshape(point_feat, [-1, self.num_fine, 3])
+
+ global_feat = tf.tile(tf.expand_dims(features, 1), [1, self.num_fine, 1])
+
+ feat = tf.concat([grid_feat, point_feat, global_feat], axis=2)
+
+ center = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ center = tf.reshape(center, [-1, self.num_fine, 3])
+
+ fine = mlp_conv(feat, [512, 512, 3]) + center
+ return coarse, fine
+
+ def create_loss(self, coarse, fine, gt, alpha):
+
+ # print('coarse shape:', coarse.shape)
+ # print('fine shape:', fine.shape)
+ # print('gt shape:', gt.shape)
+
+ loss_coarse = chamfer(coarse, gt)
+ add_train_summary('train/coarse_loss', loss_coarse)
+ update_coarse = add_valid_summary('valid/coarse_loss', loss_coarse)
+
+ loss_fine = chamfer(fine, gt)
+ add_train_summary('train/fine_loss', loss_fine)
+ update_fine = add_valid_summary('valid/fine_loss', loss_fine)
+
+ loss = loss_coarse + alpha * loss_fine
+ add_train_summary('train/loss', loss)
+ update_loss = add_valid_summary('valid/loss', loss)
+
+ return loss, [update_coarse, update_fine, update_loss]
diff --git a/zoo/OcCo/OcCo_TF/completion_models/pcn_emd.py b/zoo/OcCo/OcCo_TF/completion_models/pcn_emd.py
new file mode 100644
index 0000000..a33677c
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/completion_models/pcn_emd.py
@@ -0,0 +1,74 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+# Author: Wentao Yuan (wyuan1@cs.cmu.edu) 05/31/2018
+
+import tensorflow as tf
+from utils.tf_util import mlp_conv, point_maxpool, point_unpool, mlp, add_train_summary, \
+ add_valid_summary, earth_mover, chamfer
+
+
+class Model:
+ def __init__(self, inputs, npts, gt, alpha, **kwargs):
+ self.num_coarse = 1024
+ self.grid_size = 4
+ self.grid_scale = 0.05
+ self.num_fine = self.grid_size ** 2 * self.num_coarse
+ self.features = self.create_encoder(inputs, npts)
+ self.coarse, self.fine = self.create_decoder(self.features)
+ self.loss, self.update = self.create_loss(self.coarse, self.fine, gt, alpha)
+ self.outputs = self.fine
+ self.visualize_ops = [tf.split(inputs[0], npts, axis=0), self.coarse, self.fine, gt]
+ self.visualize_titles = ['input', 'coarse output', 'fine output', 'ground truth']
+
+ def create_encoder(self, inputs, npts):
+ with tf.variable_scope('encoder_0', reuse=tf.AUTO_REUSE):
+ features = mlp_conv(inputs, [128, 256])
+ features_global = point_unpool(point_maxpool(features, npts, keepdims=True), npts)
+ features = tf.concat([features, features_global], axis=2)
+ with tf.variable_scope('encoder_1', reuse=tf.AUTO_REUSE):
+ features = mlp_conv(features, [512, 1024])
+ features = point_maxpool(features, npts)
+ return features
+
+ def create_decoder(self, features):
+ with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
+ coarse = mlp(features, [1024, 1024, self.num_coarse * 3])
+ coarse = tf.reshape(coarse, [-1, self.num_coarse, 3])
+
+ with tf.variable_scope('folding', reuse=tf.AUTO_REUSE):
+ x = tf.linspace(-self.grid_scale, self.grid_scale, self.grid_size)
+ y = tf.linspace(-self.grid_scale, self.grid_scale, self.grid_size)
+ grid = tf.meshgrid(x, y)
+ grid = tf.expand_dims(tf.reshape(tf.stack(grid, axis=2), [-1, 2]), 0)
+ grid_feat = tf.tile(grid, [features.shape[0], self.num_coarse, 1])
+
+ point_feat = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ point_feat = tf.reshape(point_feat, [-1, self.num_fine, 3])
+
+ global_feat = tf.tile(tf.expand_dims(features, 1), [1, self.num_fine, 1])
+
+ feat = tf.concat([grid_feat, point_feat, global_feat], axis=2)
+
+ center = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ center = tf.reshape(center, [-1, self.num_fine, 3])
+
+ fine = mlp_conv(feat, [512, 512, 3]) + center
+ return coarse, fine
+
+ def create_loss(self, coarse, fine, gt, alpha):
+
+ gt_ds = gt[:, :coarse.shape[1], :]
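+        # Note (added for clarity): EMD requires both point sets to have equal
+        # cardinality, so the ground truth is truncated to num_coarse points
+        # here; the fine loss below keeps Chamfer distance, which tolerates
+        # unequal sizes.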
+
+ loss_coarse = earth_mover(coarse, gt_ds)
+ add_train_summary('train/coarse_loss', loss_coarse)
+ update_coarse = add_valid_summary('valid/coarse_loss', loss_coarse)
+
+ loss_fine = chamfer(fine, gt)
+ add_train_summary('train/fine_loss', loss_fine)
+ update_fine = add_valid_summary('valid/fine_loss', loss_fine)
+
+ loss = loss_coarse + alpha * loss_fine
+ add_train_summary('train/loss', loss)
+ update_loss = add_valid_summary('valid/loss', loss)
+
+ return loss, [update_coarse, update_fine, update_loss]
diff --git a/zoo/OcCo/OcCo_TF/completion_models/pointnet_cd.py b/zoo/OcCo/OcCo_TF/completion_models/pointnet_cd.py
new file mode 100644
index 0000000..4ae37b0
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/completion_models/pointnet_cd.py
@@ -0,0 +1,120 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import os, sys, tensorflow as tf
+BASE_DIR = os.path.dirname(__file__)
+sys.path.append(BASE_DIR)
+sys.path.append(os.path.join(BASE_DIR, '../utils'))
+sys.path.append('../')
+from utils.tf_util import conv2d, mlp, mlp_conv, chamfer, add_valid_summary, add_train_summary, max_pool2d
+from utils.transform_nets import input_transform_net, feature_transform_net
+from train_completion import BATCH_SIZE, NUM_POINT
+
+
+class Model:
+ def __init__(self, inputs, npts, gt, alpha, **kwargs):
+ self.__dict__.update(kwargs) # batch_decay and is_training
+ self.num_output_points = 16384 # 1024 * 16
+ self.num_coarse = 1024
+ self.grid_size = 4
+ self.grid_scale = 0.05
+ self.num_fine = self.grid_size ** 2 * self.num_coarse
+ self.features = self.create_encoder(inputs, npts)
+ self.coarse, self.fine = self.create_decoder(self.features)
+ self.loss, self.update = self.create_loss(gt, alpha)
+ self.outputs = self.fine
+ self.visualize_ops = [tf.split(inputs[0], npts, axis=0), self.coarse, self.fine, gt]
+ self.visualize_titles = ['input', 'coarse output', 'fine output', 'ground truth']
+
+ def create_encoder(self, inputs, npts):
+ # with tf.variable_scope('encoder_0', reuse=tf.AUTO_REUSE):
+ # features = mlp_conv(inputs, [128, 256])
+ # features_global = tf.reduce_max(features, axis=1, keep_dims=True, name='maxpool_0')
+ # features = tf.concat([features, tf.tile(features_global, [1, tf.shape(inputs)[1], 1])], axis=2)
+ # with tf.variable_scope('encoder_1', reuse=tf.AUTO_REUSE):
+ # features = mlp_conv(features, [512, 1024])
+ # features = tf.reduce_max(features, axis=1, name='maxpool_1')
+ # end_points = {}
+
+ # if DATASET =='modelnet40':
+ inputs = tf.reshape(inputs, (BATCH_SIZE, NUM_POINT, 3))
+
+ with tf.variable_scope('transform_net1') as sc:
+ transform = input_transform_net(inputs, self.is_training, self.bn_decay, K=3)
+
+ point_cloud_transformed = tf.matmul(inputs, transform)
+ input_image = tf.expand_dims(point_cloud_transformed, -1)
+
+ net = conv2d(inputs=input_image, num_output_channels=64, kernel_size=[1, 3],
+ scope='conv1', padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
+ net = conv2d(inputs=net, num_output_channels=64, kernel_size=[1, 1],
+ scope='conv2', padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
+
+ with tf.variable_scope('transform_net2') as sc:
+ transform = feature_transform_net(net, self.is_training, self.bn_decay, K=64)
+ # end_points['transform'] = transform
+ net_transformed = tf.matmul(tf.squeeze(net, axis=[2]), transform)
+ net_transformed = tf.expand_dims(net_transformed, [2])
+
+        '''A conv2d with a 1x1 kernel and unit stride applies the same weight
+        matrix to every point independently, i.e. it acts as a shared per-point MLP.'''
+
+ # use_xavier=True, stddev=1e-3, weight_decay=0.0, activation_fn=tf.nn.relu,
+ net = conv2d(net_transformed, 64, [1, 1],
+ scope='conv3', padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
+ net = conv2d(net, 128, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='conv4', bn_decay=self.bn_decay)
+ net = conv2d(net, 1024, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='conv5', bn_decay=self.bn_decay)
+
+ net = max_pool2d(net, [NUM_POINT, 1],
+ padding='VALID', scope='maxpool')
+
+ features = tf.reshape(net, [BATCH_SIZE, -1])
+ return features
+
+ def create_decoder(self, features):
+ with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
+ coarse = mlp(features, [1024, 1024, self.num_coarse * 3])
+ coarse = tf.reshape(coarse, [-1, self.num_coarse, 3])
+
+ with tf.variable_scope('folding', reuse=tf.AUTO_REUSE):
+ grid = tf.meshgrid(tf.linspace(-0.05, 0.05, self.grid_size),
+ tf.linspace(-0.05, 0.05, self.grid_size))
+ grid = tf.expand_dims(tf.reshape(tf.stack(grid, axis=2), [-1, 2]), 0)
+ grid_feat = tf.tile(grid, [features.shape[0], self.num_coarse, 1])
+
+ point_feat = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ point_feat = tf.reshape(point_feat, [-1, self.num_fine, 3])
+
+ global_feat = tf.tile(tf.expand_dims(features, 1), [1, self.num_fine, 1])
+
+ feat = tf.concat([grid_feat, point_feat, global_feat], axis=2)
+
+ center = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ center = tf.reshape(center, [-1, self.num_fine, 3])
+
+ fine = mlp_conv(feat, [512, 512, 3]) + center
+ return coarse, fine
+
+ def create_loss(self, gt, alpha):
+
+ loss_coarse = chamfer(self.coarse, gt)
+ add_train_summary('train/coarse_loss', loss_coarse)
+ update_coarse = add_valid_summary('valid/coarse_loss', loss_coarse)
+
+ loss_fine = chamfer(self.fine, gt)
+ add_train_summary('train/fine_loss', loss_fine)
+ update_fine = add_valid_summary('valid/fine_loss', loss_fine)
+
+ loss = loss_coarse + alpha * loss_fine
+ add_train_summary('train/loss', loss)
+ update_loss = add_valid_summary('valid/loss', loss)
+
+ return loss, [update_coarse, update_fine, update_loss]
diff --git a/zoo/OcCo/OcCo_TF/completion_models/pointnet_emd.py b/zoo/OcCo/OcCo_TF/completion_models/pointnet_emd.py
new file mode 100644
index 0000000..5ba9fd8
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/completion_models/pointnet_emd.py
@@ -0,0 +1,122 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import os, sys, tensorflow as tf
+BASE_DIR = os.path.dirname(__file__)
+sys.path.append(BASE_DIR)
+sys.path.append(os.path.join(BASE_DIR, '../utils'))
+sys.path.append('../')
+from utils import tf_util
+from utils.transform_nets import input_transform_net, feature_transform_net
+from train_completion import BATCH_SIZE, NUM_POINT
+
+# BATCH_SIZE = 8 # otherwise set to 8
+# NUM_POINT = 2048 # 3000
+
+
+class Model:
+ def __init__(self, inputs, npts, gt, alpha, **kwargs):
+ self.__dict__.update(kwargs) # batch_decay and is_training
+ self.num_output_points = 16384 # 1024 * 16
+ self.num_coarse = 1024
+ self.grid_size = 4
+ self.grid_scale = 0.05
+ self.num_fine = self.grid_size ** 2 * self.num_coarse
+ self.features = self.create_encoder(inputs, npts)
+ self.coarse, self.fine = self.create_decoder(self.features)
+ self.loss, self.update = self.create_loss(gt, alpha)
+ self.outputs = self.fine
+ self.visualize_ops = [tf.split(inputs[0], npts, axis=0), self.coarse, self.fine, gt]
+ self.visualize_titles = ['input', 'coarse output', 'fine output', 'ground truth']
+
+ def create_encoder(self, inputs, npts):
+ # with tf.variable_scope('encoder_0', reuse=tf.AUTO_REUSE):
+ # features = mlp_conv(inputs, [128, 256])
+ # features_global = tf.reduce_max(features, axis=1, keep_dims=True, name='maxpool_0')
+ # features = tf.concat([features, tf.tile(features_global, [1, tf.shape(inputs)[1], 1])], axis=2)
+ # with tf.variable_scope('encoder_1', reuse=tf.AUTO_REUSE):
+ # features = mlp_conv(features, [512, 1024])
+ # features = tf.reduce_max(features, axis=1, name='maxpool_1')
+ # end_points = {}
+
+ inputs = tf.reshape(inputs, (BATCH_SIZE, NUM_POINT, 3))
+ with tf.variable_scope('transform_net1') as sc:
+ transform = input_transform_net(inputs, self.is_training, self.bn_decay, K=3)
+
+ point_cloud_transformed = tf.matmul(inputs, transform)
+ input_image = tf.expand_dims(point_cloud_transformed, -1)
+
+ net = tf_util.conv2d(inputs=input_image, num_output_channels=64, kernel_size=[1, 3],
+ scope='conv1', padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
+ net = tf_util.conv2d(inputs=net, num_output_channels=64, kernel_size=[1, 1],
+ scope='conv2', padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
+
+ with tf.variable_scope('transform_net2') as sc:
+ transform = feature_transform_net(net, self.is_training, self.bn_decay, K=64)
+ # end_points['transform'] = transform
+ net_transformed = tf.matmul(tf.squeeze(net, axis=[2]), transform)
+ net_transformed = tf.expand_dims(net_transformed, [2])
+
+        '''A conv2d with a 1x1 kernel and unit stride applies the same weight
+        matrix to every point independently, i.e. it acts as a shared per-point MLP.'''
+
+ # use_xavier=True, stddev=1e-3, weight_decay=0.0, activation_fn=tf.nn.relu,
+ net = tf_util.conv2d(net_transformed, 64, [1, 1],
+ scope='conv3', padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
+ net = tf_util.conv2d(net, 128, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='conv4', bn_decay=self.bn_decay)
+ net = tf_util.conv2d(net, 1024, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training,
+ scope='conv5', bn_decay=self.bn_decay)
+
+ net = tf_util.max_pool2d(net, [NUM_POINT, 1],
+ padding='VALID', scope='maxpool')
+
+ features = tf.reshape(net, [BATCH_SIZE, -1])
+ return features
+
+ def create_decoder(self, features):
+ with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
+ coarse = tf_util.mlp(features, [1024, 1024, self.num_coarse * 3])
+ coarse = tf.reshape(coarse, [-1, self.num_coarse, 3])
+
+ with tf.variable_scope('folding', reuse=tf.AUTO_REUSE):
+ grid = tf.meshgrid(tf.linspace(-0.05, 0.05, self.grid_size), tf.linspace(-0.05, 0.05, self.grid_size))
+ grid = tf.expand_dims(tf.reshape(tf.stack(grid, axis=2), [-1, 2]), 0)
+ grid_feat = tf.tile(grid, [features.shape[0], self.num_coarse, 1])
+
+ point_feat = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ point_feat = tf.reshape(point_feat, [-1, self.num_fine, 3])
+
+ global_feat = tf.tile(tf.expand_dims(features, 1), [1, self.num_fine, 1])
+
+ feat = tf.concat([grid_feat, point_feat, global_feat], axis=2)
+
+ center = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ center = tf.reshape(center, [-1, self.num_fine, 3])
+
+ fine = tf_util.mlp_conv(feat, [512, 512, 3]) + center
+ return coarse, fine
+
+ def create_loss(self, gt, alpha):
+
+ gt_ds = gt[:, :self.coarse.shape[1], :]
+ loss_coarse = tf_util.earth_mover(self.coarse, gt_ds)
+ # loss_coarse = earth_mover(coarse, gt_ds)
+ tf_util.add_train_summary('train/coarse_loss', loss_coarse)
+ update_coarse = tf_util.add_valid_summary('valid/coarse_loss', loss_coarse)
+
+ loss_fine = tf_util.chamfer(self.fine, gt)
+ tf_util.add_train_summary('train/fine_loss', loss_fine)
+ update_fine = tf_util.add_valid_summary('valid/fine_loss', loss_fine)
+
+ loss = loss_coarse + alpha * loss_fine
+ tf_util.add_train_summary('train/loss', loss)
+ update_loss = tf_util.add_valid_summary('valid/loss', loss)
+
+ return loss, [update_coarse, update_fine, update_loss]
diff --git a/zoo/OcCo/OcCo_TF/docker/.dockerignore b/zoo/OcCo/OcCo_TF/docker/.dockerignore
new file mode 100644
index 0000000..b8bc3db
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/docker/.dockerignore
@@ -0,0 +1,2 @@
+../data/
+../log/
diff --git a/zoo/OcCo/OcCo_TF/docker/Dockerfile_TF b/zoo/OcCo/OcCo_TF/docker/Dockerfile_TF
new file mode 100644
index 0000000..05315fc
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/docker/Dockerfile_TF
@@ -0,0 +1,47 @@
+FROM tensorflow/tensorflow:1.12.0-gpu-py3
+
+WORKDIR /workspace/OcCo_TF
+RUN mkdir /home/hcw
+RUN chmod -R 777 /home/hcw
+RUN chmod 777 /usr/bin
+RUN chmod 777 /bin
+RUN chmod 777 /usr/local/
+RUN apt-get -y update
+RUN apt-get -y install vim screen libgl1-mesa-glx
+COPY ./Requirements_TF.txt /workspace/OcCo_TF
+RUN pip install -r ./Requirements_TF.txt
+COPY ./pc_distance /workspace/OcCo_TF/pc_distance
+# RUN apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
+# RUN apt-get install wget
+# RUN wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_9.1.85-1_amd64.deb
+# RUN yes|apt -y install ./cuda-repo-ubuntu1604_9.1.85-1_amd64.deb
+# RUN wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb
+# RUN apt -y install ./nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb
+
+# RUN apt-get update
+# Install the NVIDIA driver
+# Issue with driver install requires creating /usr/lib/nvidia
+# RUN mkdir /usr/lib/nvidia
+# RUN apt-get -y -o Dpkg::Options::="--force-overwrite" install --no-install-recommends nvidia-410
+# Reboot. Check that GPUs are visible using the command: nvidia-smi
+
+# Install CUDA and tools. Include optional NCCL 2.x
+# RUN apt install -y --allow-downgrades cuda9.0 cuda-cublas-9-0 cuda-cufft-9-0 cuda-curand-9-0 \
+# cuda-cusolver-9-0 cuda-cusparse-9-0 libcudnn7=7.2.1.38-1+cuda9.0 \
+# libnccl2=2.2.13-1+cuda9.0 cuda-command-line-tools-9-0
+
+# Optional: Install the TensorRT runtime (must be after CUDA install)
+# RUN apt update
+# RUN apt -y install libnvinfer4=4.1.2-1+cuda9.0
+WORKDIR /workspace/OcCo_TF/pc_distance
+RUN make
+RUN chmod -R 777 /workspace/OcCo_TF/pc_distance
+# RUN ln -s /usr/local/cuda/lib64/libcudart.so.10.0 /usr/local/cuda/lib64/libcudart.so.9.0
+RUN ln -s /usr/local/lib/python3.5/dist-packages/tensorflow/libtensorflow_framework.so /usr/local/lib/python3.5/dist-packages/tensorflow/libtensorflow_framework.so.1
+RUN mkdir -p /usr/local/nvidia/lib
+RUN cp /usr/local/lib/python3.5/dist-packages/tensorflow/libtensorflow_framework.so /usr/local/nvidia/lib/libtensorflow_framework.so.1
+
+
+RUN useradd hcw
+USER hcw
+WORKDIR /workspace/OcCo_TF
diff --git a/zoo/OcCo/OcCo_TF/pc_distance/__init__.py b/zoo/OcCo/OcCo_TF/pc_distance/__init__.py
new file mode 100644
index 0000000..480cecf
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/pc_distance/__init__.py
@@ -0,0 +1 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
diff --git a/zoo/OcCo/OcCo_TF/pc_distance/makefile b/zoo/OcCo/OcCo_TF/pc_distance/makefile
new file mode 100644
index 0000000..84a997b
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/pc_distance/makefile
@@ -0,0 +1,26 @@
+cuda_inc = /usr/local/cuda-9.0/include/
+cuda_lib = /usr/local/cuda-9.0/lib64/
+nvcc = /usr/local/cuda-9.0/bin/nvcc
+tf_inc = /usr/local/lib/python3.5/dist-packages/tensorflow/include
+tf_lib = /usr/local/lib/python3.5/dist-packages/tensorflow
+
+all: tf_nndistance_so.so tf_approxmatch_so.so
+
+tf_nndistance.cu.o: tf_nndistance.cu
+ $(nvcc) tf_nndistance.cu -o tf_nndistance.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC
+
+tf_nndistance_so.so: tf_nndistance.cpp tf_nndistance.cu.o
+ g++ tf_nndistance.cpp tf_nndistance.cu.o -o tf_nndistance_so.so \
+ -I $(cuda_inc) -I $(tf_inc) -L $(cuda_lib) -lcudart -L $(tf_lib) -ltensorflow_framework \
+ -shared -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -fPIC -O2
+
+tf_approxmatch.cu.o: tf_approxmatch.cu
+ $(nvcc) tf_approxmatch.cu -o tf_approxmatch.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC
+
+tf_approxmatch_so.so: tf_approxmatch.cpp tf_approxmatch.cu.o
+	g++ $(CPPFLAGS) tf_approxmatch.cpp tf_approxmatch.cu.o -o tf_approxmatch_so.so \
+ -I $(cuda_inc) -I $(tf_inc) -L $(cuda_lib) -lcudart -L $(tf_lib) -ltensorflow_framework \
+ -shared -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -fPIC -O2
+
+clean:
+ rm -rf *.o *.so
diff --git a/zoo/OcCo/OcCo_TF/pc_distance/tf_approxmatch.cpp b/zoo/OcCo/OcCo_TF/pc_distance/tf_approxmatch.cpp
new file mode 100644
index 0000000..e12ffa9
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/pc_distance/tf_approxmatch.cpp
@@ -0,0 +1,329 @@
+#include "tensorflow/core/framework/op.h"
+#include "tensorflow/core/framework/op_kernel.h"
+#include <algorithm>
+#include <vector>
+#include <cmath>
+using namespace tensorflow;
+REGISTER_OP("ApproxMatch")
+ .Input("xyz1: float32")
+ .Input("xyz2: float32")
+ .Output("match: float32");
+REGISTER_OP("MatchCost")
+ .Input("xyz1: float32")
+ .Input("xyz2: float32")
+ .Input("match: float32")
+ .Output("cost: float32");
+REGISTER_OP("MatchCostGrad")
+ .Input("xyz1: float32")
+ .Input("xyz2: float32")
+ .Input("match: float32")
+ .Output("grad1: float32")
+ .Output("grad2: float32");
+
+void approxmatch_cpu(int b,int n,int m,const float * xyz1,const float * xyz2,float * match){
+ for (int i=0;i saturatedl(n,double(factorl)),saturatedr(m,double(factorr));
+ std::vector weight(n*m);
+ for (int j=0;j=-2;j--){
+ //printf("i=%d j=%d\n",i,j);
+ double level=-powf(4.0,j);
+ if (j==-2)
+ level=0;
+ for (int k=0;k ss(m,1e-9);
+ for (int k=0;k ss2(m,0);
+ for (int k=0;kinput(0);
+ OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("ApproxMatch expects (batch_size,num_points,3) xyz1 shape"));
+ auto xyz1_flat=xyz1_tensor.flat<float>();
+ const float * xyz1=&(xyz1_flat(0));
+ int b=xyz1_tensor.shape().dim_size(0);
+ int n=xyz1_tensor.shape().dim_size(1);
+ //OP_REQUIRES(context,n<=4096,errors::InvalidArgument("ApproxMatch handles at most 4096 dataset points"));
+
+ const Tensor& xyz2_tensor=context->input(1);
+ OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("ApproxMatch expects (batch_size,num_points,3) xyz2 shape, and batch_size must match"));
+ int m=xyz2_tensor.shape().dim_size(1);
+ //OP_REQUIRES(context,m<=1024,errors::InvalidArgument("ApproxMatch handles at most 1024 query points"));
+ auto xyz2_flat=xyz2_tensor.flat<float>();
+ const float * xyz2=&(xyz2_flat(0));
+ Tensor * match_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,m,n},&match_tensor));
+ auto match_flat=match_tensor->flat<float>();
+ float * match=&(match_flat(0));
+ Tensor temp_tensor;
+ OP_REQUIRES_OK(context,context->allocate_temp(DataTypeToEnum<float>::value,TensorShape{b,(n+m)*2},&temp_tensor));
+ auto temp_flat=temp_tensor.flat<float>();
+ float * temp=&(temp_flat(0));
+ approxmatchLauncher(b,n,m,xyz1,xyz2,match,temp);
+ }
+};
+REGISTER_KERNEL_BUILDER(Name("ApproxMatch").Device(DEVICE_GPU), ApproxMatchGpuOp);
+class ApproxMatchOp: public OpKernel{
+ public:
+ explicit ApproxMatchOp(OpKernelConstruction* context):OpKernel(context){}
+ void Compute(OpKernelContext * context)override{
+ const Tensor& xyz1_tensor=context->input(0);
+ OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("ApproxMatch expects (batch_size,num_points,3) xyz1 shape"));
+ auto xyz1_flat=xyz1_tensor.flat<float>();
+ const float * xyz1=&(xyz1_flat(0));
+ int b=xyz1_tensor.shape().dim_size(0);
+ int n=xyz1_tensor.shape().dim_size(1);
+ //OP_REQUIRES(context,n<=4096,errors::InvalidArgument("ApproxMatch handles at most 4096 dataset points"));
+
+ const Tensor& xyz2_tensor=context->input(1);
+ OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("ApproxMatch expects (batch_size,num_points,3) xyz2 shape, and batch_size must match"));
+ int m=xyz2_tensor.shape().dim_size(1);
+ //OP_REQUIRES(context,m<=1024,errors::InvalidArgument("ApproxMatch handles at most 1024 query points"));
+ auto xyz2_flat=xyz2_tensor.flat<float>();
+ const float * xyz2=&(xyz2_flat(0));
+ Tensor * match_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,m,n},&match_tensor));
+ auto match_flat=match_tensor->flat<float>();
+ float * match=&(match_flat(0));
+ approxmatch_cpu(b,n,m,xyz1,xyz2,match);
+ }
+};
+REGISTER_KERNEL_BUILDER(Name("ApproxMatch").Device(DEVICE_CPU), ApproxMatchOp);
+class MatchCostGpuOp: public OpKernel{
+ public:
+ explicit MatchCostGpuOp(OpKernelConstruction* context):OpKernel(context){}
+ void Compute(OpKernelContext * context)override{
+ const Tensor& xyz1_tensor=context->input(0);
+ OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("MatchCost expects (batch_size,num_points,3) xyz1 shape"));
+ auto xyz1_flat=xyz1_tensor.flat<float>();
+ const float * xyz1=&(xyz1_flat(0));
+ int b=xyz1_tensor.shape().dim_size(0);
+ int n=xyz1_tensor.shape().dim_size(1);
+
+ const Tensor& xyz2_tensor=context->input(1);
+ OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("MatchCost expects (batch_size,num_points,3) xyz2 shape, and batch_size must match"));
+ int m=xyz2_tensor.shape().dim_size(1);
+ auto xyz2_flat=xyz2_tensor.flat<float>();
+ const float * xyz2=&(xyz2_flat(0));
+
+ const Tensor& match_tensor=context->input(2);
+ OP_REQUIRES(context,match_tensor.dims()==3 && match_tensor.shape().dim_size(0)==b && match_tensor.shape().dim_size(1)==m && match_tensor.shape().dim_size(2)==n,errors::InvalidArgument("MatchCost expects (batch_size,#query,#dataset) match shape"));
+ auto match_flat=match_tensor.flat<float>();
+ const float * match=&(match_flat(0));
+
+ Tensor * cost_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b},&cost_tensor));
+ auto cost_flat=cost_tensor->flat<float>();
+ float * cost=&(cost_flat(0));
+ matchcostLauncher(b,n,m,xyz1,xyz2,match,cost);
+ }
+};
+REGISTER_KERNEL_BUILDER(Name("MatchCost").Device(DEVICE_GPU), MatchCostGpuOp);
+class MatchCostOp: public OpKernel{
+ public:
+ explicit MatchCostOp(OpKernelConstruction* context):OpKernel(context){}
+ void Compute(OpKernelContext * context)override{
+ const Tensor& xyz1_tensor=context->input(0);
+ OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("MatchCost expects (batch_size,num_points,3) xyz1 shape"));
+ auto xyz1_flat=xyz1_tensor.flat<float>();
+ const float * xyz1=&(xyz1_flat(0));
+ int b=xyz1_tensor.shape().dim_size(0);
+ int n=xyz1_tensor.shape().dim_size(1);
+
+ const Tensor& xyz2_tensor=context->input(1);
+ OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("MatchCost expects (batch_size,num_points,3) xyz2 shape, and batch_size must match"));
+ int m=xyz2_tensor.shape().dim_size(1);
+ auto xyz2_flat=xyz2_tensor.flat<float>();
+ const float * xyz2=&(xyz2_flat(0));
+
+ const Tensor& match_tensor=context->input(2);
+ OP_REQUIRES(context,match_tensor.dims()==3 && match_tensor.shape().dim_size(0)==b && match_tensor.shape().dim_size(1)==m && match_tensor.shape().dim_size(2)==n,errors::InvalidArgument("MatchCost expects (batch_size,#query,#dataset) match shape"));
+ auto match_flat=match_tensor.flat<float>();
+ const float * match=&(match_flat(0));
+
+ Tensor * cost_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b},&cost_tensor));
+ auto cost_flat=cost_tensor->flat<float>();
+ float * cost=&(cost_flat(0));
+ matchcost_cpu(b,n,m,xyz1,xyz2,match,cost);
+ }
+};
+REGISTER_KERNEL_BUILDER(Name("MatchCost").Device(DEVICE_CPU), MatchCostOp);
+
+class MatchCostGradGpuOp: public OpKernel{
+ public:
+ explicit MatchCostGradGpuOp(OpKernelConstruction* context):OpKernel(context){}
+ void Compute(OpKernelContext * context)override{
+ const Tensor& xyz1_tensor=context->input(0);
+ OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("MatchCostGrad expects (batch_size,num_points,3) xyz1 shape"));
+ auto xyz1_flat=xyz1_tensor.flat<float>();
+ const float * xyz1=&(xyz1_flat(0));
+ int b=xyz1_tensor.shape().dim_size(0);
+ int n=xyz1_tensor.shape().dim_size(1);
+
+ const Tensor& xyz2_tensor=context->input(1);
+ OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("MatchCostGrad expects (batch_size,num_points,3) xyz2 shape, and batch_size must match"));
+ int m=xyz2_tensor.shape().dim_size(1);
+ auto xyz2_flat=xyz2_tensor.flat<float>();
+ const float * xyz2=&(xyz2_flat(0));
+
+ const Tensor& match_tensor=context->input(2);
+ OP_REQUIRES(context,match_tensor.dims()==3 && match_tensor.shape().dim_size(0)==b && match_tensor.shape().dim_size(1)==m && match_tensor.shape().dim_size(2)==n,errors::InvalidArgument("MatchCost expects (batch_size,#query,#dataset) match shape"));
+ auto match_flat=match_tensor.flat<float>();
+ const float * match=&(match_flat(0));
+
+ Tensor * grad1_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n,3},&grad1_tensor));
+ auto grad1_flat=grad1_tensor->flat<float>();
+ float * grad1=&(grad1_flat(0));
+ Tensor * grad2_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,m,3},&grad2_tensor));
+ auto grad2_flat=grad2_tensor->flat<float>();
+ float * grad2=&(grad2_flat(0));
+ matchcostgradLauncher(b,n,m,xyz1,xyz2,match,grad1,grad2);
+ }
+};
+REGISTER_KERNEL_BUILDER(Name("MatchCostGrad").Device(DEVICE_GPU), MatchCostGradGpuOp);
+class MatchCostGradOp: public OpKernel{
+ public:
+ explicit MatchCostGradOp(OpKernelConstruction* context):OpKernel(context){}
+ void Compute(OpKernelContext * context)override{
+ const Tensor& xyz1_tensor=context->input(0);
+ OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("MatchCost expects (batch_size,num_points,3) xyz1 shape"));
+ auto xyz1_flat=xyz1_tensor.flat<float>();
+ const float * xyz1=&(xyz1_flat(0));
+ int b=xyz1_tensor.shape().dim_size(0);
+ int n=xyz1_tensor.shape().dim_size(1);
+
+ const Tensor& xyz2_tensor=context->input(1);
+ OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("MatchCost expects (batch_size,num_points,3) xyz2 shape, and batch_size must match"));
+ int m=xyz2_tensor.shape().dim_size(1);
+ auto xyz2_flat=xyz2_tensor.flat<float>();
+ const float * xyz2=&(xyz2_flat(0));
+
+ const Tensor& match_tensor=context->input(2);
+ OP_REQUIRES(context,match_tensor.dims()==3 && match_tensor.shape().dim_size(0)==b && match_tensor.shape().dim_size(1)==m && match_tensor.shape().dim_size(2)==n,errors::InvalidArgument("MatchCost expects (batch_size,#query,#dataset) match shape"));
+ auto match_flat=match_tensor.flat<float>();
+ const float * match=&(match_flat(0));
+
+ Tensor * grad1_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n,3},&grad1_tensor));
+ auto grad1_flat=grad1_tensor->flat<float>();
+ float * grad1=&(grad1_flat(0));
+ Tensor * grad2_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,m,3},&grad2_tensor));
+ auto grad2_flat=grad2_tensor->flat<float>();
+ float * grad2=&(grad2_flat(0));
+ matchcostgrad_cpu(b,n,m,xyz1,xyz2,match,grad1,grad2);
+ }
+};
+REGISTER_KERNEL_BUILDER(Name("MatchCostGrad").Device(DEVICE_CPU), MatchCostGradOp);
diff --git a/zoo/OcCo/OcCo_TF/pc_distance/tf_approxmatch.cu b/zoo/OcCo/OcCo_TF/pc_distance/tf_approxmatch.cu
new file mode 100644
index 0000000..33c8e26
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/pc_distance/tf_approxmatch.cu
@@ -0,0 +1,296 @@
+__global__ void approxmatch(int b,int n,int m,const float * __restrict__ xyz1,const float * __restrict__ xyz2,float * __restrict__ match,float * temp){
+ float * remainL=temp+blockIdx.x*(n+m)*2, * remainR=temp+blockIdx.x*(n+m)*2+n,*ratioL=temp+blockIdx.x*(n+m)*2+n+m,*ratioR=temp+blockIdx.x*(n+m)*2+n+m+n;
+ float multiL,multiR;
+ if (n>=m){
+ multiL=1;
+ multiR=n/m;
+ }else{
+ multiL=m/n;
+ multiR=1;
+ }
+ const int Block=1024;
+ __shared__ float buf[Block*4];
+ for (int i=blockIdx.x;i=-2;j--){
+ float level=-powf(4.0f,j);
+ if (j==-2){
+ level=0;
+ }
+ for (int k0=0;k0>>(b,n,m,xyz1,xyz2,match,temp);
+}
+__global__ void matchcost(int b,int n,int m,const float * __restrict__ xyz1,const float * __restrict__ xyz2,const float * __restrict__ match,float * __restrict__ out){
+ __shared__ float allsum[512];
+ const int Block=1024;
+ __shared__ float buf[Block*3];
+ for (int i=blockIdx.x;i>>(b,n,m,xyz1,xyz2,match,out);
+}
+__global__ void matchcostgrad2(int b,int n,int m,const float * __restrict__ xyz1,const float * __restrict__ xyz2,const float * __restrict__ match,float * __restrict__ grad2){
+ __shared__ float sum_grad[256*3];
+ for (int i=blockIdx.x;i>>(b,n,m,xyz1,xyz2,match,grad1);
+ matchcostgrad2<<>>(b,n,m,xyz1,xyz2,match,grad2);
+}
+
diff --git a/zoo/OcCo/OcCo_TF/pc_distance/tf_approxmatch.py b/zoo/OcCo/OcCo_TF/pc_distance/tf_approxmatch.py
new file mode 100644
index 0000000..5ef4180
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/pc_distance/tf_approxmatch.py
@@ -0,0 +1,122 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import tensorflow as tf
+from tensorflow.python.framework import ops  # needed to register shape and gradient functions for the custom op
+import os.path as osp
+
+base_dir = osp.dirname(osp.abspath(__file__))
+approxmatch_module = tf.load_op_library(osp.join(base_dir, 'tf_approxmatch_so.so'))
+
+
+def approx_match(xyz1, xyz2):
+ """
+ :param xyz1: batch_size * #dataset_points * 3
+ :param xyz2: batch_size * #query_points * 3
+ :return:
+ match : batch_size * #query_points * #dataset_points
+ """
+
+ return approxmatch_module.approx_match(xyz1, xyz2)
+
+
+ops.NoGradient('ApproxMatch')
+# @tf.RegisterShape('ApproxMatch')
+@ops.RegisterShape('ApproxMatch')
+def _approx_match_shape(op):
+ shape1 = op.inputs[0].get_shape().with_rank(3)
+ shape2 = op.inputs[1].get_shape().with_rank(3)
+ return [tf.TensorShape([shape1.dims[0], shape2.dims[1], shape1.dims[1]])]
+
+
+def match_cost(xyz1, xyz2, match):
+ """
+ :param xyz1: batch_size * #dataset_points * 3
+ :param xyz2: batch_size * #query_points * 3
+ :param match: batch_size * #query_points * #dataset_points
+ :return: cost : batch_size,
+ """
+ return approxmatch_module.match_cost(xyz1, xyz2, match)
+
+
+# @tf.RegisterShape('MatchCost')
+@ops.RegisterShape('MatchCost')
+def _match_cost_shape(op):
+ shape1 = op.inputs[0].get_shape().with_rank(3)
+ # shape2 = op.inputs[1].get_shape().with_rank(3)
+ # shape3 = op.inputs[2].get_shape().with_rank(3)
+ return [tf.TensorShape([shape1.dims[0]])]
+
+
+@tf.RegisterGradient('MatchCost')
+def _match_cost_grad(op, grad_cost):
+ xyz1 = op.inputs[0]
+ xyz2 = op.inputs[1]
+ match = op.inputs[2]
+ grad_1, grad_2 = approxmatch_module.match_cost_grad(xyz1, xyz2, match)
+ return [grad_1 * tf.expand_dims(tf.expand_dims(grad_cost, 1), 2),
+ grad_2 * tf.expand_dims(tf.expand_dims(grad_cost, 1), 2), None]
+
+
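+# A minimal usage sketch (tensor names are illustrative): an EMD-style loss
+# between a prediction `pred` and ground truth `gt`, both of shape (B, N, 3):
+#   match = approx_match(gt, pred)
+#   emd_loss = tf.reduce_mean(match_cost(gt, pred, match))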
+if __name__ == '__main__':
+ alpha = 0.5
+ beta = 2.0
+ # import bestmatch
+ import numpy as np
+ # import math
+ import random
+ import cv2
+
+ # import tf_nndistance
+
+ npoint = 100
+
+ with tf.device('/gpu:2'):
+ pt_in = tf.placeholder(tf.float32, shape=(1, npoint * 4, 3))
+ mypoints = tf.Variable(np.random.randn(1, npoint, 3).astype('float32'))
+ match = approx_match(pt_in, mypoints)
+ loss = tf.reduce_sum(match_cost(pt_in, mypoints, match))
+ # match=approx_match(mypoints,pt_in)
+ # loss=tf.reduce_sum(match_cost(mypoints,pt_in,match))
+ # distf,_,distb,_=tf_nndistance.nn_distance(pt_in,mypoints)
+ # loss=tf.reduce_sum((distf+1e-9)**0.5)*0.5+tf.reduce_sum((distb+1e-9)**0.5)*0.5
+ # loss=tf.reduce_max((distf+1e-9)**0.5)*0.5*npoint+tf.reduce_max((distb+1e-9)**0.5)*0.5*npoint
+
+ optimizer = tf.train.GradientDescentOptimizer(1e-4).minimize(loss)
+ with tf.Session('') as sess:
+ # sess.run(tf.initialize_all_variables())
+ sess.run(tf.global_variables_initializer())
+ while True:
+ meanloss = 0
+ meantrueloss = 0
+ for i in range(1001):
+ # phi=np.random.rand(4*npoint)*math.pi*2
+ # tpoints=(np.hstack([np.cos(phi)[:,None],np.sin(phi)[:,None],(phi*0)[:,None]])*random.random())[None,:,:]
+ # tpoints=((np.random.rand(400)-0.5)[:,None]*[0,2,0]+[(random.random()-0.5)*2,0,0]).astype('float32')[None,:,:]
+ tpoints = np.hstack([np.linspace(-1, 1, 400)[:, None],
+ (random.random() * 2 * np.linspace(1,0,400)**2)[:, None],
+ np.zeros((400,1))])[None, :, :]
+ trainloss, _ = sess.run([loss, optimizer], feed_dict={pt_in: tpoints.astype('float32')})
+ trainloss, trainmatch = sess.run([loss, match], feed_dict={pt_in: tpoints.astype('float32')})
+ # trainmatch=trainmatch.transpose((0,2,1))
+ print('trainloss: %f'%trainloss)
+ show = np.zeros((400,400,3), dtype='uint8')^255
+ trainmypoints = sess.run(mypoints)
+ ''' === visualisation ===
+ for i in range(len(tpoints[0])):
+ u = np.random.choice(range(len(trainmypoints[0])), p=trainmatch[0].T[i])
+ cv2.line(show,
+ (int(tpoints[0][i,1]*100+200),int(tpoints[0][i,0]*100+200)),
+ (int(trainmypoints[0][u,1]*100+200),int(trainmypoints[0][u,0]*100+200)),
+ cv2.cv.CV_RGB(0,255,0))
+ for x, y, z in tpoints[0]:
+ cv2.circle(show, (int(y*100+200), int(x*100+200)), 2, cv2.cv.CV_RGB(255, 0, 0))
+ for x, y, z in trainmypoints[0]:
+ cv2.circle(show, (int(y*100+200),int(x*100+200)), 3, cv2.cv.CV_RGB(0, 0, 255))
+ '''
+ cost = ((tpoints[0][:, None, :] - np.repeat(trainmypoints[0][None, :, :], 4, axis=1))**2).sum(axis=2)**0.5
+ # trueloss=bestmatch.bestmatch(cost)[0]
+ print(trainloss) # true loss
+ # cv2.imshow('show', show)
+ cmd = cv2.waitKey(10) % 256
+ if cmd == ord('q'):
+ break
diff --git a/zoo/OcCo/OcCo_TF/pc_distance/tf_nndistance.cpp b/zoo/OcCo/OcCo_TF/pc_distance/tf_nndistance.cpp
new file mode 100644
index 0000000..46b0c60
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/pc_distance/tf_nndistance.cpp
@@ -0,0 +1,254 @@
+#include "tensorflow/core/framework/op.h"
+#include "tensorflow/core/framework/op_kernel.h"
+REGISTER_OP("NnDistance")
+ .Input("xyz1: float32")
+ .Input("xyz2: float32")
+ .Output("dist1: float32")
+ .Output("idx1: int32")
+ .Output("dist2: float32")
+ .Output("idx2: int32");
+REGISTER_OP("NnDistanceGrad")
+ .Input("xyz1: float32")
+ .Input("xyz2: float32")
+ .Input("grad_dist1: float32")
+ .Input("idx1: int32")
+ .Input("grad_dist2: float32")
+ .Input("idx2: int32")
+ .Output("grad_xyz1: float32")
+ .Output("grad_xyz2: float32");
+using namespace tensorflow;
+
+static void nnsearch(int b,int n,int m,const float * xyz1,const float * xyz2,float * dist,int * idx){
+ for (int i=0;iinput(0);
+ const Tensor& xyz2_tensor=context->input(1);
+ OP_REQUIRES(context,xyz1_tensor.dims()==3,errors::InvalidArgument("NnDistance requires xyz1 be of shape (batch,#points,3)"));
+ OP_REQUIRES(context,xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistance only accepts 3d point set xyz1"));
+ int b=xyz1_tensor.shape().dim_size(0);
+ int n=xyz1_tensor.shape().dim_size(1);
+ OP_REQUIRES(context,xyz2_tensor.dims()==3,errors::InvalidArgument("NnDistance requires xyz2 be of shape (batch,#points,3)"));
+ OP_REQUIRES(context,xyz2_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistance only accepts 3d point set xyz2"));
+ int m=xyz2_tensor.shape().dim_size(1);
+ OP_REQUIRES(context,xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("NnDistance expects xyz1 and xyz2 have same batch size"));
+ auto xyz1_flat=xyz1_tensor.flat<float>();
+ const float * xyz1=&xyz1_flat(0);
+ auto xyz2_flat=xyz2_tensor.flat<float>();
+ const float * xyz2=&xyz2_flat(0);
+ Tensor * dist1_tensor=NULL;
+ Tensor * idx1_tensor=NULL;
+ Tensor * dist2_tensor=NULL;
+ Tensor * idx2_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n},&dist1_tensor));
+ OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,n},&idx1_tensor));
+ auto dist1_flat=dist1_tensor->flat<float>();
+ auto idx1_flat=idx1_tensor->flat<int>();
+ OP_REQUIRES_OK(context,context->allocate_output(2,TensorShape{b,m},&dist2_tensor));
+ OP_REQUIRES_OK(context,context->allocate_output(3,TensorShape{b,m},&idx2_tensor));
+ auto dist2_flat=dist2_tensor->flat<float>();
+ auto idx2_flat=idx2_tensor->flat<int>();
+ float * dist1=&(dist1_flat(0));
+ int * idx1=&(idx1_flat(0));
+ float * dist2=&(dist2_flat(0));
+ int * idx2=&(idx2_flat(0));
+ nnsearch(b,n,m,xyz1,xyz2,dist1,idx1);
+ nnsearch(b,m,n,xyz2,xyz1,dist2,idx2);
+ }
+};
+REGISTER_KERNEL_BUILDER(Name("NnDistance").Device(DEVICE_CPU), NnDistanceOp);
+class NnDistanceGradOp : public OpKernel{
+ public:
+ explicit NnDistanceGradOp(OpKernelConstruction* context):OpKernel(context){}
+ void Compute(OpKernelContext * context)override{
+ const Tensor& xyz1_tensor=context->input(0);
+ const Tensor& xyz2_tensor=context->input(1);
+ const Tensor& grad_dist1_tensor=context->input(2);
+ const Tensor& idx1_tensor=context->input(3);
+ const Tensor& grad_dist2_tensor=context->input(4);
+ const Tensor& idx2_tensor=context->input(5);
+ OP_REQUIRES(context,xyz1_tensor.dims()==3,errors::InvalidArgument("NnDistanceGrad requires xyz1 be of shape (batch,#points,3)"));
+ OP_REQUIRES(context,xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistanceGrad only accepts 3d point set xyz1"));
+ int b=xyz1_tensor.shape().dim_size(0);
+ int n=xyz1_tensor.shape().dim_size(1);
+ OP_REQUIRES(context,xyz2_tensor.dims()==3,errors::InvalidArgument("NnDistanceGrad requires xyz2 be of shape (batch,#points,3)"));
+ OP_REQUIRES(context,xyz2_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistanceGrad only accepts 3d point set xyz2"));
+ int m=xyz2_tensor.shape().dim_size(1);
+ OP_REQUIRES(context,xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("NnDistanceGrad expects xyz1 and xyz2 have same batch size"));
+ OP_REQUIRES(context,grad_dist1_tensor.shape()==(TensorShape{b,n}),errors::InvalidArgument("NnDistanceGrad requires grad_dist1 be of shape(batch,#points)"));
+ OP_REQUIRES(context,idx1_tensor.shape()==(TensorShape{b,n}),errors::InvalidArgument("NnDistanceGrad requires idx1 be of shape(batch,#points)"));
+ OP_REQUIRES(context,grad_dist2_tensor.shape()==(TensorShape{b,m}),errors::InvalidArgument("NnDistanceGrad requires grad_dist2 be of shape(batch,#points)"));
+ OP_REQUIRES(context,idx2_tensor.shape()==(TensorShape{b,m}),errors::InvalidArgument("NnDistanceGrad requires idx2 be of shape(batch,#points)"));
+ auto xyz1_flat=xyz1_tensor.flat<float>();
+ const float * xyz1=&xyz1_flat(0);
+ auto xyz2_flat=xyz2_tensor.flat<float>();
+ const float * xyz2=&xyz2_flat(0);
+ auto idx1_flat=idx1_tensor.flat<int>();
+ const int * idx1=&idx1_flat(0);
+ auto idx2_flat=idx2_tensor.flat<int>();
+ const int * idx2=&idx2_flat(0);
+ auto grad_dist1_flat=grad_dist1_tensor.flat<float>();
+ const float * grad_dist1=&grad_dist1_flat(0);
+ auto grad_dist2_flat=grad_dist2_tensor.flat<float>();
+ const float * grad_dist2=&grad_dist2_flat(0);
+ Tensor * grad_xyz1_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n,3},&grad_xyz1_tensor));
+ Tensor * grad_xyz2_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,m,3},&grad_xyz2_tensor));
+ auto grad_xyz1_flat=grad_xyz1_tensor->flat<float>();
+ float * grad_xyz1=&grad_xyz1_flat(0);
+ auto grad_xyz2_flat=grad_xyz2_tensor->flat<float>();
+ float * grad_xyz2=&grad_xyz2_flat(0);
+ for (int i=0;iinput(0);
+ const Tensor& xyz2_tensor=context->input(1);
+ OP_REQUIRES(context,xyz1_tensor.dims()==3,errors::InvalidArgument("NnDistance requires xyz1 be of shape (batch,#points,3)"));
+ OP_REQUIRES(context,xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistance only accepts 3d point set xyz1"));
+ int b=xyz1_tensor.shape().dim_size(0);
+ int n=xyz1_tensor.shape().dim_size(1);
+ OP_REQUIRES(context,xyz2_tensor.dims()==3,errors::InvalidArgument("NnDistance requires xyz2 be of shape (batch,#points,3)"));
+ OP_REQUIRES(context,xyz2_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistance only accepts 3d point set xyz2"));
+ int m=xyz2_tensor.shape().dim_size(1);
+ OP_REQUIRES(context,xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("NnDistance expects xyz1 and xyz2 have same batch size"));
+ auto xyz1_flat=xyz1_tensor.flat<float>();
+ const float * xyz1=&xyz1_flat(0);
+ auto xyz2_flat=xyz2_tensor.flat<float>();
+ const float * xyz2=&xyz2_flat(0);
+ Tensor * dist1_tensor=NULL;
+ Tensor * idx1_tensor=NULL;
+ Tensor * dist2_tensor=NULL;
+ Tensor * idx2_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n},&dist1_tensor));
+ OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,n},&idx1_tensor));
+ auto dist1_flat=dist1_tensor->flat<float>();
+ auto idx1_flat=idx1_tensor->flat<int>();
+ OP_REQUIRES_OK(context,context->allocate_output(2,TensorShape{b,m},&dist2_tensor));
+ OP_REQUIRES_OK(context,context->allocate_output(3,TensorShape{b,m},&idx2_tensor));
+ auto dist2_flat=dist2_tensor->flat<float>();
+ auto idx2_flat=idx2_tensor->flat<int>();
+ float * dist1=&(dist1_flat(0));
+ int * idx1=&(idx1_flat(0));
+ float * dist2=&(dist2_flat(0));
+ int * idx2=&(idx2_flat(0));
+ NmDistanceKernelLauncher(b,n,xyz1,m,xyz2,dist1,idx1,dist2,idx2);
+ }
+};
+REGISTER_KERNEL_BUILDER(Name("NnDistance").Device(DEVICE_GPU), NnDistanceGpuOp);
+
+void NmDistanceGradKernelLauncher(int b,int n,const float * xyz1,int m,const float * xyz2,const float * grad_dist1,const int * idx1,const float * grad_dist2,const int * idx2,float * grad_xyz1,float * grad_xyz2);
+class NnDistanceGradGpuOp : public OpKernel{
+ public:
+ explicit NnDistanceGradGpuOp(OpKernelConstruction* context):OpKernel(context){}
+ void Compute(OpKernelContext * context)override{
+ const Tensor& xyz1_tensor=context->input(0);
+ const Tensor& xyz2_tensor=context->input(1);
+ const Tensor& grad_dist1_tensor=context->input(2);
+ const Tensor& idx1_tensor=context->input(3);
+ const Tensor& grad_dist2_tensor=context->input(4);
+ const Tensor& idx2_tensor=context->input(5);
+ OP_REQUIRES(context,xyz1_tensor.dims()==3,errors::InvalidArgument("NnDistanceGrad requires xyz1 be of shape (batch,#points,3)"));
+ OP_REQUIRES(context,xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistanceGrad only accepts 3d point set xyz1"));
+ int b=xyz1_tensor.shape().dim_size(0);
+ int n=xyz1_tensor.shape().dim_size(1);
+ OP_REQUIRES(context,xyz2_tensor.dims()==3,errors::InvalidArgument("NnDistanceGrad requires xyz2 be of shape (batch,#points,3)"));
+ OP_REQUIRES(context,xyz2_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistanceGrad only accepts 3d point set xyz2"));
+ int m=xyz2_tensor.shape().dim_size(1);
+ OP_REQUIRES(context,xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("NnDistanceGrad expects xyz1 and xyz2 have same batch size"));
+ OP_REQUIRES(context,grad_dist1_tensor.shape()==(TensorShape{b,n}),errors::InvalidArgument("NnDistanceGrad requires grad_dist1 be of shape(batch,#points)"));
+ OP_REQUIRES(context,idx1_tensor.shape()==(TensorShape{b,n}),errors::InvalidArgument("NnDistanceGrad requires idx1 be of shape(batch,#points)"));
+ OP_REQUIRES(context,grad_dist2_tensor.shape()==(TensorShape{b,m}),errors::InvalidArgument("NnDistanceGrad requires grad_dist2 be of shape(batch,#points)"));
+ OP_REQUIRES(context,idx2_tensor.shape()==(TensorShape{b,m}),errors::InvalidArgument("NnDistanceGrad requires idx2 be of shape(batch,#points)"));
+ auto xyz1_flat=xyz1_tensor.flat<float>();
+ const float * xyz1=&xyz1_flat(0);
+ auto xyz2_flat=xyz2_tensor.flat<float>();
+ const float * xyz2=&xyz2_flat(0);
+ auto idx1_flat=idx1_tensor.flat<int>();
+ const int * idx1=&idx1_flat(0);
+ auto idx2_flat=idx2_tensor.flat<int>();
+ const int * idx2=&idx2_flat(0);
+ auto grad_dist1_flat=grad_dist1_tensor.flat<float>();
+ const float * grad_dist1=&grad_dist1_flat(0);
+ auto grad_dist2_flat=grad_dist2_tensor.flat<float>();
+ const float * grad_dist2=&grad_dist2_flat(0);
+ Tensor * grad_xyz1_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n,3},&grad_xyz1_tensor));
+ Tensor * grad_xyz2_tensor=NULL;
+ OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,m,3},&grad_xyz2_tensor));
+ auto grad_xyz1_flat=grad_xyz1_tensor->flat<float>();
+ float * grad_xyz1=&grad_xyz1_flat(0);
+ auto grad_xyz2_flat=grad_xyz2_tensor->flat<float>();
+ float * grad_xyz2=&grad_xyz2_flat(0);
+ NmDistanceGradKernelLauncher(b,n,xyz1,m,xyz2,grad_dist1,idx1,grad_dist2,idx2,grad_xyz1,grad_xyz2);
+ }
+};
+REGISTER_KERNEL_BUILDER(Name("NnDistanceGrad").Device(DEVICE_GPU), NnDistanceGradGpuOp);
diff --git a/zoo/OcCo/OcCo_TF/pc_distance/tf_nndistance.cu b/zoo/OcCo/OcCo_TF/pc_distance/tf_nndistance.cu
new file mode 100644
index 0000000..b755122
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/pc_distance/tf_nndistance.cu
@@ -0,0 +1,159 @@
+#if GOOGLE_CUDA
+#define EIGEN_USE_GPU
+// #include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
+
+__global__ void NmDistanceKernel(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i){
+ const int batch=512;
+ __shared__ float buf[batch*3];
+ for (int i=blockIdx.x;ibest){
+ result[(i*n+j)]=best;
+ result_i[(i*n+j)]=best_i;
+ }
+ }
+ __syncthreads();
+ }
+ }
+}
+void NmDistanceKernelLauncher(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i,float * result2,int * result2_i){
+ NmDistanceKernel<<>>(b,n,xyz,m,xyz2,result,result_i);
+ NmDistanceKernel<<>>(b,m,xyz2,n,xyz,result2,result2_i);
+}
+__global__ void NmDistanceGradKernel(int b,int n,const float * xyz1,int m,const float * xyz2,const float * grad_dist1,const int * idx1,float * grad_xyz1,float * grad_xyz2){
+ for (int i=blockIdx.x;i>>(b,n,xyz1,m,xyz2,grad_dist1,idx1,grad_xyz1,grad_xyz2);
+ NmDistanceGradKernel<<>>(b,m,xyz2,n,xyz1,grad_dist2,idx2,grad_xyz2,grad_xyz1);
+}
+
+#endif
diff --git a/zoo/OcCo/OcCo_TF/pc_distance/tf_nndistance.py b/zoo/OcCo/OcCo_TF/pc_distance/tf_nndistance.py
new file mode 100644
index 0000000..7e858ca
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/pc_distance/tf_nndistance.py
@@ -0,0 +1,55 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+"""Scripts for Chamfer Distance"""
+import os, tensorflow as tf
+from tensorflow.python.framework import ops
+os.environ["LD_LIBRARY_PATH"] = "/usr/local/lib/python3.5/dist-packages/tensorflow/"
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+nn_distance_module = tf.load_op_library(os.path.join(BASE_DIR, 'tf_nndistance_so.so'))
+
+
+def nn_distance(xyz1, xyz2):
+ """
+ Computes the distance of nearest neighbors for a pair of point clouds
+ input: xyz1: (batch_size,#points_1,3) the first point cloud
+ input: xyz2: (batch_size,#points_2,3) the second point cloud
+ output: dist1: (batch_size,#point_1) distance from first to second
+ output: idx1: (batch_size,#point_1) nearest neighbor from first to second
+ output: dist2: (batch_size,#point_2) distance from second to first
+ output: idx2: (batch_size,#point_2) nearest neighbor from second to first
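+ note: dist1/dist2 are squared Euclidean distances (no square root is applied)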
+ """
+ return nn_distance_module.nn_distance(xyz1, xyz2)
+
+
+@ops.RegisterGradient('NnDistance')
+def _nn_distance_grad(op, grad_dist1, grad_idx1, grad_dist2, grad_idx2):
+ xyz1 = op.inputs[0]
+ xyz2 = op.inputs[1]
+ idx1 = op.outputs[1]
+ idx2 = op.outputs[3]
+ return nn_distance_module.nn_distance_grad(xyz1, xyz2, grad_dist1, idx1, grad_dist2, idx2)
+
+
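+# A minimal usage sketch (tensor names are illustrative): a symmetric Chamfer
+# loss between two (B, N, 3) clouds `pred` and `gt`:
+#   dist1, _, dist2, _ = nn_distance(pred, gt)
+#   chamfer = tf.reduce_mean(dist1) + tf.reduce_mean(dist2)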
+if __name__ == '__main__':
+ import random, numpy as np
+ random.seed(100)
+ np.random.seed(100)
+
+ ''' === test code ==='''
+ with tf.Session('') as sess:
+ xyz1 = np.random.randn(32, 16384, 3).astype('float32')
+ xyz2 = np.random.randn(32, 1024, 3).astype('float32')
+ # with tf.device('/gpu:0'):
+ if True:
+ inp1 = tf.Variable(xyz1)
+ inp2 = tf.constant(xyz2)
+ reta, retb, retc, retd = nn_distance(inp1, inp2)
+ loss = tf.reduce_mean(reta) + tf.reduce_mean(retc)
+ train = tf.train.GradientDescentOptimizer(learning_rate=0.05).minimize(loss)
+ sess.run(tf.global_variables_initializer())
+
+ best = 1e100
+ for i in range(1):
+ # loss_val, _ = sess.run([loss, train])
+ loss_val = sess.run(loss)  # avoid rebinding `loss`, which would shadow the tensor
+ best = min(best, loss_val)
+ print(i, loss_val, best)
diff --git a/zoo/OcCo/OcCo_TF/readme.md b/zoo/OcCo/OcCo_TF/readme.md
new file mode 100644
index 0000000..1f4039b
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/readme.md
@@ -0,0 +1,4 @@
+## OcCo in TensorFlow
+
+
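+A minimal sketch of the setup, following `docker/Dockerfile_TF`:
+
+- `pip install -r Requirements_TF.txt`
+- `cd pc_distance && make` to compile the Chamfer/EMD ops (`tf_nndistance_so.so`, `tf_approxmatch_so.so`)
+- `python train_cls.py --model pointnet_cls --dataset modelnet40` to train a classifier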
+
diff --git a/zoo/OcCo/OcCo_TF/train_cls.py b/zoo/OcCo/OcCo_TF/train_cls.py
new file mode 100644
index 0000000..4cab66a
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/train_cls.py
@@ -0,0 +1,292 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import os, sys, pdb, time, argparse, datetime, importlib, numpy as np, tensorflow as tf
+from termcolor import colored
+from utils.Dataset_Assign import Dataset_Assign
+from utils.EarlyStoppingCriterion import EarlyStoppingCriterion
+from utils.tf_util import add_train_summary, get_bn_decay, get_learning_rate
+from utils.io_util import shuffle_data, loadh5DataFile
+from utils.pc_util import rotate_point_cloud, jitter_point_cloud, random_point_dropout, \
+ random_scale_point_cloud, random_shift_point_cloud
+
+# from utils.transfer_pretrained_w import load_pretrained_var
+
+parser = argparse.ArgumentParser()
+
+''' === Basic Learning Settings === '''
+parser.add_argument('--gpu', type=int, default=1)
+parser.add_argument('--log_dir', default='log/log_cls/pointnet_cls')
+parser.add_argument('--model', default='pointnet_cls')
+parser.add_argument('--epoch', type=int, default=200)
+parser.add_argument('--restore', action='store_true')
+parser.add_argument('--restore_path', default='log/pointnet_cls')
+parser.add_argument('--batch_size', type=int, default=32)
+parser.add_argument('--num_point', type=int, default=1024)
+parser.add_argument('--base_lr', type=float, default=0.001)
+parser.add_argument('--lr_clip', type=float, default=1e-5)
+parser.add_argument('--decay_steps', type=int, default=20)
+parser.add_argument('--decay_rate', type=float, default=0.7)
+# parser.add_argument('--verbose', type=bool, default=True)
+parser.add_argument('--dataset', type=str, default='modelnet40')
+parser.add_argument('--partial', action='store_true')
+parser.add_argument('--filename', type=str, default='')
+parser.add_argument('--data_bn', action='store_true')
+
+''' === Data Augmentation Settings === '''
+parser.add_argument('--data_aug', action='store_true')
+parser.add_argument('--just_save', action='store_true')  # save the randomly initialised model and exit (for pretrained encoder restoration)
+parser.add_argument('--patience', type=int, default=200)  # early-stopping patience; the default of 200 effectively disables it
+parser.add_argument('--fewshot', action='store_true')
+
+args = parser.parse_args()
+
+NUM_CLASSES, NUM_TRAINOBJECTS, TRAIN_FILES, VALID_FILES = Dataset_Assign(
+ dataset=args.dataset, fname=args.filename, partial=args.partial, bn=args.data_bn, few_shot=args.fewshot)
+
+BATCH_SIZE = args.batch_size
+NUM_POINT = args.num_point
+BASE_LR = args.base_lr
+LR_CLIP = args.lr_clip
+DECAY_RATE = args.decay_rate
+# DECAY_STEP = args.decay_steps
+DECAY_STEP = NUM_TRAINOBJECTS//BATCH_SIZE * args.decay_steps
+BN_INIT_DECAY = 0.5
+BN_DECAY_RATE = 0.5
+BN_DECAY_STEP = float(DECAY_STEP)
+BN_DECAY_CLIP = 0.99
+LOG_DIR = args.log_dir
+BEST_EVAL_ACC = 0
+if not os.path.exists(LOG_DIR):
+    os.makedirs(LOG_DIR)
+LOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'a+')
+
+def log_string(out_str):
+ LOG_FOUT.write(out_str + '\n')
+ LOG_FOUT.flush()
+ print(out_str)
+
+
+def train(args):
+
+ log_string('\n\n' + '=' * 44)
+ log_string('Start Training, Time: %s' % datetime.datetime.now())
+ log_string('=' * 44 + '\n\n')
+
+ is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
+ global_step = tf.Variable(0, trainable=False, name='global_step') # will be used in defining train_op
+ inputs_pl = tf.placeholder(tf.float32, (1, None, 3), 'inputs')
+ labels_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'labels')
+ npts_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'num_points')
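+    # The batch is packed into a single (1, B*N, 3) tensor; npts_pl records each
+    # sample's point count (see the feed_dict construction below).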
+
+ bn_decay = get_bn_decay(global_step, BN_INIT_DECAY, BATCH_SIZE, BN_DECAY_STEP, BN_DECAY_RATE, BN_DECAY_CLIP)
+
+ # model_module = importlib.import_module('.%s' % args.model, 'cls_models')
+ # MODEL = model_module.Model(inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=bn_decay)
+ ''' === To fix issues when running on woma === '''
+ ldic = locals()
+ exec('from cls_models.%s import Model' % args.model, globals(), ldic)
+ MODEL = ldic['Model'](inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=bn_decay)
+ pred, loss = MODEL.pred, MODEL.loss
+ tf.summary.scalar('loss', loss)
+
+ # useful information in displaying during training
+ correct = tf.equal(tf.argmax(pred, 1), tf.to_int64(labels_pl))
+ accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE)
+ tf.summary.scalar('accuracy', accuracy)
+
+ learning_rate = get_learning_rate(global_step, BASE_LR, BATCH_SIZE, DECAY_STEP, DECAY_RATE, LR_CLIP)
+ add_train_summary('learning_rate', learning_rate)
+ trainer = tf.train.AdamOptimizer(learning_rate)
+ train_op = trainer.minimize(MODEL.loss, global_step)
+ saver = tf.train.Saver()
+
+ config = tf.ConfigProto()
+ config.gpu_options.allow_growth = True
+ config.allow_soft_placement = True
+ # config.log_device_placement = True
+ sess = tf.Session(config=config)
+
+ merged = tf.summary.merge_all()
+ train_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'), sess.graph)
+ val_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'val'))
+
+ # Init variables
+ init = tf.global_variables_initializer()
+    log_string('\nModel parameters have been initialized\n')
+    sess.run(init, {is_training_pl: True})  # a later restore will overwrite the randomly initialized parameters
+
+ # to save the randomized variables
+ if not args.restore and args.just_save:
+ save_path = saver.save(sess, os.path.join(LOG_DIR, "model.ckpt"))
+ print(colored('random initialised model saved at %s' % save_path, 'white', 'on_blue'))
+ print(colored('just save the model, now exit', 'white', 'on_red'))
+ sys.exit()
+
+    '''current solution: first load the pretrained head, assemble it with the output layers, then save as a checkpoint'''
+ # to partially load the saved head from:
+ # if args.load_pretrained_head:
+ # sess.close()
+ # load_pretrained_head(args.pretrained_head_path, os.path.join(LOG_DIR, 'model.ckpt'), None, args.verbose)
+    # print('shared variables have been restored from ', args.pretrained_head_path)
+ #
+ # sess = tf.Session(config=config)
+ # log_string('\nModel Parameters has been Initialized\n')
+ # sess.run(init, {is_training_pl: True})
+ # saver.restore(sess, tf.train.latest_checkpoint(LOG_DIR))
+ # log_string('\nModel Parameters have been restored with pretrained weights from %s' % args.pretrained_head_path)
+
+ if args.restore:
+ # load_pretrained_var(args.restore_path, os.path.join(LOG_DIR, "model.ckpt"), args.verbose)
+ saver.restore(sess, tf.train.latest_checkpoint(args.restore_path))
+ log_string('\n')
+ log_string(colored('Model Parameters have been restored from %s' % args.restore_path, 'white', 'on_red'))
+
+ for arg in sorted(vars(args)):
+ print(arg + ': ' + str(getattr(args, arg)) + '\n') # log of arguments
+ os.system('cp cls_models/%s.py %s' % (args.model, LOG_DIR)) # bkp of model def
+ os.system('cp train_cls.py %s' % LOG_DIR) # bkp of train procedure
+
+ train_start = time.time()
+
+ ops = {'pointclouds_pl': inputs_pl,
+ 'labels_pl': labels_pl,
+ 'is_training_pl': is_training_pl,
+ 'npts_pl': npts_pl,
+ 'pred': pred,
+ 'loss': loss,
+ 'train_op': train_op,
+ 'merged': merged,
+ 'step': global_step}
+
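+    # Early stopping tracks the best eval accuracy; with the default patience of
+    # 200 it is effectively disabled (see --patience above).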
+ ESC = EarlyStoppingCriterion(patience=args.patience)
+
+ for epoch in range(args.epoch):
+ log_string('\n\n')
+ log_string(colored('**** EPOCH %03d ****' % epoch, 'grey', 'on_green'))
+ sys.stdout.flush()
+
+ '''=== training the model ==='''
+ train_one_epoch(sess, ops, train_writer)
+
+ '''=== evaluating the model ==='''
+ eval_mean_loss, eval_acc, eval_cls_acc = eval_one_epoch(sess, ops, val_writer)
+
+ '''=== check whether to early stop ==='''
+ early_stop, save_checkpoint = ESC.step(eval_acc, epoch=epoch)
+ if save_checkpoint:
+ save_path = saver.save(sess, os.path.join(LOG_DIR, "model.ckpt"))
+ log_string(colored('model saved at %s' % save_path, 'white', 'on_blue'))
+ if early_stop:
+ break
+
+ log_string('total time: %s' % datetime.timedelta(seconds=time.time() - train_start))
+ log_string('stop epoch: %d, best eval acc: %f' % (ESC.best_epoch, ESC.best_dev_score))
+ sess.close()
+
+
+def train_one_epoch(sess, ops, train_writer):
+ is_training = True
+
+ total_correct, total_seen, loss_sum = 0, 0, 0
+ train_file_idxs = np.arange(0, len(TRAIN_FILES))
+ np.random.shuffle(train_file_idxs)
+
+ for fn in range(len(TRAIN_FILES)):
+ current_data, current_label = loadh5DataFile(TRAIN_FILES[train_file_idxs[fn]])
+ current_data = current_data[:, :NUM_POINT, :]
+ current_data, current_label, _ = shuffle_data(current_data, np.squeeze(current_label))
+ current_label = np.squeeze(current_label)
+
+ file_size = current_data.shape[0]
+ num_batches = file_size // BATCH_SIZE
+
+ for batch_idx in range(num_batches):
+ start_idx = batch_idx * BATCH_SIZE
+ end_idx = (batch_idx + 1) * BATCH_SIZE
+ feed_data = current_data[start_idx:end_idx, :, :]
+
+ if args.data_aug:
+ feed_data = random_point_dropout(feed_data)
+ feed_data[:, :, 0:3] = random_scale_point_cloud(feed_data[:, :, 0:3])
+ feed_data[:, :, 0:3] = random_shift_point_cloud(feed_data[:, :, 0:3])
+
+ feed_dict = {
+ ops['pointclouds_pl']: feed_data.reshape([1, BATCH_SIZE * NUM_POINT, 3]),
+ ops['labels_pl']: current_label[start_idx:end_idx].reshape(BATCH_SIZE, ),
+ ops['npts_pl']: [NUM_POINT] * BATCH_SIZE,
+ ops['is_training_pl']: is_training}
+
+ summary, step, _, loss_val, pred_val = sess.run([
+ ops['merged'], ops['step'], ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)
+ train_writer.add_summary(summary, step)
+
+ pred_val = np.argmax(pred_val, 1)
+ correct = np.sum(pred_val == current_label[start_idx:end_idx].reshape(BATCH_SIZE, ))
+ total_correct += correct
+ total_seen += BATCH_SIZE
+ loss_sum += loss_val
+
+ log_string('\n=== training ===')
+ log_string('total correct: %d, total_seen: %d' % (total_correct, total_seen))
+ # log_string('mean batch loss: %f' % (loss_sum / num_batches))
+ log_string('accuracy: %f' % (total_correct / float(total_seen)))
+
+
+def eval_one_epoch(sess, ops, val_writer):
+ is_training = False
+
+ total_correct, total_seen, loss_sum = 0, 0, 0
+ total_seen_class = [0 for _ in range(NUM_CLASSES)]
+ total_correct_class = [0 for _ in range(NUM_CLASSES)]
+
+ for fn in VALID_FILES:
+ current_data, current_label = loadh5DataFile(fn)
+ current_data = current_data[:, :NUM_POINT, :]
+ file_size = current_data.shape[0]
+ num_batches = file_size // BATCH_SIZE
+
+ for batch_idx in range(num_batches):
+ start_idx, end_idx = batch_idx * BATCH_SIZE, (batch_idx + 1) * BATCH_SIZE
+
+ feed_dict = {
+ ops['pointclouds_pl']: current_data[start_idx:end_idx, :, :].reshape([1, BATCH_SIZE * NUM_POINT, 3]),
+ ops['labels_pl']: current_label[start_idx:end_idx].reshape(BATCH_SIZE, ),
+ ops['npts_pl']: np.array([NUM_POINT] * BATCH_SIZE),
+ ops['is_training_pl']: is_training}
+
+ summary, step, loss_val, pred_val = sess.run(
+ [ops['merged'], ops['step'], ops['loss'], ops['pred']], feed_dict=feed_dict)
+ val_writer.add_summary(summary, step)
+ pred_val = np.argmax(pred_val, 1)
+ correct = np.sum(pred_val == current_label[start_idx:end_idx].reshape(BATCH_SIZE, ))
+ total_correct += correct
+ total_seen += BATCH_SIZE
+ loss_sum += (loss_val * BATCH_SIZE)
+
+ for i in range(start_idx, end_idx):
+ l = int(current_label.reshape(-1)[i])
+ total_seen_class[l] += 1
+ total_correct_class[l] += (pred_val[i - start_idx] == l)
+
+ eval_mean_loss = loss_sum / float(total_seen)
+ eval_acc = total_correct / float(total_seen)
+    eval_cls_acc = np.mean(np.array(total_correct_class) / np.array(total_seen_class, dtype=float))
+ log_string('\n=== evaluating ===')
+ log_string('total correct: %d, total_seen: %d' % (total_correct, total_seen))
+ log_string('eval mean loss: %f' % eval_mean_loss)
+ log_string('eval accuracy: %f' % eval_acc)
+ log_string('eval avg class acc: %f' % eval_cls_acc)
+
+ global BEST_EVAL_ACC
+ if eval_acc > BEST_EVAL_ACC:
+ BEST_EVAL_ACC = eval_acc
+ log_string('best eval accuracy: %f' % BEST_EVAL_ACC)
+ return eval_mean_loss, eval_acc, eval_cls_acc
+
+
+if __name__ == '__main__':
+ print('Now Using GPU:%d to train the model' % args.gpu)
+ os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
+ os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu)
+
+ train(args)
+ LOG_FOUT.close()
diff --git a/zoo/OcCo/OcCo_TF/train_cls_dgcnn_torchloader.py b/zoo/OcCo/OcCo_TF/train_cls_dgcnn_torchloader.py
new file mode 100644
index 0000000..d282c2c
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/train_cls_dgcnn_torchloader.py
@@ -0,0 +1,236 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+# Ref: https://github.com/hansen7/NRS_3D/blob/master/train_dgcnn_cls.py
+import os, sys, pdb, shutil, argparse, numpy as np, tensorflow as tf
+from tqdm import tqdm
+from termcolor import colored
+from utils.Train_Logger import TrainLogger
+from utils.Dataset_Assign import Dataset_Assign
+# from utils.tf_util import get_bn_decay, get_lr_dgcnn
+# from utils.io_util import shuffle_data, loadh5DataFile
+# from utils.transfer_pretrained_w import load_pretrained_var
+from utils.pc_util import random_point_dropout, random_scale_point_cloud, random_shift_point_cloud
+from utils.ModelNetDataLoader import General_CLSDataLoader_HDF5
+from torch.utils.data import DataLoader
+
+def parse_args():
+ parser = argparse.ArgumentParser(description='DGCNN Point Cloud Recognition Training Configuration')
+
+ parser.add_argument('--gpu', type=str, default='0')
+ parser.add_argument('--log_dir', default='occo_dgcnn_cls')
+ parser.add_argument('--model', default='dgcnn_cls')
+ parser.add_argument('--epoch', type=int, default=250)
+ parser.add_argument('--restore', action='store_true')
+ parser.add_argument('--restore_path', type=str, default='')
+ parser.add_argument('--batch_size', type=int, default=24)
+ parser.add_argument('--num_points', type=int, default=1024)
+ parser.add_argument('--base_lr', type=float, default=0.001)
+ # parser.add_argument('--decay_steps', type=int, default=20)
+ # parser.add_argument('--decay_rate', type=float, default=0.7)
+ parser.add_argument('--momentum', type=float, default=0.9)
+
+ parser.add_argument('--dataset', type=str, default='modelnet40')
+ parser.add_argument('--filename', type=str, default='')
+ parser.add_argument('--data_bn', action='store_true')
+ parser.add_argument('--partial', action='store_true')
+ parser.add_argument('--data_aug', action='store_true')
+    parser.add_argument('--just_save', action='store_true')  # save the randomly initialised model and exit (for pretrained encoder restoration)
+ parser.add_argument('--fewshot', action='store_true')
+
+ return parser.parse_args()
+
+
+args = parse_args()
+
+DATA_PATH = 'data/modelnet40_normal_resampled/'
+NUM_CLASSES, NUM_TRAINOBJECTS, TRAIN_FILES, VALID_FILES = Dataset_Assign(
+ dataset=args.dataset, fname=args.filename, partial=args.partial, bn=args.data_bn, few_shot=args.fewshot)
+BATCH_SIZE, NUM_POINT = args.batch_size, args.num_points
+# DECAY_STEP = NUM_TRAINOBJECTS//BATCH_SIZE * args.decay_steps
+
+TRAIN_DATASET = General_CLSDataLoader_HDF5(file_list=TRAIN_FILES, num_point=1024)
+TEST_DATASET = General_CLSDataLoader_HDF5(file_list=VALID_FILES, num_point=1024)
+trainDataLoader = DataLoader(TRAIN_DATASET, batch_size=BATCH_SIZE, shuffle=True, num_workers=4, drop_last=True)
+testDataLoader = DataLoader(TEST_DATASET, batch_size=BATCH_SIZE, shuffle=False, num_workers=4, drop_last=True)
+# reduce the num_workers if the loaded data are huge, ref: https://github.com/pytorch/pytorch/issues/973
+
+def main(args):
+ MyLogger = TrainLogger(args, name=args.model.upper(), subfold='log_cls')
+ shutil.copy(os.path.join('cls_models', '%s.py' % args.model), MyLogger.log_dir)
+ shutil.copy(os.path.abspath(__file__), MyLogger.log_dir)
+
+ # is_training_pl -> to decide whether to apply batch normalisation
+ is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
+ global_step = tf.Variable(0, trainable=False, name='global_step')
+
+ inputs_pl = tf.placeholder(tf.float32, (1, None, 3), 'inputs')
+ labels_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'labels')
+ npts_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'num_points')
+
+ # bn_decay = get_bn_decay(batch=global_step, bn_init_decay=0.5, batch_size=args.batch_size,
+ # bn_decay_step=DECAY_STEP, bn_decay_rate=0.5, bn_decay_clip=0.99)
+
+ bn_decay = 0.9
+ # See "BatchNorm1d" in https://pytorch.org/docs/stable/nn.html
+ ''' === fix issues of importlib when running on some servers (i.e., woma) === '''
+ # model_module = importlib.import_module('.%s' % args.model_type, 'cls_models')
+ # MODEL = model_module.Model(inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=bn_decay)
+ ldic = locals()
+ exec('from cls_models.%s import Model' % args.model, globals(), ldic)
+ MODEL = ldic['Model'](inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=bn_decay)
+ pred, loss = MODEL.pred, MODEL.loss
+ tf.summary.scalar('loss', loss)
+
+ correct = tf.equal(tf.argmax(pred, 1), tf.to_int64(labels_pl))
+ accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(args.batch_size)
+ tf.summary.scalar('accuracy', accuracy)
+
+ ''' === Learning Rate === '''
+ def get_lr_dgcnn(args, global_step, alpha):
+ learning_rate = tf.train.cosine_decay(
+ learning_rate=100 * args.base_lr, # Base Learning Rate, 0.1
+ global_step=global_step, # Training Step Index
+ decay_steps=NUM_TRAINOBJECTS//BATCH_SIZE * args.epoch, # Total Training Step
+ alpha=alpha # Fraction of the Minimum Value of the Set lr
+ )
+ # learning_rate = tf.maximum(learning_rate, args.base_lr)
+ return learning_rate
+
+ learning_rate = get_lr_dgcnn(args, global_step, alpha=0.01)
+ tf.summary.scalar('learning rate', learning_rate)
+ # scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(opt, args.epoch, eta_min=args.lr)
+ # doc: https://pytorch.org/docs/stable/optim.html
+ # doc: https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/cosine_decay
+
+ ''' === Optimiser === '''
+ # trainer = tf.train.GradientDescentOptimizer(learning_rate)
+ trainer = tf.train.MomentumOptimizer(learning_rate, momentum=args.momentum)
+ # equivalent to torch.optim.SGD
+ # doc: https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/MomentumOptimizer
+ # another alternative is to use keras
+ # trainer = tf.keras.optimizers.SGD(learning_rate, momentum=args.momentum)
+ # doc: https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/keras/optimizers/SGD
+ # opt = torch.optim.SGD(model.parameters(), lr=args.lr * 100, momentum=args.momentum, weight_decay=1e-4)
+
+ train_op = trainer.minimize(loss=MODEL.loss, global_step=global_step)
+ saver = tf.train.Saver()
+
+ # ref: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/protobuf/config.proto
+ config = tf.ConfigProto()
+ # config.gpu_options.allow_growth = True
+ # config.allow_soft_placement = True # Uncomment it if GPU option is not available
+ # config.log_device_placement = True # Uncomment it if you want device placements to be logged
+ sess = tf.Session(config=config)
+
+ merged = tf.summary.merge_all()
+ train_writer = tf.summary.FileWriter(os.path.join(MyLogger.experiment_dir, 'runs', 'train'), sess.graph)
+ val_writer = tf.summary.FileWriter(os.path.join(MyLogger.experiment_dir, 'runs', 'valid'), sess.graph)
+
+ # Initialise all the variables of the models
+ init = tf.global_variables_initializer()
+
+ sess.run(init, {is_training_pl: True})
+
+ # to save the randomized initialised models then exit
+ if args.just_save:
+ save_path = saver.save(sess, os.path.join(MyLogger.checkpoints_dir, "model.ckpt"))
+ print(colored('random initialised model saved at %s' % save_path, 'white', 'on_blue'))
+ print(colored('just save the model, now exit', 'white', 'on_red'))
+ sys.exit()
+
+    '''current solution: first load the pretrained encoder,
+    assemble it with randomly initialised FC layers, then save to the checkpoint'''
+
+ if args.restore:
+ saver.restore(sess, tf.train.latest_checkpoint(args.restore_path))
+        MyLogger.logger.info('Model parameters have been restored')
+
+ ops = {'pointclouds_pl': inputs_pl,
+ 'labels_pl': labels_pl,
+ 'is_training_pl': is_training_pl,
+ 'npts_pl': npts_pl,
+ 'pred': pred,
+ 'loss': loss,
+ 'train_op': train_op,
+ 'merged': merged,
+ 'step': global_step}
+
+ for epoch in range(args.epoch):
+
+ '''=== training the model ==='''
+ train_one_epoch(sess, ops, MyLogger, train_writer)
+
+ '''=== evaluating the model ==='''
+ save_checkpoint = eval_one_epoch(sess, ops, MyLogger, val_writer)
+
+ '''=== check whether to store the checkpoints ==='''
+ if save_checkpoint:
+ save_path = saver.save(sess, os.path.join(MyLogger.savepath, "model.ckpt"))
+ MyLogger.logger.info('model saved at %s' % MyLogger.savepath)
+
+ sess.close()
+ MyLogger.train_summary()
+
+
+def train_one_epoch(sess, ops, MyLogger, train_writer):
+ is_training = True
+ MyLogger.epoch_init(training=is_training)
+
+ for points, target in tqdm(trainDataLoader, total=len(trainDataLoader), smoothing=0.9):
+ # pdb.set_trace()
+ points, target = points.numpy(), target.numpy()
+
+ if args.data_aug:
+ points = random_point_dropout(points)
+ points[:, :, 0:3] = random_scale_point_cloud(points[:, :, 0:3])
+ points[:, :, 0:3] = random_shift_point_cloud(points[:, :, 0:3])
+
+ feed_dict = {
+ ops['pointclouds_pl']: points.reshape([1, BATCH_SIZE * NUM_POINT, 3]),
+ ops['labels_pl']: target.reshape(BATCH_SIZE, ),
+ ops['npts_pl']: [NUM_POINT] * BATCH_SIZE,
+ ops['is_training_pl']: is_training}
+
+ summary, step, _, loss, pred = sess.run([
+ ops['merged'], ops['step'], ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)
+ train_writer.add_summary(summary, step)
+
+ # pdb.set_trace()
+ MyLogger.step_update(np.argmax(pred, 1), target.reshape(BATCH_SIZE, ), loss)
+
+ MyLogger.epoch_summary(writer=None, training=is_training)
+
+ return None
+
+
+def eval_one_epoch(sess, ops, MyLogger, val_writer):
+ is_training = False
+ MyLogger.epoch_init(training=is_training)
+
+ for points, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ # pdb.set_trace()
+ points, target = points.numpy(), target.numpy()
+
+ feed_dict = {
+ ops['pointclouds_pl']: points.reshape([1, BATCH_SIZE * NUM_POINT, 3]),
+ ops['labels_pl']: target.reshape(BATCH_SIZE, ),
+ ops['npts_pl']: np.array([NUM_POINT] * BATCH_SIZE),
+ ops['is_training_pl']: is_training}
+
+ summary, step, loss_val, pred_val = sess.run(
+ [ops['merged'], ops['step'], ops['loss'], ops['pred']], feed_dict=feed_dict)
+ val_writer.add_summary(summary, step)
+ # pdb.set_trace()
+ MyLogger.step_update(np.argmax(pred_val, 1), target.reshape(BATCH_SIZE, ), loss_val)
+
+ MyLogger.epoch_summary(writer=None, training=is_training)
+
+ return MyLogger.save_model
+
+
+if __name__ == '__main__':
+
+ print('Now Using GPU:%s to train the model' % args.gpu)
+ os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
+ os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
+
+ main(args)
diff --git a/zoo/OcCo/OcCo_TF/train_cls_torchloader.py b/zoo/OcCo/OcCo_TF/train_cls_torchloader.py
new file mode 100644
index 0000000..73a99a2
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/train_cls_torchloader.py
@@ -0,0 +1,351 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import os, sys, pdb, time, argparse, datetime, importlib, numpy as np, tensorflow as tf
+from tqdm import tqdm
+from termcolor import colored
+from utils.Dataset_Assign import Dataset_Assign
+from utils.io_util import shuffle_data, loadh5DataFile
+from utils.EarlyStoppingCriterion import EarlyStoppingCriterion
+from utils.tf_util import add_train_summary, get_bn_decay, get_learning_rate
+from utils.pc_util import rotate_point_cloud, jitter_point_cloud, random_point_dropout, \
+ random_scale_point_cloud, random_shift_point_cloud
+
+# from utils.transfer_pretrained_w import load_pretrained_var
+from utils.ModelNetDataLoader import General_CLSDataLoader_HDF5
+from torch.utils.data import DataLoader
+
+parser = argparse.ArgumentParser()
+
+''' === Basic Learning Settings === '''
+parser.add_argument('--gpu', type=int, default=1)
+parser.add_argument('--log_dir', default='log/log_cls/pointnet_cls')
+parser.add_argument('--model', default='pointnet_cls')
+parser.add_argument('--epoch', type=int, default=200)
+parser.add_argument('--restore', action='store_true')
+parser.add_argument('--restore_path', default='log/pointnet_cls')
+parser.add_argument('--batch_size', type=int, default=32)
+parser.add_argument('--num_point', type=int, default=1024)
+parser.add_argument('--base_lr', type=float, default=0.001)
+parser.add_argument('--lr_clip', type=float, default=1e-5)
+parser.add_argument('--decay_steps', type=int, default=20)
+parser.add_argument('--decay_rate', type=float, default=0.7)
+# parser.add_argument('--verbose', type=bool, default=True)
+parser.add_argument('--dataset', type=str, default='modelnet40')
+parser.add_argument('--partial', action='store_true')
+parser.add_argument('--filename', type=str, default='')
+parser.add_argument('--data_bn', action='store_true')
+
+''' === Data Augmentation Settings === '''
+parser.add_argument('--data_aug', action='store_true')
+parser.add_argument('--just_save', action='store_true') # save the randomly initialised model then exit (used when assembling a pretrained encoder)
+parser.add_argument('--patience', type=int, default=200) # early-stopping patience; 200 (the full epoch budget) effectively disables early stopping
+parser.add_argument('--fewshot', action='store_true')
+
+args = parser.parse_args()
+
+DATA_PATH = 'data/modelnet40_normal_resampled/'
+NUM_CLASSES, NUM_TRAINOBJECTS, TRAIN_FILES, VALID_FILES = Dataset_Assign(
+ dataset=args.dataset, fname=args.filename, partial=args.partial, bn=args.data_bn, few_shot=args.fewshot)
+TRAIN_DATASET = General_CLSDataLoader_HDF5(file_list=TRAIN_FILES, num_point=1024) # the loader takes no `root` argument; file_list entries are full paths
+TEST_DATASET = General_CLSDataLoader_HDF5(file_list=VALID_FILES, num_point=1024)
+trainDataLoader = DataLoader(TRAIN_DATASET, batch_size=args.batch_size, shuffle=True, num_workers=4, drop_last=True)
+testDataLoader = DataLoader(TEST_DATASET, batch_size=args.batch_size, shuffle=False, num_workers=4, drop_last=True)
+
+BATCH_SIZE = args.batch_size
+NUM_POINT = args.num_point
+BASE_LR = args.base_lr
+LR_CLIP = args.lr_clip
+DECAY_RATE = args.decay_rate
+# DECAY_STEP = args.decay_steps
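+# interpret --decay_steps as a number of epochs and convert it to optimizer iterations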
+DECAY_STEP = NUM_TRAINOBJECTS//BATCH_SIZE * args.decay_steps
+BN_INIT_DECAY = 0.5
+BN_DECAY_RATE = 0.5
+BN_DECAY_STEP = float(DECAY_STEP)
+BN_DECAY_CLIP = 0.99
+LOG_DIR = args.log_dir
+BEST_EVAL_ACC = 0
+os.makedirs(LOG_DIR, exist_ok=True)
+LOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'a+')
+
+def log_string(out_str):
+ LOG_FOUT.write(out_str + '\n')
+ LOG_FOUT.flush()
+ print(out_str)
+
+
+def train(args):
+
+ log_string('\n\n' + '=' * 50)
+ log_string('Start Training, Time: %s' % datetime.datetime.now())
+ log_string('=' * 50 + '\n\n')
+
+ is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
+ global_step = tf.Variable(0, trainable=False, name='global_step') # will be used in defining train_op
+ inputs_pl = tf.placeholder(tf.float32, (1, None, 3), 'inputs')
+ labels_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'labels')
+ npts_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'num_points')
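+ # PCN-style packed input: all clouds in a batch are concatenated along axis 1
+ # into one (1, sum(npts), 3) tensor, with per-object sizes fed through npts_pl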
+
+ bn_decay = get_bn_decay(global_step, BN_INIT_DECAY, BATCH_SIZE, BN_DECAY_STEP, BN_DECAY_RATE, BN_DECAY_CLIP)
+
+ # model_module = importlib.import_module('.%s' % args.model, 'cls_models')
+ # MODEL = model_module.Model(inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=bn_decay)
+ ''' === To fix issues when running on woma === '''
+ ldic = locals()
+ exec('from cls_models.%s import Model' % args.model, globals(), ldic)
+ MODEL = ldic['Model'](inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=bn_decay)
+ pred, loss = MODEL.pred, MODEL.loss
+ tf.summary.scalar('loss', loss)
+ # pdb.set_trace()
+
+ # accuracy metric displayed during training
+ correct = tf.equal(tf.argmax(pred, 1), tf.to_int64(labels_pl))
+ accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE)
+ tf.summary.scalar('accuracy', accuracy)
+
+ learning_rate = get_learning_rate(global_step, BASE_LR, BATCH_SIZE, DECAY_STEP, DECAY_RATE, LR_CLIP)
+ add_train_summary('learning_rate', learning_rate)
+ trainer = tf.train.AdamOptimizer(learning_rate)
+ train_op = trainer.minimize(MODEL.loss, global_step)
+ saver = tf.train.Saver()
+
+ config = tf.ConfigProto()
+ config.gpu_options.allow_growth = True
+ config.allow_soft_placement = True
+ # config.log_device_placement = True
+ sess = tf.Session(config=config)
+
+ merged = tf.summary.merge_all()
+ train_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'), sess.graph)
+ val_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'val'))
+
+ # Init variables
+ init = tf.global_variables_initializer()
+ log_string('\nModel parameters have been initialized\n')
+ sess.run(init, {is_training_pl: True}) # restoring (below) overwrites the randomly initialized parameters
+
+ # save the randomly initialised variables, then exit
+ if not args.restore and args.just_save:
+ save_path = saver.save(sess, os.path.join(LOG_DIR, "model.ckpt"))
+ print(colored('randomly initialised model saved at %s' % save_path, 'white', 'on_blue'))
+ print(colored('model saved, now exiting', 'white', 'on_red'))
+ sys.exit()
+
+ '''current solution: first load the pretrained head, assemble it with the output layers, then save as a checkpoint'''
+ # to partially load the saved head from:
+ # if args.load_pretrained_head:
+ # sess.close()
+ # load_pretrained_head(args.pretrained_head_path, os.path.join(LOG_DIR, 'model.ckpt'), None, args.verbose)
+ # print('shared variables have been restored from ', args.pretrained_head_path)
+ #
+ # sess = tf.Session(config=config)
+ # log_string('\nModel Parameters has been Initialized\n')
+ # sess.run(init, {is_training_pl: True})
+ # saver.restore(sess, tf.train.latest_checkpoint(LOG_DIR))
+ # log_string('\nModel Parameters have been restored with pretrained weights from %s' % args.pretrained_head_path)
+
+ if args.restore:
+ # load_pretrained_var(args.restore_path, os.path.join(LOG_DIR, "model.ckpt"), args.verbose)
+ saver.restore(sess, tf.train.latest_checkpoint(args.restore_path))
+ log_string('\n')
+ log_string(colored('Model Parameters have been restored from %s' % args.restore_path, 'white', 'on_red'))
+
+ for arg in sorted(vars(args)):
+ print(arg + ': ' + str(getattr(args, arg)) + '\n') # log of arguments
+ os.system('cp cls_models/%s.py %s' % (args.model, LOG_DIR)) # bkp of model def
+ os.system('cp train_cls_torchloader.py %s' % LOG_DIR) # bkp of train procedure
+
+ train_start = time.time()
+
+ ops = {'pointclouds_pl': inputs_pl,
+ 'labels_pl': labels_pl,
+ 'is_training_pl': is_training_pl,
+ 'npts_pl': npts_pl,
+ 'pred': pred,
+ 'loss': loss,
+ 'train_op': train_op,
+ 'merged': merged,
+ 'step': global_step}
+
+ ESC = EarlyStoppingCriterion(patience=args.patience)
+
+ for epoch in range(args.epoch):
+ log_string('\n\n')
+ log_string(colored('**** EPOCH %03d ****' % epoch, 'grey', 'on_green'))
+ sys.stdout.flush()
+
+ '''=== training the model ==='''
+ train_one_epoch(sess, ops, train_writer)
+
+ '''=== evaluating the model ==='''
+ eval_mean_loss, eval_acc, eval_cls_acc = eval_one_epoch(sess, ops, val_writer)
+
+ '''=== check whether to early stop ==='''
+ early_stop, save_checkpoint = ESC.step(eval_acc, epoch=epoch)
+ if save_checkpoint:
+ save_path = saver.save(sess, os.path.join(LOG_DIR, "model.ckpt"))
+ log_string(colored('model saved at %s' % save_path, 'white', 'on_blue'))
+ if early_stop:
+ break
+
+ log_string('total time: %s' % datetime.timedelta(seconds=time.time() - train_start))
+ log_string('stop epoch: %d, best eval acc: %f' % (ESC.best_epoch + 1, ESC.best_dev_score))
+ sess.close()
+
+
+def train_one_epoch(sess, ops, train_writer):
+ is_training = True
+ total_correct, total_seen, loss_sum = 0, 0, 0
+
+ for points, target in tqdm(trainDataLoader, total=len(trainDataLoader), smoothing=0.9):
+ # pdb.set_trace()
+ points, target = points.numpy(), target.numpy()
+
+ if args.data_aug:
+ points = random_point_dropout(points)
+ points[:, :, 0:3] = random_scale_point_cloud(points[:, :, 0:3])
+ points[:, :, 0:3] = random_shift_point_cloud(points[:, :, 0:3])
+
+ feed_dict = {
+ ops['pointclouds_pl']: points.reshape([1, BATCH_SIZE * NUM_POINT, 3]),
+ ops['labels_pl']: target.reshape(BATCH_SIZE, ),
+ ops['npts_pl']: [NUM_POINT] * BATCH_SIZE,
+ ops['is_training_pl']: is_training}
+
+ summary, step, _, loss_val, pred_val = sess.run([
+ ops['merged'], ops['step'], ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)
+ train_writer.add_summary(summary, step)
+
+ pred_val = np.argmax(pred_val, 1)
+ correct = np.sum(pred_val == target.reshape(BATCH_SIZE, ))
+ total_correct += correct
+ total_seen += BATCH_SIZE
+ # loss_sum += loss_val
+
+ # train_file_idxs = np.arange(0, len(TRAIN_FILES))
+ # np.random.shuffle(train_file_idxs)
+ #
+ # for fn in range(len(TRAIN_FILES)):
+ # current_data, current_label = loadh5DataFile(TRAIN_FILES[train_file_idxs[fn]])
+ # current_data = current_data[:, :NUM_POINT, :]
+ # current_data, current_label, _ = shuffle_data(current_data, np.squeeze(current_label))
+ # current_label = np.squeeze(current_label)
+ #
+ # file_size = current_data.shape[0]
+ # num_batches = file_size // BATCH_SIZE
+ #
+ # for batch_idx in range(num_batches):
+ # start_idx = batch_idx * BATCH_SIZE
+ # end_idx = (batch_idx + 1) * BATCH_SIZE
+ # feed_data = current_data[start_idx:end_idx, :, :]
+ #
+ # if args.data_aug:
+ # feed_data = random_point_dropout(feed_data)
+ # feed_data[:, :, 0:3] = random_scale_point_cloud(feed_data[:, :, 0:3])
+ # feed_data[:, :, 0:3] = random_shift_point_cloud(feed_data[:, :, 0:3])
+ #
+ # feed_dict = {
+ # ops['pointclouds_pl']: feed_data.reshape([1, BATCH_SIZE * NUM_POINT, 3]),
+ # ops['labels_pl']: current_label[start_idx:end_idx].reshape(BATCH_SIZE, ),
+ # ops['npts_pl']: [NUM_POINT] * BATCH_SIZE,
+ # ops['is_training_pl']: is_training}
+ #
+ # summary, step, _, loss_val, pred_val = sess.run([
+ # ops['merged'], ops['step'], ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)
+ # train_writer.add_summary(summary, step)
+ #
+ # pred_val = np.argmax(pred_val, 1)
+ # correct = np.sum(pred_val == current_label[start_idx:end_idx].reshape(BATCH_SIZE, ))
+ # total_correct += correct
+ # total_seen += BATCH_SIZE
+ # loss_sum += loss_val
+
+ log_string('\n=== training ===')
+ log_string('total correct: %d, total_seen: %d' % (total_correct, total_seen))
+ # log_string('mean batch loss: %f' % (loss_sum / num_batches))
+ log_string('accuracy: %f' % (total_correct / float(total_seen)))
+
+
+def eval_one_epoch(sess, ops, val_writer):
+ is_training = False
+
+ total_correct, total_seen, loss_sum = 0, 0, 0
+ total_seen_class = [0 for _ in range(NUM_CLASSES)]
+ total_correct_class = [0 for _ in range(NUM_CLASSES)]
+
+ for points, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ # pdb.set_trace()
+ points, target = points.numpy(), target.numpy()
+
+ feed_dict = {
+ ops['pointclouds_pl']: points.reshape([1, BATCH_SIZE * NUM_POINT, 3]),
+ ops['labels_pl']: target.reshape(BATCH_SIZE, ),
+ ops['npts_pl']: np.array([NUM_POINT] * BATCH_SIZE),
+ ops['is_training_pl']: is_training}
+
+ summary, step, loss_val, pred_val = sess.run(
+ [ops['merged'], ops['step'], ops['loss'], ops['pred']], feed_dict=feed_dict)
+ val_writer.add_summary(summary, step)
+ pred_val = np.argmax(pred_val, 1)
+ correct = np.sum(pred_val == target.reshape(BATCH_SIZE, ))
+ total_correct += correct
+ total_seen += BATCH_SIZE
+ loss_sum += (loss_val * BATCH_SIZE)
+
+ for i, l in enumerate(target):
+ # l = int(target.reshape(-1)[i])
+ # pdb.set_trace()
+ total_seen_class[int(l)] += 1
+ total_correct_class[int(l)] += (int(pred_val[i]) == int(l))
+
+ # for fn in VALID_FILES:
+ # current_data, current_label = loadh5DataFile(fn)
+ # current_data = current_data[:, :NUM_POINT, :]
+ # file_size = current_data.shape[0]
+ # num_batches = file_size // BATCH_SIZE
+ #
+ # for batch_idx in range(num_batches):
+ # start_idx, end_idx = batch_idx * BATCH_SIZE, (batch_idx + 1) * BATCH_SIZE
+ #
+ # feed_dict = {
+ # ops['pointclouds_pl']: current_data[start_idx:end_idx, :, :].reshape([1, BATCH_SIZE * NUM_POINT, 3]),
+ # ops['labels_pl']: current_label[start_idx:end_idx].reshape(BATCH_SIZE, ),
+ # ops['npts_pl']: np.array([NUM_POINT] * BATCH_SIZE),
+ # ops['is_training_pl']: is_training}
+ #
+ # summary, step, loss_val, pred_val = sess.run(
+ # [ops['merged'], ops['step'], ops['loss'], ops['pred']], feed_dict=feed_dict)
+ # val_writer.add_summary(summary, step)
+ # pred_val = np.argmax(pred_val, 1)
+ # correct = np.sum(pred_val == current_label[start_idx:end_idx].reshape(BATCH_SIZE, ))
+ # total_correct += correct
+ # total_seen += BATCH_SIZE
+ # loss_sum += (loss_val * BATCH_SIZE)
+ #
+ # for i in range(start_idx, end_idx):
+ # l = int(current_label.reshape(-1)[i])
+ # total_seen_class[l] += 1
+ # total_correct_class[l] += (pred_val[i - start_idx] == l)
+
+ eval_mean_loss = loss_sum / float(total_seen)
+ eval_acc = total_correct / float(total_seen)
+ eval_cls_acc = np.mean(np.array(total_correct_class) / np.array(total_seen_class, dtype=np.float64))
+ log_string('\n=== evaluating ===')
+ log_string('total correct: %d, total_seen: %d' % (total_correct, total_seen))
+ log_string('eval mean loss: %f' % eval_mean_loss)
+ log_string('eval accuracy: %f' % eval_acc)
+ log_string('eval avg class acc: %f' % eval_cls_acc)
+
+ global BEST_EVAL_ACC
+ if eval_acc > BEST_EVAL_ACC:
+ BEST_EVAL_ACC = eval_acc
+ log_string('best eval accuracy: %f' % BEST_EVAL_ACC)
+ return eval_mean_loss, eval_acc, eval_cls_acc
+
+
+if __name__ == '__main__':
+ print('Now Using GPU:%d to train the model' % args.gpu)
+ os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
+ os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu)
+
+ train(args)
+ LOG_FOUT.close()
diff --git a/zoo/OcCo/OcCo_TF/train_completion.py b/zoo/OcCo/OcCo_TF/train_completion.py
new file mode 100644
index 0000000..5f381fa
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/train_completion.py
@@ -0,0 +1,237 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import os, pdb, time, argparse, datetime, importlib, numpy as np, tensorflow as tf
+from utils import lmdb_dataflow, add_train_summary, plot_pcd_three_views
+from termcolor import colored
+
+
+parser = argparse.ArgumentParser()
+parser.add_argument('--gpu', type=str, default='1')
+parser.add_argument('--lmdb_train', default='data/modelnet/train.lmdb')
+parser.add_argument('--lmdb_valid', default='data/modelnet/test.lmdb')
+parser.add_argument('--log_dir', type=str, default='')
+parser.add_argument('--model_type', default='pcn_cd')
+parser.add_argument('--restore', action='store_true')
+parser.add_argument('--restore_path', default='log/pcn_cd')
+parser.add_argument('--batch_size', type=int, default=16)
+parser.add_argument('--num_gt_points', type=int, default=16384)
+parser.add_argument('--base_lr', type=float, default=1e-4)
+parser.add_argument('--lr_decay', action='store_true')
+parser.add_argument('--lr_decay_steps', type=int, default=50000)
+parser.add_argument('--lr_decay_rate', type=float, default=0.7)
+parser.add_argument('--lr_clip', type=float, default=1e-6)
+parser.add_argument('--max_step', type=int, default=3000000)
+parser.add_argument('--epoch', type=int, default=50)
+parser.add_argument('--steps_per_print', type=int, default=100)
+parser.add_argument('--steps_per_eval', type=int, default=1000)
+parser.add_argument('--steps_per_visu', type=int, default=3456)
+parser.add_argument('--epochs_per_save', type=int, default=5)
+parser.add_argument('--visu_freq', type=int, default=10)
+parser.add_argument('--store_grad', action='store_true')
+parser.add_argument('--num_input_points', type=int, default=1024)
+parser.add_argument('--dataset', default='modelnet40')
+
+args = parser.parse_args()
+
+BATCH_SIZE = args.batch_size
+NUM_POINT = args.num_input_points
+NUM_GT_POINT = args.num_gt_points
+DECAY_STEP = args.lr_decay_steps
+DATASET = args.dataset
+
+BN_INIT_DECAY = 0.5
+BN_DECAY_DECAY_RATE = 0.5
+BN_DECAY_DECAY_STEP = float(DECAY_STEP)
+BN_DECAY_CLIP = 0.99
+
+
+def get_bn_decay(batch):
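+ # batch-norm momentum decays as BN_INIT_DECAY * BN_DECAY_DECAY_RATE^(step * BATCH_SIZE / BN_DECAY_DECAY_STEP),
+ # so the returned bn_decay = min(0.99, 1 - momentum) ramps from 0.5 towards BN_DECAY_CLIP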
+ bn_momentum = tf.train.exponential_decay(
+ BN_INIT_DECAY,
+ batch * BATCH_SIZE,
+ BN_DECAY_DECAY_STEP,
+ BN_DECAY_DECAY_RATE,
+ staircase=True)
+ bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)
+ return bn_decay
+
+
+def vary2fix(inputs, npts):
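+ """Pad (by duplicating random points) or subsample each object so that every
+ object in the batch contains exactly NUM_POINT points."""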
+ inputs_ls = np.split(inputs[0], npts.cumsum())
+ ret_inputs = np.zeros((1, BATCH_SIZE * NUM_POINT, 3), dtype=np.float32)
+ ret_npts = npts.copy()
+ for idx, obj in enumerate(inputs_ls[:-1]):
+ if len(obj) <= NUM_POINT:
+ select_idx = np.concatenate([
+ np.arange(len(obj)), np.random.choice(len(obj), NUM_POINT - len(obj))])
+ else:
+ # more points than needed: shuffle, then keep the first NUM_POINT indices
+ # so the slice assignment below receives exactly NUM_POINT points
+ select_idx = np.arange(len(obj))
+ np.random.shuffle(select_idx)
+ select_idx = select_idx[:NUM_POINT]
+
+ ret_inputs[0][idx * NUM_POINT:(idx + 1) * NUM_POINT] = obj[select_idx].copy()
+ ret_npts[idx] = NUM_POINT
+ return ret_inputs, ret_npts
+
+
+def train(args):
+
+ is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
+ global_step = tf.Variable(0, trainable=False, name='global_step')
+ alpha = tf.train.piecewise_constant(global_step, [10000, 20000, 50000],
+ [0.01, 0.1, 0.5, 1.0], 'alpha_op')
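+ # alpha weights the fine-grained term of the completion loss (PCN-style
+ # coarse-to-fine training); it ramps 0.01 -> 0.1 -> 0.5 -> 1.0 at steps 10k/20k/50k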
+
+ # for ModelNet, it is with Fixed Number of Input Points
+ # for ShapeNet, it is with Varying Number of Input Points
+ inputs_pl = tf.placeholder(tf.float32, (1, BATCH_SIZE * NUM_POINT, 3), 'inputs')
+ npts_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'num_points')
+ gt_pl = tf.placeholder(tf.float32, (BATCH_SIZE, args.num_gt_points, 3), 'ground_truths')
+ add_train_summary('alpha', alpha)
+ bn_decay = get_bn_decay(global_step)
+ add_train_summary('bn_decay', bn_decay)
+
+ model_module = importlib.import_module('.%s' % args.model_type, 'completion_models')
+ model = model_module.Model(inputs_pl, npts_pl, gt_pl, alpha,
+ bn_decay=bn_decay, is_training=is_training_pl)
+
+ # Another Solution instead of importlib:
+ # ldic = locals()
+ # exec('from completion_models.%s import Model' % args.model_type, globals(), ldic)
+ # model = ldic['Model'](inputs_pl, npts_pl, gt_pl, alpha,
+ # bn_decay=bn_decay, is_training=is_training_pl)
+
+ if args.lr_decay:
+ learning_rate = tf.train.exponential_decay(args.base_lr, global_step,
+ args.lr_decay_steps, args.lr_decay_rate,
+ staircase=True, name='lr')
+ learning_rate = tf.maximum(learning_rate, args.lr_clip)
+ add_train_summary('learning_rate', learning_rate)
+ else:
+ learning_rate = tf.constant(args.base_lr, name='lr')
+
+ trainer = tf.train.AdamOptimizer(learning_rate)
+ train_op = trainer.minimize(model.loss, global_step)
+ saver = tf.train.Saver(max_to_keep=10)
+ ''' from PCN paper:
+ All our models are trained using the Adam optimizer
+ with an initial learning rate of 0.0001 for 50 epochs
+ and a batch size of 32. The learning rate is decayed by 0.7 every 50K iterations.
+ '''
+
+ if args.store_grad:
+ grads_and_vars = trainer.compute_gradients(model.loss)
+ for g, v in grads_and_vars:
+ tf.summary.histogram(v.name, v, collections=['train_summary'])
+ tf.summary.histogram(v.name + '_grad', g, collections=['train_summary'])
+
+ train_summary = tf.summary.merge_all('train_summary')
+ valid_summary = tf.summary.merge_all('valid_summary')
+
+ # the input number of points for the partial observed data is not a fixed number
+ df_train, num_train = lmdb_dataflow(
+ args.lmdb_train, args.batch_size,
+ args.num_input_points, args.num_gt_points, is_training=True)
+ train_gen = df_train.get_data()
+ df_valid, num_valid = lmdb_dataflow(
+ args.lmdb_valid, args.batch_size,
+ args.num_input_points, args.num_gt_points, is_training=False)
+ valid_gen = df_valid.get_data()
+
+ config = tf.ConfigProto()
+ config.gpu_options.allow_growth = True
+ config.allow_soft_placement = True
+ sess = tf.Session(config=config)
+
+ if args.restore:
+ saver.restore(sess, tf.train.latest_checkpoint(args.log_dir))
+ writer = tf.summary.FileWriter(args.log_dir)
+ else:
+ sess.run(tf.global_variables_initializer())
+ if os.path.exists(args.log_dir):
+ delete_key = input(colored('%s exists. Delete? [y/n]' % args.log_dir, 'white', 'on_red'))
+ if delete_key == 'y' or delete_key == "yes":
+ os.system('rm -rf %s/*' % args.log_dir)
+ os.makedirs(os.path.join(args.log_dir, 'plots'))
+ else:
+ os.makedirs(os.path.join(args.log_dir, 'plots'))
+ with open(os.path.join(args.log_dir, 'args.txt'), 'w') as log:
+ for arg in sorted(vars(args)):
+ log.write(arg + ': ' + str(getattr(args, arg)) + '\n')
+ log.close()
+ os.system('cp completion_models/%s.py %s' % (args.model_type, args.log_dir)) # bkp of model scripts
+ os.system('cp train_completion.py %s' % args.log_dir) # bkp of train procedure
+ writer = tf.summary.FileWriter(args.log_dir, sess.graph) # GOOD habit
+
+ log_fout = open(os.path.join(args.log_dir, 'log_train.txt'), 'a+')
+ for arg in sorted(vars(args)):
+ log_fout.write(arg + ': ' + str(getattr(args, arg)) + '\n')
+ log_fout.flush()
+
+ total_time = 0
+ train_start = time.time()
+ init_step = sess.run(global_step)
+
+ for step in range(init_step + 1, args.max_step + 1):
+ epoch = step * args.batch_size // num_train + 1
+ ids, inputs, npts, gt = next(train_gen)
+ if epoch > args.epoch:
+ break
+ if DATASET == 'shapenet8':
+ inputs, npts = vary2fix(inputs, npts)
+
+ start = time.time()
+ feed_dict = {inputs_pl: inputs, npts_pl: npts, gt_pl: gt, is_training_pl: True}
+ _, loss, summary = sess.run([train_op, model.loss, train_summary], feed_dict=feed_dict)
+ total_time += time.time() - start
+ writer.add_summary(summary, step)
+
+ if step % args.steps_per_print == 0:
+ print('epoch %d step %d loss %.8f - time per batch %.4f' %
+ (epoch, step, loss, total_time / args.steps_per_print))
+ total_time = 0
+
+ if step % args.steps_per_eval == 0:
+ print(colored('Testing...', 'grey', 'on_green'))
+ num_eval_steps = num_valid // args.batch_size
+ total_loss, total_time = 0, 0
+ sess.run(tf.local_variables_initializer())
+ for i in range(num_eval_steps):
+ start = time.time()
+ _, inputs, npts, gt = next(valid_gen)
+ if DATASET == 'shapenet8':
+ inputs, npts = vary2fix(inputs, npts)
+ feed_dict = {inputs_pl: inputs, npts_pl: npts, gt_pl: gt, is_training_pl: False}
+ loss, _ = sess.run([model.loss, model.update], feed_dict=feed_dict)
+ total_loss += loss
+ total_time += time.time() - start
+ summary = sess.run(valid_summary, feed_dict={is_training_pl: False})
+ writer.add_summary(summary, step)
+ print(colored('epoch %d step %d loss %.8f - time per batch %.4f' %
+ (epoch, step, total_loss / num_eval_steps, total_time / num_eval_steps),
+ 'grey', 'on_green'))
+ total_time = 0
+
+ if step % args.steps_per_visu == 0:
+ all_pcds = sess.run(model.visualize_ops, feed_dict=feed_dict)
+ for i in range(0, args.batch_size, args.visu_freq):
+ plot_path = os.path.join(args.log_dir, 'plots',
+ 'epoch_%d_step_%d_%s.png' % (epoch, step, ids[i]))
+ pcds = [x[i] for x in all_pcds]
+ plot_pcd_three_views(plot_path, pcds, model.visualize_titles)
+
+ if (epoch % args.epochs_per_save == 0) and \
+ not os.path.exists(os.path.join(args.log_dir, 'model-%d.meta' % epoch)):
+ saver.save(sess, os.path.join(args.log_dir, 'model'), epoch)
+ print(colored('Epoch:%d, Model saved at %s' % (epoch, args.log_dir), 'white', 'on_blue'))
+
+ print('Total time', datetime.timedelta(seconds=time.time() - train_start))
+ sess.close()
+
+
+if __name__ == '__main__':
+
+ print('Now Using GPU:%s to train the model' % args.gpu)
+ os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
+ os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
+
+ train(args)
diff --git a/zoo/OcCo/OcCo_TF/utils/Dataset_Assign.py b/zoo/OcCo/OcCo_TF/utils/Dataset_Assign.py
new file mode 100644
index 0000000..74da850
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/Dataset_Assign.py
@@ -0,0 +1,72 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import h5py
+
+def Dataset_Assign(dataset, fname, partial=True, bn=False, few_shot=False):
+
+ def fetch_files(filelist):
+ return [item.strip() for item in open(filelist).readlines()]
+
+ def loadh5DataFile(PathtoFile):
+ f = h5py.File(PathtoFile, 'r')
+ return f['data'][:], f['label'][:]
+
+ dataset = dataset.lower()
+
+ if dataset == 'shapenet8':
+ NUM_CLASSES = 8
+ if partial:
+ NUM_TRAINOBJECTS = 231792
+ TRAIN_FILES = fetch_files('./data/shapenet/hdf5_partial_1024/train_file.txt')
+ VALID_FILES = fetch_files('./data/shapenet/hdf5_partial_1024/valid_file.txt')
+ else:
+ raise ValueError("For ShapeNet we are only interested in the partial objects recognition")
+
+ elif dataset == 'shapenet10':
+ # number of training objects: 17378; number of test objects: 2492
+ NUM_CLASSES, NUM_TRAINOBJECTS = 10, 17378
+ TRAIN_FILES = fetch_files('./data/ShapeNet10/Cleaned/train_file.txt')
+ VALID_FILES = fetch_files('./data/ShapeNet10/Cleaned/test_file.txt')
+
+ elif dataset == 'modelnet40':
+ '''We find that using the resampled data from PointNet++
+ (https://shapenet.cs.stanford.edu/media/modelnet40_normal_resampled.zip)
+ increases accuracy slightly; however, for a fair comparison we use the original
+ data provided by PointNet: https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip'''
+ NUM_CLASSES = 40
+ if partial:
+ NUM_TRAINOBJECTS = 98430
+ TRAIN_FILES = fetch_files('./data/modelnet40_pcn/hdf5_partial_1024/train_file.txt')
+ VALID_FILES = fetch_files('./data/modelnet40_pcn/hdf5_partial_1024/test_file.txt')
+ else:
+ VALID_FILES = fetch_files('./data/modelnet40_ply_hdf5_2048/test_files.txt')
+ if few_shot:
+ TRAIN_FILES = ['./data/modelnet40_ply_hdf5_2048/few_labels/%s.h5' % fname] # a single h5 file, not a file list
+ data, _ = loadh5DataFile('./data/modelnet40_ply_hdf5_2048/few_labels/%s.h5' % fname)
+ NUM_TRAINOBJECTS = len(data)
+ else:
+ NUM_TRAINOBJECTS = 9843
+ TRAIN_FILES = fetch_files('./data/modelnet40_ply_hdf5_2048/train_files.txt')
+
+ elif dataset == 'scannet10':
+ NUM_CLASSES, NUM_TRAINOBJECTS = 10, 6110
+ TRAIN_FILES = fetch_files('./data/ScanNet10/ScanNet_Cleaned/train_file.txt')
+ VALID_FILES = fetch_files('./data/ScanNet10/ScanNet_Cleaned/test_file.txt')
+
+ elif dataset == 'scanobjectnn':
+ NUM_CLASSES = 15
+ if bn:
+ TRAIN_FILES = ['./data/ScanNetObjectNN/h5_files/main_split/training_objectdataset' + fname + '.h5']
+ VALID_FILES = ['./data/ScanNetObjectNN/h5_files/main_split/test_objectdataset' + fname + '.h5']
+ data, _ = loadh5DataFile('./data/ScanNetObjectNN/h5_files/main_split/training_objectdataset' + fname + '.h5')
+ NUM_TRAINOBJECTS = len(data)
+ else:
+ TRAIN_FILES = ['./data/ScanNetObjectNN/h5_files/main_split_nobg/training_objectdataset' + fname + '.h5']
+ VALID_FILES = ['./data/ScanNetObjectNN/h5_files/main_split_nobg/test_objectdataset' + fname + '.h5']
+ data, _ = loadh5DataFile('./data/ScanNetObjectNN/h5_files/main_split_nobg/training_objectdataset' + fname + '.h5')
+ NUM_TRAINOBJECTS = len(data)
+ else:
+ raise ValueError('dataset does not exist')
+
+ return NUM_CLASSES, NUM_TRAINOBJECTS, TRAIN_FILES, VALID_FILES
diff --git a/zoo/OcCo/OcCo_TF/utils/EarlyStoppingCriterion.py b/zoo/OcCo/OcCo_TF/utils/EarlyStoppingCriterion.py
new file mode 100644
index 0000000..cfa5104
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/EarlyStoppingCriterion.py
@@ -0,0 +1,55 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+class EarlyStoppingCriterion(object):
+ """
+ adapted from https://github.com/facebookresearch/hgnn/blob/master/utils/EarlyStoppingCriterion.py
+ Arguments:
+ patience (int): The maximum number of epochs with no improvement before early stopping should take place
+ mode (str, can only be 'max' or 'min'): To take the maximum or minimum of the score for optimization
+ min_delta (float, optional): Minimum change in the score to qualify as an improvement (default: 0.0)
+ """
+
+ def __init__(self, patience=10, mode='max', min_delta=0.0):
+ assert patience >= 0
+ assert mode in {'min', 'max'}
+ assert min_delta >= 0.0
+ self.patience = patience
+ self.mode = mode
+ self.min_delta = min_delta
+
+ self._count = 0
+ self.best_dev_score = None
+ self.best_test_score = None
+ self.best_epoch = None
+ self.is_improved = None
+
+ def step(self, cur_dev_score, epoch):
+ """
+ Checks if training should be continued given the current score.
+ Arguments:
+ cur_dev_score (float): the current development score
+ # cur_test_score (float): the current test score
+ Output:
+ (bool, bool): whether training should be early-stopped, and whether the
+ current checkpoint should be saved
+ """
+ save_checkpoint = False
+
+ if self.best_dev_score is None:
+ self.best_dev_score = cur_dev_score
+ self.best_epoch = epoch
+ save_checkpoint = True
+ return False, save_checkpoint
+ else:
+ if self.mode == 'max':
+ self.is_improved = (cur_dev_score > self.best_dev_score + self.min_delta)
+ else:
+ self.is_improved = (cur_dev_score < self.best_dev_score - self.min_delta)
+
+ if self.is_improved:
+ self._count = 0
+ self.best_dev_score = cur_dev_score
+ self.best_epoch = epoch
+ save_checkpoint = True
+ else:
+ self._count += 1
+ return self._count >= self.patience, save_checkpoint
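+
+
+# Minimal usage sketch (illustrative names only, not part of this module):
+#
+#   esc = EarlyStoppingCriterion(patience=10, mode='max')
+#   for epoch in range(max_epochs):
+#       val_acc = evaluate(...)  # your own evaluation routine
+#       early_stop, save_ckpt = esc.step(val_acc, epoch)
+#       if save_ckpt:
+#           saver.save(sess, checkpoint_path)
+#       if early_stop:
+#           break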
diff --git a/zoo/OcCo/OcCo_TF/utils/ModelNetDataLoader.py b/zoo/OcCo/OcCo_TF/utils/ModelNetDataLoader.py
new file mode 100644
index 0000000..e6e01d9
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/ModelNetDataLoader.py
@@ -0,0 +1,187 @@
+import os, torch, h5py, warnings, numpy as np
+from torch.utils.data import Dataset
+
+warnings.filterwarnings('ignore')
+
+
+def pc_normalize(pc):
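+ # translate the centroid to the origin, then scale the cloud into the unit sphere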
+ centroid = np.mean(pc, axis=0)
+ pc = pc - centroid
+ m = np.max(np.sqrt(np.sum(pc ** 2, axis=1)))
+ pc = pc / m
+ return pc
+
+
+def farthest_point_sample(point, npoint):
+ """
+ Input:
+ point: point cloud data, [N, D]
+ npoint: number of samples
+ Return:
+ point: sampled point cloud, [npoint, D]
+ """
+ N, D = point.shape
+ xyz = point[:, :3]
+ centroids = np.zeros((npoint,))
+ distance = np.ones((N,)) * 1e10
+ farthest = np.random.randint(0, N)
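+ # greedy farthest point sampling: repeatedly pick the point with the largest
+ # distance to the set of points already selected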
+ for i in range(npoint):
+ centroids[i] = farthest
+ centroid = xyz[farthest, :]
+ dist = np.sum((xyz - centroid) ** 2, -1)
+ mask = dist < distance
+ distance[mask] = dist[mask]
+ farthest = np.argmax(distance, -1)
+ point = point[centroids.astype(np.int32)]
+ return point
+
+
+class ModelNetDataLoader(Dataset):
+ def __init__(self, root, npoint=1024, split='train', uniform=False, normal_channel=True, cache_size=15000):
+ self.root = root
+ self.npoints = npoint
+ self.uniform = uniform
+ self.catfile = os.path.join(self.root, 'modelnet40_shape_names.txt')
+
+ self.cat = [line.rstrip() for line in open(self.catfile)]
+ self.classes = dict(zip(self.cat, range(len(self.cat))))
+ self.normal_channel = normal_channel
+
+ shape_ids = {'train': [line.rstrip() for line in open(os.path.join(self.root, 'modelnet40_train.txt'))],
+ 'test': [line.rstrip() for line in open(os.path.join(self.root, 'modelnet40_test.txt'))]}
+
+ assert (split == 'train' or split == 'test')
+ shape_names = ['_'.join(x.split('_')[0:-1]) for x in shape_ids[split]]
+ # list of (shape_name, shape_txt_file_path) tuple
+ self.datapath = [(shape_names[i], os.path.join(self.root, shape_names[i], shape_ids[split][i]) + '.txt') for i
+ in range(len(shape_ids[split]))]
+ print('The size of %s data is %d' % (split, len(self.datapath)))
+
+ self.cache_size = cache_size # how many data points to cache in memory
+ self.cache = {} # from index to (point_set, cls) tuple
+
+ def __len__(self):
+ return len(self.datapath)
+
+ def _get_item(self, index):
+ if index in self.cache:
+ point_set, cls = self.cache[index]
+ else:
+ fn = self.datapath[index]
+ cls = self.classes[self.datapath[index][0]]
+ cls = np.array([cls]).astype(np.int32)
+ point_set = np.loadtxt(fn[1], delimiter=',').astype(np.float32)
+ if self.uniform:
+ point_set = farthest_point_sample(point_set, self.npoints)
+ else:
+ point_set = point_set[0:self.npoints, :]
+
+ point_set[:, 0:3] = pc_normalize(point_set[:, 0:3])
+
+ if not self.normal_channel:
+ point_set = point_set[:, 0:3]
+
+ if len(self.cache) < self.cache_size:
+ self.cache[index] = (point_set, cls)
+
+ return point_set, cls
+
+ def __getitem__(self, index):
+ return self._get_item(index)
+
+
+class General_CLSDataLoader_HDF5(Dataset):
+ def __init__(self, file_list, num_point=1024):
+ # self.root = root
+ self.num_point = num_point
+ self.file_list = file_list
+ self.points_list = np.zeros((1, num_point, 3))
+ self.labels_list = np.zeros((1,))
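+ # dummy first rows so np.concatenate has an array to append to; stripped below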
+
+ for file in self.file_list:
+ # pdb.set_trace()
+ # file = os.path.join(root, file)
+ # pdb.set_trace()
+ data, label = self.loadh5DataFile(file)
+ self.points_list = np.concatenate([self.points_list,
+ data[:, :self.num_point, :]], axis=0)
+ self.labels_list = np.concatenate([self.labels_list, label.ravel()], axis=0)
+
+ self.points_list = self.points_list[1:]
+ self.labels_list = self.labels_list[1:]
+ assert len(self.points_list) == len(self.labels_list)
+ print('Number of Objects: ', len(self.labels_list))
+
+ @staticmethod
+ def loadh5DataFile(PathtoFile):
+ f = h5py.File(PathtoFile, 'r')
+ return f['data'][:], f['label'][:]
+
+ def __len__(self):
+ return len(self.points_list)
+
+ def __getitem__(self, index):
+
+ point_xyz = self.points_list[index][:, 0:3]
+ point_label = self.labels_list[index].astype(np.int32)
+
+ return point_xyz, point_label
+
+
+class ModelNetJigsawDataLoader(Dataset):
+ def __init__(self, root=r'./data/modelnet40_ply_hdf5_2048/jigsaw',
+ n_points=1024, split='train', k=3):
+ self.npoints = n_points
+ self.root = root
+ self.split = split
+ assert split in ['train', 'test']
+ if self.split == 'train':
+ self.file_list = [d for d in os.listdir(root) if d.find('train') != -1]
+ else:
+ self.file_list = [d for d in os.listdir(root) if d.find('test') != -1]
+ self.points_list = np.zeros((1, n_points, 3))
+ self.labels_list = np.zeros((1, n_points))
+
+ for file in self.file_list:
+ file = os.path.join(root, file)
+ data, label = self.loadh5DataFile(file)
+ # data = np.load(root + file)
+ self.points_list = np.concatenate([self.points_list, data], axis=0) # .append(data)
+ self.labels_list = np.concatenate([self.labels_list, label], axis=0)
+ # self.labels_list.append(label)
+
+ self.points_list = self.points_list[1:]
+ self.labels_list = self.labels_list[1:]
+ assert len(self.points_list) == len(self.labels_list)
+ print('Number of %s Objects: '%self.split, len(self.labels_list))
+
+ # just use the average weights
+ self.labelweights = np.ones(k ** 3)
+
+ # pdb.set_trace()
+
+ @staticmethod
+ def loadh5DataFile(PathtoFile):
+ f = h5py.File(PathtoFile, 'r')
+ return f['data'][:], f['label'][:]
+
+ def __getitem__(self, index):
+
+ point_set = self.points_list[index][:, 0:3]
+ semantic_seg = self.labels_list[index].astype(np.int32)
+ # sample_weight = self.labelweights[semantic_seg]
+
+ # return point_set, semantic_seg, sample_weight
+ return point_set, semantic_seg
+
+ def __len__(self):
+ return len(self.points_list)
+
+
+if __name__ == '__main__':
+
+ data = ModelNetDataLoader('/data/modelnet40_normal_resampled/', split='train', uniform=False, normal_channel=True, )
+ DataLoader = torch.utils.data.DataLoader(data, batch_size=12, shuffle=True)
+ for point, label in DataLoader:
+ print(point.shape)
+ print(label.shape)
diff --git a/zoo/OcCo/OcCo_TF/utils/Train_Logger.py b/zoo/OcCo/OcCo_TF/utils/Train_Logger.py
new file mode 100644
index 0000000..1d1a4ba
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/Train_Logger.py
@@ -0,0 +1,152 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import os, logging, datetime, numpy as np, sklearn.metrics as metrics
+from pathlib import Path
+
+
+class TrainLogger:
+
+ def __init__(self, args, name='Model', subfold='cls', cls2name=None):
+ self.step = 1
+ self.epoch = 1
+ self.args = args
+ self.name = name
+ self.sf = subfold
+ self.make_logdir()
+ self.logger_setup()
+ self.epoch_init()
+ self.save_model = False
+ self.cls2name = cls2name
+ self.best_instance_acc, self.best_class_acc = 0., 0.
+ self.best_instance_epoch, self.best_class_epoch = 0, 0
+ self.savepath = str(self.checkpoints_dir) + '/best_model.pth'
+
+ def logger_setup(self):
+ self.logger = logging.getLogger(self.name)
+ self.logger.setLevel(logging.INFO)
+ formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+ file_handler = logging.FileHandler(os.path.join(self.log_dir, 'train_log.txt'))
+ file_handler.setLevel(logging.INFO)
+ file_handler.setFormatter(formatter)
+ # ref: https://stackoverflow.com/a/53496263/12525201
+ # define a Handler which writes INFO messages or higher to the sys.stderr
+ console = logging.StreamHandler()
+ console.setLevel(logging.INFO)
+ # logging.getLogger('').addHandler(console) # this is root logger
+ self.logger.addHandler(console)
+ self.logger.addHandler(file_handler)
+ self.logger.info('PARAMETER ...')
+ self.logger.info(self.args)
+ self.logger.removeHandler(console)
+
+ def make_logdir(self):
+ timestr = str(datetime.datetime.now().strftime('%Y-%m-%d_%H-%M'))
+ experiment_dir = Path('./log/')
+ experiment_dir.mkdir(exist_ok=True)
+ experiment_dir = experiment_dir.joinpath(self.sf)
+ experiment_dir.mkdir(exist_ok=True)
+
+ if self.args.log_dir is None:
+ self.experiment_dir = experiment_dir.joinpath(timestr)
+ else:
+ self.experiment_dir = experiment_dir.joinpath(self.args.log_dir)
+
+ self.experiment_dir.mkdir(exist_ok=True)
+ self.checkpoints_dir = self.experiment_dir.joinpath('checkpoints/')
+ self.checkpoints_dir.mkdir(exist_ok=True)
+ self.log_dir = self.experiment_dir.joinpath('logs/')
+ self.log_dir.mkdir(exist_ok=True)
+ self.experiment_dir.joinpath('runs').mkdir(exist_ok=True)
+
+ def epoch_init(self, training=True):
+ self.loss, self.count, self.pred, self.gt = 0., 0., [], []
+ if training:
+ self.logger.info('\nEpoch %d/%d:' % (self.epoch, self.args.epoch))
+
+ def step_update(self, pred, gt, loss, training=True):
+ if training:
+ self.step += 1
+ self.gt.append(gt)
+ self.pred.append(pred)
+ batch_size = len(pred)
+ self.count += batch_size
+ self.loss += loss * batch_size
+
+ def cls_epoch_update(self, training=True):
+ self.save_model = False
+ self.gt = np.concatenate(self.gt)
+ self.pred = np.concatenate(self.pred)
+ instance_acc = metrics.accuracy_score(self.gt, self.pred)
+ class_acc = metrics.balanced_accuracy_score(self.gt, self.pred)
+
+ if instance_acc > self.best_instance_acc and not training:
+ self.best_instance_acc = instance_acc
+ self.best_instance_epoch = self.epoch
+ self.save_model = True
+ if class_acc > self.best_class_acc and not training:
+ self.best_class_acc = class_acc
+ self.best_class_epoch = self.epoch
+
+ if not training:
+ self.epoch += 1
+ return instance_acc, class_acc
+
+ def seg_epoch_update(self, training=True):
+ self.save_model = False
+ self.gt = np.concatenate(self.gt)
+ self.pred = np.concatenate(self.pred)
+ instance_acc = metrics.accuracy_score(self.gt, self.pred)
+ if instance_acc > self.best_instance_acc and not training:
+ self.best_instance_acc = instance_acc
+ self.best_instance_epoch = self.epoch
+ self.save_model = True
+
+ if not training:
+ self.epoch += 1
+ return instance_acc
+
+ def epoch_summary(self, writer=None, training=True):
+ instance_acc, class_acc = self.cls_epoch_update(training)
+ if training:
+ if writer is not None:
+ writer.add_scalar('Train Class Accuracy', class_acc, self.step)
+ writer.add_scalar('Train Instance Accuracy', instance_acc, self.step)
+ self.logger.info('Train Instance Accuracy: %.3f, Class Accuracy: %.3f' % (instance_acc, class_acc))
+ else:
+ if writer is not None:
+ writer.add_scalar('Test Class Accuracy', class_acc, self.step)
+ writer.add_scalar('Test Instance Accuracy', instance_acc, self.step)
+ self.logger.info('Test Instance Accuracy: %.3f, Class Accuracy: %.3f' % (instance_acc, class_acc))
+ self.logger.info('Best Instance Accuracy: %.3f at Epoch %d ' % (
+ self.best_instance_acc, self.best_instance_epoch))
+ self.logger.info('Best Class Accuracy: %.3f at Epoch %d' % (
+ self.best_class_acc, self.best_class_epoch))
+
+ if self.save_model:
+ self.logger.info('Saving the Model Params to %s' % self.savepath)
+
+ def train_summary(self):
+ self.logger.info('\n\nEnd of Training...')
+ self.logger.info('Best Instance Accuracy: %.3f at Epoch %d ' % (
+ self.best_instance_acc, self.best_instance_epoch))
+ self.logger.info('Best Class Accuracy: %.3f at Epoch %d' % (
+ self.best_class_acc, self.best_class_epoch))
+
+ def update_from_checkpoints(self, checkpoint):
+ self.logger.info('Use Pre-Trained Weights')
+ self.step = checkpoint['step']
+ self.epoch = checkpoint['epoch']
+ self.best_instance_epoch, self.best_instance_acc = checkpoint['epoch'], checkpoint['instance_acc']
+ self.best_class_epoch, self.best_class_acc = checkpoint['best_class_epoch'], checkpoint['best_class_acc']
+ self.logger.info('Best Class Acc {:.3f} at Epoch {}'.format(self.best_class_acc, self.best_class_epoch))
+ self.logger.info('Best Instance Acc {:.3f} at Epoch {}'.format(self.best_instance_acc, self.best_instance_epoch))
+
+ def update_from_checkpoints_tf(self, checkpoint):
+ self.logger.info('Use Pre-Trained Weights')
+ self.step = checkpoint['step']
+ self.epoch = checkpoint['epoch']
+ self.best_instance_epoch, self.best_instance_acc = checkpoint['epoch'], checkpoint['instance_acc']
+ self.best_class_epoch, self.best_class_acc = checkpoint['best_class_epoch'], checkpoint['best_class_acc']
+ self.logger.info('Best Class Acc {:.3f} at Epoch {}'.format(self.best_class_acc, self.best_class_epoch))
+ self.logger.info('Best Instance Acc {:.3f} at Epoch {}'.format(self.best_instance_acc, self.best_instance_epoch))
diff --git a/zoo/OcCo/OcCo_TF/utils/__init__.py b/zoo/OcCo/OcCo_TF/utils/__init__.py
new file mode 100644
index 0000000..4acf5ef
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/__init__.py
@@ -0,0 +1,7 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+#
+from .tf_util import *
+from .pc_util import *
+from .io_util import *
+from .data_util import *
+from .visu_util import *
\ No newline at end of file
diff --git a/zoo/OcCo/OcCo_TF/utils/check_num_point.py b/zoo/OcCo/OcCo_TF/utils/check_num_point.py
new file mode 100644
index 0000000..910f277
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/check_num_point.py
@@ -0,0 +1,83 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import numpy as np
+import os, json, argparse
+from data_util import lmdb_dataflow
+from io_util import read_pcd
+from tqdm import tqdm
+
+MODELNET40_PATH = r"../render/dump_modelnet_normalised_"
+SCANNET10_PATH = r"../data/ScanNet10"
+SHAPENET8_PATH = r"../data/shapenet"
+
+
+if __name__ == "__main__":
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--dataset', type=str, default='modelnet40', help="modelnet40, shapenet8 or scannet10")
+
+ args = parser.parse_args()
+ os.system("mkdir -p ./dump_sum_points")
+
+ if args.dataset == 'modelnet40':
+ shape_names = open(r'../render/shape_names.txt').read().splitlines()
+ file_ = open(r'../render/ModelNet_flist_normalised.txt').read().splitlines()
+
+ print("=== ModelNet40 ===\n")
+ for t in ['train', 'test']:
+ # for res in ['fine', 'middle', 'coarse', 'supercoarse']:
+ for res in ['supercoarse']:
+ sum_dict = {}
+ for shape in shape_names:
+ sum_dict[shape] = np.zeros(3,dtype=np.int32) # num of objects, num of points, average
+
+ model_list = [_file for _file in file_ if t in _file]
+ for model_id in tqdm(model_list):
+ model_name = model_id.split('/')[0]
+ for i in range(10):
+ partial_pc = read_pcd(os.path.join(MODELNET40_PATH + res, 'pcd', model_id + '_%d.pcd' % i))
+ sum_dict[model_name][1] += len(partial_pc)
+ sum_dict[model_name][0] += 1
+
+ sum_dict[model_name][2] = sum_dict[model_name][1]/sum_dict[model_name][0]
+
+ f = open("./dump_sum_points/modelnet40_%s_%s.txt" % (t, res), "w+")
+ for key in sum_dict.keys():
+ f.writelines([key, str(sum_dict[key]), '\n'])
+ f.close()
+ print("=== ModelNet40 %s %s Done ===\n" % (t, res))
+
+ elif args.dataset == 'shapenet8':
+ print("\n\n=== ShapeNet8 ===\n")
+ for t in ['train', 'valid']:
+ sum_dict = json.loads(open(os.path.join(SHAPENET8_PATH, 'keys.json')).read())
+ for key in sum_dict.keys():
+ sum_dict[key] = np.zeros(3) # num of objects, num of points, average
+
+ # the data stored in the lmdb files is with varying number of points
+ df, num = lmdb_dataflow(lmdb_path=os.path.join(SHAPENET8_PATH, '%s.lmdb' % t),
+ batch_size=1, input_size=1000000, output_size=1, is_training=False)
+
+ data_gen = df.get_data()
+ for _ in tqdm(range(num)):
+ ids, _, npts, _ = next(data_gen)
+ model_name = ids[0][:8]
+ sum_dict[model_name][1] += npts[0]
+ sum_dict[model_name][0] += 1
+
+ sum_dict[model_name][2] = sum_dict[model_name][1] / sum_dict[model_name][0]
+
+ f = open("./dump_sum_points/shapenet8_%s.json" % t, "w+")
+ for key in sum_dict.keys():
+ f.writelines([key, str(sum_dict[key]), '\n'])
+ # f.write(json.dumps(sum_dict))
+ f.close()
+ print("=== ShapeNet8 %s Done ===\n" % t)
+
+ elif args.dataset == 'scannet10':
+ print("\n\n=== ScanNet10 is not ready yet ===\n")
+
+ else:
+ raise ValueError('Assigned dataset does not exist.')
diff --git a/zoo/OcCo/OcCo_TF/utils/check_scale.py b/zoo/OcCo/OcCo_TF/utils/check_scale.py
new file mode 100644
index 0000000..2b2acdd
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/check_scale.py
@@ -0,0 +1,89 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import numpy as np
+import os, open3d, sys
+from data_util import lmdb_dataflow # used by the PCN section below
+
+LOG_F = open(r'./scale_sum_modelnet40raw.txt', 'w+')
+open3d.utility.set_verbosity_level = 0
+
+def log_string(msg):
+ print(msg)
+ LOG_F.writelines(msg + '\n')
+
+
+if __name__ == "__main__":
+
+ lmdb_f = r'./data/shapenet/train.lmdb'
+ modelnet_raw_path = r'./data/modelnet40_raw/'
+ shapenet_raw_path = r'./data/ShapeNet_raw/'
+ modelnet40_pn_processed_f = r'./data/'
+
+ off_set, max_radius = 0, 0
+
+ '''=== ModelNet40 ==='''
+ log_string('=== ModelNet40 Raw ===\n\n\n')
+ for root, dirs, files in os.walk(modelnet_raw_path):
+ for name in files:
+ if '.ply' in name:
+ mesh = open3d.io.read_triangle_mesh(os.path.join(root, name))
+ off_set_bias = (mesh.get_center()**2).sum()
+
+ if off_set_bias > off_set:
+ off_set = off_set_bias
+ log_string('update offset: %f by %s' % (off_set, os.path.join(root, name)))
+ radius_bias = (np.asarray(mesh.vertices)**2).sum(axis=1).max()
+
+ if radius_bias > max_radius:
+ max_radius = radius_bias
+ log_string('update max radius: %f by %s' %(max_radius, os.path.join(root, name)))
+ log_string('\n\n\n=== sum for ShapeNetCorev2 ===')
+ log_string('===offset:%f, radius:%f===\n\n\n'%(off_set, max_radius))
+
+ sys.exit('finish computing ModelNet40') # early exit: remove to also run the ShapeNetCore / PCN sections below
+
+
+ '''=== ShapeNetCore ==='''
+ log_string('=== now on ShapeNetCorev2 ===\n\n\n')
+ for root, dirs, files in os.walk(shapenet_raw_path):
+ for name in files:
+ if '.obj' in name:
+ mesh = open3d.io.read_triangle_mesh(os.path.join(root, name))
+ off_set_bias = (mesh.get_center()**2).sum()
+ if off_set_bias > off_set:
+ off_set = off_set_bias
+ log_string('update offset: %f by %s' % (off_set, os.path.join(root, name)))
+
+ radius_bias = (np.asarray(mesh.vertices)**2).sum(axis=1).max()
+
+ if radius_bias > max_radius:
+ max_radius = radius_bias
+ log_string('update max radius: %f by %s' %(max_radius, os.path.join(root, name)))
+
+ log_string('\n\n\n=== sum for ShapeNetCorev2 ===')
+ log_string('===offset:%f, radius:%f===\n\n\n'%(off_set, max_radius))
+
+ sys.exit('finish computing ShapeNetCorev2')
+
+ '''=== PCN ==='''
+ log_string('===now on PCN cleaned subset of ShapeNet===\n\n\n')
+ df_train, num_train = lmdb_dataflow(lmdb_path = lmdb_f, batch_size=1,
+ input_size=3000, output_size=16384, is_training=True)
+ train_gen = df_train.get_data()
+
+ for idx in range(231792):
+ ids, _, _, gt = next(train_gen)
+ off_set_bias = (gt.mean(axis=1)**2).sum()
+
+ if off_set_bias > off_set:
+ off_set = off_set_bias
+ log_string('update offset: %f by %d, %s' % (off_set, idx, ids))
+
+ radius_bias = (gt**2).sum(axis=2).max()
+
+ if radius_bias > max_radius:
+ max_radius = radius_bias
+ log_string('update max radius: %f by %d, %s' %(max_radius, idx, ids))
+
+ log_string('\n\n\n===for PCN cleaned subset of ShapeNet===')
+ log_string('===offset:%f, radius:%f===\n\n\n'%(off_set, max_radius))
+
diff --git a/zoo/OcCo/OcCo_TF/utils/data_util.py b/zoo/OcCo/OcCo_TF/utils/data_util.py
new file mode 100644
index 0000000..d99da0c
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/data_util.py
@@ -0,0 +1,101 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref:
+
+import numpy as np, tensorflow as tf
+from tensorpack import dataflow
+
+
+def resample_pcd(pcd, n):
+ """drop or duplicate points so that input of each object has exactly n points"""
+ idx = np.random.permutation(pcd.shape[0])
+ if idx.shape[0] < n:
+ idx = np.concatenate([idx, np.random.randint(pcd.shape[0], size=n-pcd.shape[0])])
+ return pcd[idx[:n]]
+
+
+class PreprocessData(dataflow.ProxyDataFlow):
+ def __init__(self, ds, input_size, output_size):
+ # ds: the wrapped dataflow, which yields (id, input, gt) tuples
+ super(PreprocessData, self).__init__(ds)
+ self.input_size = input_size
+ self.output_size = output_size
+
+ def get_data(self):
+ for id, input, gt in self.ds.get_data():
+ input = resample_pcd(input, self.input_size)
+ gt = resample_pcd(gt, self.output_size)
+ yield id, input, gt
+
+
+class BatchData(dataflow.ProxyDataFlow):
+ def __init__(self, ds, batch_size, input_size, gt_size, remainder=False, use_list=False):
+ super(BatchData, self).__init__(ds)
+ self.batch_size = batch_size
+ self.input_size = input_size
+ self.gt_size = gt_size
+ self.remainder = remainder
+ self.use_list = use_list
+
+ def __len__(self):
+ """get the number of batches"""
+ ds_size = len(self.ds)
+ div = ds_size // self.batch_size
+ rem = ds_size % self.batch_size
+ if rem == 0:
+ return div
+ return div + int(self.remainder) # int(False) == 0
+
+ def __iter__(self):
+ """generating data in batches"""
+ holder = []
+ for data in self.ds:
+ holder.append(data)
+ if len(holder) == self.batch_size:
+ yield self._aggregate_batch(holder, self.use_list)
+ del holder[:] # reset holder as empty list => holder = []
+ if self.remainder and len(holder) > 0:
+ yield self._aggregate_batch(holder, self.use_list)
+
+ def _aggregate_batch(self, data_holder, use_list=False):
+ """
+ Concatenate input points along the 0-th dimension
+ Stack all other data along the 0-th dimension
+ """
+ ids = np.stack([x[0] for x in data_holder])
+ inputs = [resample_pcd(x[1], self.input_size) if x[1].shape[0] > self.input_size else x[1]
+ for x in data_holder]
+ inputs = np.expand_dims(np.concatenate([x for x in inputs]), 0).astype(np.float32)
+ npts = np.stack([x[1].shape[0] if x[1].shape[0] < self.input_size else self.input_size
+ for x in data_holder]).astype(np.int32)
+ gts = np.stack([resample_pcd(x[2], self.gt_size) for x in data_holder]).astype(np.float32)
+ return ids, inputs, npts, gts
+
+
+def lmdb_dataflow(lmdb_path, batch_size, input_size, output_size, is_training, test_speed=False):
+ """load LMDB files, then generate batches??"""
+ df = dataflow.LMDBSerializer.load(lmdb_path, shuffle=False)
+ size = df.size()
+ if is_training:
+ df = dataflow.LocallyShuffleData(df, buffer_size=2000) # buffer_size
+ df = dataflow.PrefetchData(df, nr_prefetch=500, nr_proc=1) # multiprocess the data
+ df = BatchData(df, batch_size, input_size, output_size)
+ if is_training:
+ df = dataflow.PrefetchDataZMQ(df, nr_proc=8)
+ df = dataflow.RepeatedData(df, -1)
+ if test_speed:
+ dataflow.TestDataSpeed(df, size=1000).start()
+ df.reset_state()
+ return df, size
+
+
+def get_queued_data(generator, dtypes, shapes, queue_capacity=10):
+ assert len(dtypes) == len(shapes), 'dtypes and shapes must have the same length'
+ queue = tf.FIFOQueue(queue_capacity, dtypes, shapes)
+ placeholders = [tf.placeholder(dtype, shape) for dtype, shape in zip(dtypes, shapes)]
+ enqueue_op = queue.enqueue(placeholders)
+ close_op = queue.close(cancel_pending_enqueues=True)
+ feed_fn = lambda: {placeholder: value for placeholder, value in zip(placeholders, next(generator))}
+ queue_runner = tf.contrib.training.FeedingQueueRunner(
+ queue, [enqueue_op], close_op, feed_fns=[feed_fn])
+ tf.train.add_queue_runner(queue_runner)
+ return queue.dequeue()
diff --git a/zoo/OcCo/OcCo_TF/utils/io_util.py b/zoo/OcCo/OcCo_TF/utils/io_util.py
new file mode 100644
index 0000000..ef79cf4
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/io_util.py
@@ -0,0 +1,46 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import h5py, numpy as np
+from open3d.open3d.geometry import PointCloud
+from open3d.open3d.utility import Vector3dVector
+from open3d.open3d.io import read_point_cloud, write_point_cloud
+
+
+def read_pcd(filename):
+ pcd = read_point_cloud(filename)
+ return np.array(pcd.points)
+
+
+def save_pcd(filename, points):
+ pcd = PointCloud()
+ pcd.points = Vector3dVector(points)
+ write_point_cloud(filename, pcd)
+
+
+def shuffle_data(data, labels):
+ """ Shuffle data and labels """
+ idx = np.arange(len(labels))
+ np.random.shuffle(idx)
+ return data[idx, ...], labels[idx], idx
+
+
+def loadh5DataFile(PathtoFile):
+ f = h5py.File(PathtoFile, 'r')
+ return f['data'][:], f['label'][:]
+
+
+def getDataFiles(list_filename):
+ return [line.rstrip() for line in open(list_filename)]
+
+
+def save_h5(h5_filename, data, label, data_dtype='uint8', label_dtype='uint8'):
+ h5_fout = h5py.File(h5_filename, 'w')
+ h5_fout.create_dataset(
+ name='data', data=data,
+ compression='gzip', compression_opts=4,
+ dtype=data_dtype)
+ h5_fout.create_dataset(
+ name='label', data=label,
+ compression='gzip', compression_opts=1,
+ dtype=label_dtype)
+ h5_fout.close()
diff --git a/zoo/OcCo/OcCo_TF/utils/pc_util.py b/zoo/OcCo/OcCo_TF/utils/pc_util.py
new file mode 100644
index 0000000..caccafe
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/pc_util.py
@@ -0,0 +1,97 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import numpy as np
+
+
+def jitter_point_cloud(batch_data, sigma=0.01, clip=0.05):
+ """ Randomly jitter points. jittering is per point.
+ Input:
+ BxNx3 array, original batch of point clouds
+ Return:
+ BxNx3 array, jittered batch of point clouds
+ """
+ B, N, C = batch_data.shape
+ assert(clip > 0)
+ jittered_data = np.clip(sigma * np.random.randn(B, N, C), -1*clip, clip)
+ jittered_data += batch_data
+ return jittered_data
+
+
+def rotate_point_cloud(batch_data):
+ """ Randomly rotate the point clouds to argument the dataset
+ rotation is per shape based along up direction
+ Input:
+ BxNx3 array, original batch of point clouds
+ Return:
+ BxNx3 array, rotated batch of point clouds
+ """
+ rotated_data = np.zeros(batch_data.shape, dtype=np.float32)
+ for k in range(batch_data.shape[0]):
+ rotation_angle = np.random.uniform() * 2 * np.pi
+ cosval = np.cos(rotation_angle)
+ sinval = np.sin(rotation_angle)
+ rotation_matrix = np.array([[cosval, 0, sinval],
+ [0, 1, 0],
+ [-sinval, 0, cosval]])
+ shape_pc = batch_data[k, ...]
+ rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)
+ return rotated_data
+
+
+def rotate_point_cloud_by_angle(batch_data, rotation_angle):
+ """ Rotate the point cloud along up direction with certain angle.
+ Input:
+ BxNx3 array, original batch of point clouds
+ Return:
+ BxNx3 array, rotated batch of point clouds
+ """
+ rotated_data = np.zeros(batch_data.shape, dtype=np.float32)
+ for k in range(batch_data.shape[0]):
+ # rotation_angle = np.random.uniform() * 2 * np.pi
+ cosval = np.cos(rotation_angle)
+ sinval = np.sin(rotation_angle)
+ rotation_matrix = np.array([[cosval, 0, sinval],
+ [0, 1, 0],
+ [-sinval, 0, cosval]])
+ shape_pc = batch_data[k, ...]
+ rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)
+ return rotated_data
+
+
+def random_point_dropout(batch_pc, max_dropout_ratio=0.875):
+ """ batch_pc: BxNx3 """
+ for b in range(batch_pc.shape[0]):
+ # np.random.random() -> Return random floats in the half-open interval [0.0, 1.0).
+ dropout_ratio = np.random.random() * max_dropout_ratio # 0 ~ 0.875
+ drop_idx = np.where(np.random.random((batch_pc.shape[1])) <= dropout_ratio)[0]
+ if len(drop_idx) > 0:
+ batch_pc[b, drop_idx, :] = batch_pc[b, 0, :] # set to the first point
+ return batch_pc
+
+
+def random_scale_point_cloud(batch_data, scale_low=0.8, scale_high=1.25):
+ """ Randomly scale the point cloud. Scale is per point cloud.
+ Input:
+ BxNx3 array, original batch of point clouds
+ Return:
+ BxNx3 array, scaled batch of point clouds
+ """
+ B, N, C = batch_data.shape
+ scales = np.random.uniform(scale_low, scale_high, B)
+ for batch_index in range(B):
+ batch_data[batch_index, :, :] *= scales[batch_index]
+ return batch_data
+
+
+def random_shift_point_cloud(batch_data, shift_range=0.1):
+ """ Randomly shift point cloud. Shift is per point cloud.
+ Input:
+ BxNx3 array, original batch of point clouds
+ Return:
+ BxNx3 array, shifted batch of point clouds
+ """
+ B, N, C = batch_data.shape
+ shifts = np.random.uniform(-shift_range, shift_range, (B, 3))
+ for batch_index in range(B):
+ batch_data[batch_index, :, :] += shifts[batch_index, :]
+ return batch_data
diff --git a/zoo/OcCo/OcCo_TF/utils/tf_util.py b/zoo/OcCo/OcCo_TF/utils/tf_util.py
new file mode 100644
index 0000000..d532dd9
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/tf_util.py
@@ -0,0 +1,517 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import tensorflow as tf
+try:
+ from pc_distance import tf_nndistance, tf_approxmatch
+except:
+ pass
+
+'''mlp and conv1d with stride 1 are different'''
+
+
+def mlp(features, layer_dims, bn=None, bn_params=None):
+ # doc: https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/fully_connected
+ for i, num_outputs in enumerate(layer_dims[:-1]):
+ features = tf.contrib.layers.fully_connected(
+ features, num_outputs,
+ normalizer_fn=bn,
+ normalizer_params=bn_params,
+ scope='fc_%d' % i)
+ outputs = tf.contrib.layers.fully_connected(
+ features, layer_dims[-1],
+ activation_fn=None,
+ scope='fc_%d' % (len(layer_dims) - 1))
+ return outputs
+
+
+def mlp_conv(inputs, layer_dims, bn=None, bn_params=None):
+ # doc: https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/conv1d
+ for i, num_out_channel in enumerate(layer_dims[:-1]):
+ inputs = tf.contrib.layers.conv1d(
+ inputs, num_out_channel,
+ kernel_size=1,
+ normalizer_fn=bn,
+ normalizer_params=bn_params,
+ scope='conv_%d' % i)
+ # kernel size -> single value for all spatial dimensions
+ # the size of filter should be (1, 3)
+ outputs = tf.contrib.layers.conv1d(
+ inputs, layer_dims[-1],
+ kernel_size=1,
+ activation_fn=None,
+ scope='conv_%d' % (len(layer_dims) - 1))
+ return outputs
+
+
+def point_maxpool(inputs, npts, keepdims=False):
+    # split the packed batch by per-cloud point counts, then max-pool over the points of each cloud
+ outputs = [tf.reduce_max(f, axis=1, keepdims=keepdims) for f in tf.split(inputs, npts, axis=1)]
+ return tf.concat(outputs, axis=0)
+
+
+def point_unpool(inputs, npts):
+ inputs = tf.split(inputs, inputs.shape[0], axis=0)
+ outputs = [tf.tile(f, [1, npts[i], 1]) for i, f in enumerate(inputs)]
+ return tf.concat(outputs, axis=1)
+
+
+def chamfer(pcd1, pcd2):
+ """Normalised Chamfer Distance"""
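+    # CD(P1, P2) = ( mean_i min_j ||p1_i - p2_j|| + mean_j min_i ||p2_j - p1_i|| ) / 2;
+    # tf_nndistance returns *squared* nearest-neighbour distances, hence the sqrt below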
+ dist1, _, dist2, _ = tf_nndistance.nn_distance(pcd1, pcd2)
+ dist1 = tf.reduce_mean(tf.sqrt(dist1))
+ dist2 = tf.reduce_mean(tf.sqrt(dist2))
+ return (dist1 + dist2) / 2
+
+
+def earth_mover(pcd1, pcd2):
+ """Normalised Earth Mover Distance"""
+    assert pcd1.shape[1] == pcd2.shape[1]  # both clouds must have the same number of points
+ num_points = tf.cast(pcd1.shape[1], tf.float32)
+ match = tf_approxmatch.approx_match(pcd1, pcd2)
+ cost = tf_approxmatch.match_cost(pcd1, pcd2, match)
+ return tf.reduce_mean(cost / num_points)
+
+
+def add_train_summary(name, value):
+ tf.summary.scalar(name, value, collections=['train_summary'])
+
+
+def add_valid_summary(name, value):
+ avg, update = tf.metrics.mean(value)
+ tf.summary.scalar(name, avg, collections=['valid_summary'])
+ return update
+
+
+''' === borrowed from PointNet === '''
+
+
+def _variable_on_cpu(name, shape, initializer, use_fp16=False):
+ """Helper to create a Variable stored on CPU memory.
+ Args:
+ name: name of the variable
+ shape: list of ints
+ initializer: initializer for Variable
+ use_fp16: use 16 bit float or 32 bit float
+ Returns:
+ Variable Tensor
+ """
+ with tf.device('/cpu:0'):
+ dtype = tf.float16 if use_fp16 else tf.float32
+ var = tf.get_variable(name, shape, initializer=initializer, dtype=dtype)
+ return var
+
+
+def _variable_with_weight_decay(name, shape, stddev, wd, use_xavier=True):
+ """Helper to create an initialized Variable with weight decay.
+
+ Note that the Variable is initialized with a truncated normal distribution.
+ A weight decay is added only if one is specified.
+
+ Args:
+ name: name of the variable
+ shape: list of ints
+ stddev: standard deviation of a truncated Gaussian
+ wd: add L2Loss weight decay multiplied by this float. If None, weight
+ decay is not added for this Variable.
+ use_xavier: bool, whether to use xavier initializer
+
+ Returns:
+ Variable Tensor
+ """
+ if use_xavier:
+ initializer = tf.contrib.layers.xavier_initializer()
+ else:
+ initializer = tf.truncated_normal_initializer(stddev=stddev)
+ var = _variable_on_cpu(name, shape, initializer)
+ if wd is not None:
+ weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
+ tf.add_to_collection('losses', weight_decay)
+ return var
+
+
+def batch_norm_template(inputs, is_training, scope, moments_dims, bn_decay):
+ """ Batch normalization on convolutional maps and beyond...
+ Ref.: http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow
+
+ Args:
+ inputs: Tensor, k-D input ... x C could be BC or BHWC or BDHWC
+ is_training: boolean tf.Variable, true indicates training phase
+ scope: string, variable scope
+ moments_dims: a list of ints, indicating dimensions for moments calculation
+ bn_decay: float or float tensor variable, controlling moving average weight
+ Return:
+ normed: batch-normalized maps
+ """
+ with tf.variable_scope(scope) as sc:
+ num_channels = inputs.get_shape()[-1].value
+ beta = tf.Variable(tf.constant(0.0, shape=[num_channels]),
+ name='beta', trainable=True)
+ gamma = tf.Variable(tf.constant(1.0, shape=[num_channels]),
+ name='gamma', trainable=True)
+ batch_mean, batch_var = tf.nn.moments(inputs, moments_dims, name='moments') # basically just mean and variance
+ decay = bn_decay if bn_decay is not None else 0.9
+ ema = tf.train.ExponentialMovingAverage(decay=decay)
+ # Operator that maintains moving averages of variables.
+ ema_apply_op = tf.cond(is_training,
+ lambda: ema.apply([batch_mean, batch_var]),
+ lambda: tf.no_op())
+
+ # Update moving average and return current batch's avg and var.
+ def mean_var_with_update():
+ with tf.control_dependencies([ema_apply_op]):
+ return tf.identity(batch_mean), tf.identity(batch_var)
+
+ # ema.average returns the Variable holding the average of var.
+ mean, var = tf.cond(is_training,
+ mean_var_with_update,
+ lambda: (ema.average(batch_mean), ema.average(batch_var)))
+ normed = tf.nn.batch_normalization(inputs, mean, var, beta, gamma, 1e-3)
+ '''tf.nn.batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None)'''
+    # here beta and gamma are not the 3rd/4th moments; they are the learned shift and scale:
+    # y_i = gamma * x_i + beta, where x_i is the normalised result
+ # ref: https://towardsdatascience.com/batch-normalization-theory-and-how-to-use-it-with-tensorflow-1892ca0173ad
+ return normed
+
+
+def batch_norm_for_fc(inputs, is_training, bn_decay, scope):
+ """ Batch normalization on FC data.
+
+ Args:
+ inputs: Tensor, 2D BxC input
+ is_training: boolean tf.Variable, true indicates training phase
+ bn_decay: float or float tensor variable, controlling moving average weight
+ scope: string, variable scope
+ Return:
+ normed: batch-normalized maps
+ """
+ return batch_norm_template(inputs, is_training, scope, [0,], bn_decay)
+
+
+def fully_connected(inputs,
+ num_outputs,
+ scope,
+ use_xavier=True,
+ stddev=1e-3,
+ weight_decay=0.0,
+ activation_fn=tf.nn.relu,
+ bn=False,
+ bias=True,
+ bn_decay=None,
+ is_training=None):
+ """ Fully connected layer with non-linear operation.
+
+ Args:
+ inputs: 2-D tensor BxN
+ num_outputs: int
+
+ Returns:
+ Variable tensor of size B x num_outputs.
+ """
+ with tf.variable_scope(scope) as sc:
+ num_input_units = inputs.get_shape()[-1].value
+ weights = _variable_with_weight_decay('weights',
+ shape=[num_input_units, num_outputs],
+ use_xavier=use_xavier,
+ stddev=stddev,
+ wd=weight_decay)
+ outputs = tf.matmul(inputs, weights)
+ if bias:
+ biases = _variable_on_cpu('biases', [num_outputs],
+ tf.constant_initializer(0.0))
+ outputs = tf.nn.bias_add(outputs, biases)
+
+ if bn:
+ outputs = batch_norm_for_fc(outputs, is_training, bn_decay, 'bn')
+
+ if activation_fn is not None:
+ outputs = activation_fn(outputs)
+ return outputs
+
+
+def max_pool2d(inputs,
+ kernel_size,
+ scope,
+ stride=[2, 2],
+ padding='VALID'):
+ """ 2D max pooling.
+
+ Args:
+ inputs: 4-D tensor BxHxWxC
+ kernel_size: a list of 2 ints
+ stride: a list of 2 ints
+
+ Returns:
+ Variable tensor
+ """
+ with tf.variable_scope(scope) as sc:
+ kernel_h, kernel_w = kernel_size
+ stride_h, stride_w = stride
+ outputs = tf.nn.max_pool(inputs,
+ ksize=[1, kernel_h, kernel_w, 1],
+ strides=[1, stride_h, stride_w, 1],
+ padding=padding,
+ name=sc.name)
+ '''
+ tf.nn.max_pool(value, ksize, strides, padding, data_format='NHWC',
+ name=None, input=None)
+ value: (NHWC) -> Number of Batch * In Height * In Width * In Channel
+    ksize: the size of the pooling window for each dimension of the input
+
+ '''
+ return outputs
+
+
+def dropout(inputs,
+ is_training,
+ scope,
+ keep_prob=0.5,
+ noise_shape=None):
+ """ Dropout layer.
+
+ Args:
+ inputs: tensor
+ is_training: boolean tf.Variable
+ scope: string
+ keep_prob: float in [0,1]
+ noise_shape: list of ints
+
+ Returns:
+ tensor variable
+ """
+ with tf.variable_scope(scope) as sc:
+ outputs = tf.cond(is_training,
+ lambda: tf.nn.dropout(inputs, keep_prob, noise_shape),
+ lambda: inputs)
+ return outputs
+
+
+def conv2d(inputs,
+ num_output_channels,
+ kernel_size,
+ scope,
+ stride=[1, 1],
+ padding='SAME',
+ use_xavier=True,
+ stddev=1e-3,
+ weight_decay=0.0,
+ activation_fn=tf.nn.relu,
+ bn=False,
+ bias=True,
+ bn_decay=None,
+ is_training=None):
+ """ 2D convolution with non-linear operation.
+
+ Args:
+ inputs: 4-D tensor variable BxHxWxC (Batch Size * Height * Width * Channel)
+ num_output_channels: int
+ kernel_size: a list of 2 ints
+ scope: string
+ stride: a list of 2 ints
+ padding: 'SAME' or 'VALID'
+ use_xavier: bool, use xavier_initializer if true,
+ xavier initializer is the weights initialization technique
+ that tries to make the variance of the outputs of a layer
+ to be equal to the variance of its inputs
+ stddev: float, stddev for truncated_normal init
+ weight_decay: float
+ activation_fn: function
+ bn: bool, whether to use batch norm
+ bias: bool, whether to add bias or not
+    bn_decay: float or float tensor variable in [0,1], decay for the batch norm moving averages
+ is_training: bool Tensor variable
+
+ Returns:
+ Variable tensor
+ """
+ with tf.variable_scope(scope) as sc:
+ # either [1, 1] or [1, 3]
+ kernel_h, kernel_w = kernel_size
+ # 64, 128, 256, 512, 1028
+ num_in_channels = inputs.get_shape()[-1].value
+ kernel_shape = [kernel_h, kernel_w,
+ num_in_channels, num_output_channels]
+
+        # weight_decay is not used here; since we use the xavier initializer,
+        # stddev is also ignored (it only applies to truncated_normal_initializer())
+ kernel = _variable_with_weight_decay('weights',
+ shape=kernel_shape,
+ use_xavier=use_xavier,
+ stddev=stddev,
+ wd=weight_decay)
+ # always [1, 1]
+ stride_h, stride_w = stride
+
+ # tf.nn.conv2d(input, filters, strides, padding, data_format='NHWC', dilations=None, name=None)
+ # filters -> [filter_height, filter_width, in_channels, out_channels], [1,1,1,1] or [1,1,3,1]
+ # -> Point-Based MLPs
+ outputs = tf.nn.conv2d(inputs, kernel,
+ [1, stride_h, stride_w, 1],
+ padding=padding)
+ if bias:
+ biases = _variable_on_cpu('biases', [num_output_channels], tf.constant_initializer(0.0))
+ outputs = tf.nn.bias_add(outputs, biases)
+
+ # always use batch normalisation
+ if bn:
+ outputs = batch_norm_for_conv2d(outputs, is_training, bn_decay=bn_decay, scope='bn')
+
+ # always use relu activation function
+ if activation_fn is not None:
+ outputs = activation_fn(outputs)
+ return outputs
+
+
+def batch_norm_for_conv2d(inputs, is_training, bn_decay, scope):
+ """ Batch normalization on 2D convolutional maps. """
+ return batch_norm_template(inputs, is_training, scope, [0, 1, 2], bn_decay)
+
+
+def batch_norm_template(inputs, is_training, scope, moments_dims, bn_decay):
+ """ Batch normalization on convolutional maps and beyond...
+ Ref.: http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow
+
+ Args:
+ inputs: Tensor, k-D input ... x C could be BC or BHWC or BDHWC
+ is_training: boolean tf.Variable, true indicates training phase
+ scope: string, variable scope
+ moments_dims: a list of ints, indicating dimensions for moments calculation
+ bn_decay: float or float tensor variable, controlling moving average weight
+ Return:
+ normed: batch-normalized maps
+ """
+ with tf.variable_scope(scope) as sc:
+ num_channels = inputs.get_shape()[-1].value
+ beta = tf.Variable(tf.constant(0.0, shape=[num_channels]),
+ name='beta', trainable=True)
+ gamma = tf.Variable(tf.constant(1.0, shape=[num_channels]),
+ name='gamma', trainable=True)
+ batch_mean, batch_var = tf.nn.moments(inputs, moments_dims, name='moments')
+ # basically mean and variance
+ decay = bn_decay if bn_decay is not None else 0.9
+        # in PointNet this appears to be set to 0.7
+
+ ema = tf.train.ExponentialMovingAverage(decay=decay)
+ # Operator that maintains moving averages of variables.
+ ema_apply_op = tf.cond(is_training,
+ lambda: ema.apply([batch_mean, batch_var]),
+ lambda: tf.no_op())
+
+ # Update moving average and return current batch's avg and var
+ # TODO: what is the window size???
+ def mean_var_with_update():
+ with tf.control_dependencies([ema_apply_op]):
+ return tf.identity(batch_mean), tf.identity(batch_var)
+
+ # ema.average returns the Variable holding the average of var.
+ mean, var = tf.cond(is_training,
+ mean_var_with_update,
+ lambda: (ema.average(batch_mean), ema.average(batch_var)))
+ normed = tf.nn.batch_normalization(inputs, mean, var, beta, gamma, 1e-3)
+ '''tf.nn.batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None)'''
+        # here beta and gamma are not the 3rd/4th moments; they are the learned shift and scale:
+        # y_i = gamma * x_i + beta, where x_i is the normalised result
+ # ref: https://towardsdatascience.com/batch-normalization-theory-and-how-to-use-it-with-tensorflow-1892ca0173ad
+ return normed
+
+
+def pairwise_distance(point_cloud):
+ """Compute pairwise distance of a point cloud.
+ Args:
+ point_cloud: tensor (batch_size, num_points, num_dims)
+
+ Returns:
+ pairwise distance: (batch_size, num_points, num_points)
+ """
+
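+    # squared pairwise distances via the expansion ||x_i - x_j||^2 = ||x_i||^2 - 2 x_i.x_j + ||x_j||^2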
+ og_batch_size = point_cloud.get_shape().as_list()[0]
+ point_cloud = tf.squeeze(point_cloud)
+ if og_batch_size == 1:
+ point_cloud = tf.expand_dims(point_cloud, 0)
+
+ point_cloud_transpose = tf.transpose(point_cloud, perm=[0, 2, 1])
+ point_cloud_inner = tf.matmul(point_cloud, point_cloud_transpose)
+ point_cloud_inner = -2 * point_cloud_inner
+ point_cloud_square = tf.reduce_sum(tf.square(point_cloud), axis=-1, keep_dims=True)
+ point_cloud_square_tranpose = tf.transpose(point_cloud_square, perm=[0, 2, 1])
+ return point_cloud_square + point_cloud_inner + point_cloud_square_tranpose
+
+
+def knn(adj_matrix, k=20):
+ """ Get KNN based on the pairwise distance.
+ Args:
+ pairwise distance: (batch_size, num_points, num_points)
+ k: int
+
+ Returns:
+ nearest neighbors: (batch_size, num_points, k)
+ """
+ neg_adj = - adj_matrix
+ _, nn_idx = tf.nn.top_k(neg_adj, k=k)
+ return nn_idx
+
+
+def get_edge_feature(point_cloud, nn_idx, k=20):
+ """Construct edge feature for each point
+ Args:
+ point_cloud: (batch_size, num_points, 1, num_dims)
+ nn_idx: (batch_size, num_points, k)
+ k: int
+ Returns:
+ edge features: (batch_size, num_points, k, num_dims)
+ """
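+    # DGCNN edge feature: for each point x_i and its k nearest neighbours x_j,
+    # concatenate (x_j - x_i) with the central point x_i along the channel axis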
+ og_batch_size = point_cloud.get_shape().as_list()[0]
+ point_cloud = tf.squeeze(point_cloud)
+ if og_batch_size == 1:
+ point_cloud = tf.expand_dims(point_cloud, 0)
+
+ point_cloud_central = point_cloud
+
+ point_cloud_shape = point_cloud.get_shape()
+ batch_size = point_cloud_shape[0].value
+ num_points = point_cloud_shape[1].value
+ num_dims = point_cloud_shape[2].value
+
+ idx_ = tf.range(batch_size) * num_points
+ idx_ = tf.reshape(idx_, [batch_size, 1, 1])
+
+ point_cloud_flat = tf.reshape(point_cloud, [-1, num_dims])
+ point_cloud_neighbors = tf.gather(point_cloud_flat, nn_idx + idx_)
+ point_cloud_central = tf.expand_dims(point_cloud_central, axis=-2)
+
+ point_cloud_central = tf.tile(point_cloud_central, [1, 1, k, 1])
+
+ # edge_feature = tf.concat([point_cloud_central, point_cloud_neighbors - point_cloud_central], axis=-1)
+ edge_feature = tf.concat([point_cloud_neighbors - point_cloud_central, point_cloud_central], axis=-1)
+
+ return edge_feature
+
+
+def get_learning_rate(batch, base_lr, batch_size, decay_step, decay_rate, lr_clip):
+ learning_rate = tf.train.exponential_decay(
+ base_lr, # Base learning rate.
+ batch * batch_size, # Current index into the dataset.
+ decay_step, # Decay step.
+ decay_rate, # Decay rate.
+ staircase=True)
+ learning_rate = tf.maximum(learning_rate, lr_clip) # CLIP THE LEARNING RATE!
+ return learning_rate
+
+
+def get_lr_dgcnn(batch, base_lr, batch_size, decay_step, alpha):
+ learning_rate = tf.train.cosine_decay(
+ base_lr, # Base learning rate.
+ batch * batch_size, # Current index into the dataset.
+ decay_step, # Decay step.
+ alpha) # alpha.
+ return learning_rate
+
+
+def get_bn_decay(batch, bn_init_decay, batch_size, bn_decay_step, bn_decay_rate, bn_decay_clip):
+ bn_momentum = tf.train.exponential_decay(
+ bn_init_decay,
+ batch * batch_size,
+ bn_decay_step,
+ bn_decay_rate,
+ staircase=True)
+ bn_decay = tf.minimum(bn_decay_clip, 1 - bn_momentum)
+ return bn_decay
diff --git a/zoo/OcCo/OcCo_TF/utils/transfer_pretrained_w.py b/zoo/OcCo/OcCo_TF/utils/transfer_pretrained_w.py
new file mode 100644
index 0000000..2aa4d45
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/transfer_pretrained_w.py
@@ -0,0 +1,85 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import os, argparse, tensorflow as tf
+from tensorflow.python import pywrap_tensorflow
+from termcolor import colored
+
+def load_para_from_saved_model(model_path, verbose=False):
+    """load all parameters from a saved TensorFlow checkpoint
+    the returned format is a dict -> {var_name(str): var_value(numpy array)}"""
+ reader = pywrap_tensorflow.NewCheckpointReader(model_path)
+ var_to_map = reader.get_variable_to_shape_map()
+
+ print('\n============================')
+ print('model checkpoint: ', model_path)
+ print('checkpoint has been loaded')
+ for key in var_to_map.keys():
+ var_to_map[key] = reader.get_tensor(key)
+ if verbose:
+ print('tensor_name:', key, ' shape:', reader.get_tensor(key).shape)
+ print('============================\n')
+ return var_to_map
+
+
+def intersec_saved_var(model_path1, model_path2, verbose=False):
+ """find the intersection of two saved models in terms of variable names"""
+ var_to_map_1 = load_para_from_saved_model(model_path1, verbose=verbose)
+ var_to_map_2 = load_para_from_saved_model(model_path2, verbose=verbose)
+
+    # list of shared variable names
+ intersect = [*set(var_to_map_1.keys()).intersection(set(var_to_map_2.keys())), ]
+
+ if verbose:
+ print('\n=======================')
+ print('the shared variables are:')
+ print(intersect)
+
+ return var_to_map_1, var_to_map_2, intersect
+
+
+def load_pretrained_var(source_model_path, target_model_path, verbose=False):
+ """save the parameters from source to target for variables in the intersection"""
+ var_map_source, var_map_target, intersect = intersec_saved_var(
+ source_model_path, target_model_path, verbose=verbose)
+
+ out_f = open('para_restored.txt', 'w+')
+
+ with tf.Session() as my_sess:
+ new_var_list = []
+ for var in var_map_target.keys():
+ # pdb.set_trace()
+ if (var in intersect) and (var_map_source[var].shape == var_map_target[var].shape):
+ new_var = tf.Variable(var_map_source[var], name=var)
+ if verbose:
+ print('%s has been restored from the pre-trained %s' % (var, source_model_path))
+ out_f.writelines('Restored: %s has been restored from the pre-trained %s\n' % (var, source_model_path))
+ else:
+ new_var = tf.Variable(var_map_target[var], name=var)
+ if verbose:
+                    print('%s has been kept from the randomly initialised %s' % (var, target_model_path))
+ out_f.writelines('Random Initialised: %s\n' % var)
+ new_var_list.append(new_var)
+ print('start to write the new checkpoint')
+ my_sess.run(tf.global_variables_initializer())
+ my_saver = tf.train.Saver(var_list=new_var_list)
+ my_saver.save(my_sess, target_model_path)
+        print(colored('source weights have been restored', 'white', 'on_blue'))
+
+ my_sess.close()
+ out_f.close()
+ return None
+
+
+if __name__ == '__main__':
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--source_path', default='./pretrained/pcn_cd')
+ parser.add_argument('--target_path', default='./log/pcn_cls_shapenet8_pretrained_init/model.ckpt')
+ parser.add_argument('--gpu', default='0')
+ parser.add_argument('--verbose', type=bool, default=True)
+ args = parser.parse_args()
+
+ os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
+ os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu)
+
+ load_pretrained_var(args.source_path, args.target_path, args.verbose)
diff --git a/zoo/OcCo/OcCo_TF/utils/transform_nets.py b/zoo/OcCo/OcCo_TF/utils/transform_nets.py
new file mode 100644
index 0000000..a034c34
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/transform_nets.py
@@ -0,0 +1,153 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+
+import os, sys, numpy as np, tensorflow as tf
+import tf_util
+
+
+def input_transform_net_dgcnn(edge_feature, is_training, bn_decay=None, K=3):
+ """ Input (XYZ) Transform Net, input is BxNx3 gray image
+ Return:
+ Transformation matrix of size 3xK """
+
+ batch_size = edge_feature.get_shape()[0].value
+ num_point = edge_feature.get_shape()[1].value
+
+ net = tf_util.conv2d(edge_feature, 64, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=is_training,
+ scope='tconv1', bn_decay=bn_decay)
+ net = tf_util.conv2d(net, 128, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=is_training,
+ scope='tconv2', bn_decay=bn_decay)
+
+ net = tf.reduce_max(net, axis=-2, keep_dims=True)
+
+ net = tf_util.conv2d(net, 1024, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=is_training,
+ scope='tconv3', bn_decay=bn_decay)
+ net = tf_util.max_pool2d(net, [num_point, 1],
+ padding='VALID', scope='tmaxpool')
+
+ net = tf.reshape(net, [batch_size, -1])
+ net = tf_util.fully_connected(net, 512, bn=True, is_training=is_training,
+ scope='tfc1', bn_decay=bn_decay)
+ net = tf_util.fully_connected(net, 256, bn=True, is_training=is_training,
+ scope='tfc2', bn_decay=bn_decay)
+
+ with tf.variable_scope('transform_XYZ') as sc:
+ # assert(K==3)
+ with tf.device('/cpu:0'):
+ weights = tf.get_variable('weights', [256, K * K],
+ initializer=tf.constant_initializer(0.0),
+ dtype=tf.float32)
+ biases = tf.get_variable('biases', [K * K],
+ initializer=tf.constant_initializer(0.0),
+ dtype=tf.float32)
+ biases += tf.constant(np.eye(K).flatten(), dtype=tf.float32)
+ transform = tf.matmul(net, weights)
+ transform = tf.nn.bias_add(transform, biases)
+
+ transform = tf.reshape(transform, [batch_size, K, K])
+ return transform
+
+
+def input_transform_net(point_cloud, is_training, bn_decay=None, K=3):
+ """ Input (XYZ) Transform Net, input is BxNx3 gray image
+ Return:
+ Transformation matrix of size 3xK """
+ # print('the input shape for t-net:', point_cloud.get_shape())
+ batch_size = point_cloud.get_shape()[0].value
+ num_point = point_cloud.get_shape()[1].value
+ # point_cloud -> Tensor of (batch size, number of points, 3d coordinates)
+
+ input_image = tf.expand_dims(point_cloud, -1)
+ # point_cloud -> (batch size, number of points, 3d coordinates, 1)
+ # batch size * height * width * channel
+
+ '''tf_util.conv2d(inputs, num_output_channels, kernel_size, scope, stride=[1, 1], padding='SAME',
+ use_xavier=True, stddev=1e-3, weight_decay=0.0, activation_fn=tf.nn.relu,
+ bn=False, bn_decay=None(default is set to 0.9), is_training=None)'''
+ net = tf_util.conv2d(input_image, 64, [1, 3],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=is_training,
+ scope='tconv1', bn_decay=bn_decay)
+ net = tf_util.conv2d(net, 128, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=is_training,
+ scope='tconv2', bn_decay=bn_decay)
+ net = tf_util.conv2d(net, 1024, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=is_training,
+ scope='tconv3', bn_decay=bn_decay)
+
+ # net = mlp_conv(input_image, [64, 128, 1024])
+ net = tf_util.max_pool2d(net, [num_point, 1],
+ padding='VALID', scope='tmaxpool')
+ '''(default stride: (2, 2))'''
+ # net = tf.reduce_max(net, axis=1, keep_dims=True, name='tmaxpool')
+
+ net = tf.reshape(net, [batch_size, -1])
+ net = tf_util.fully_connected(net, 512, bn=True, is_training=is_training,
+ scope='tfc1', bn_decay=bn_decay)
+ net = tf_util.fully_connected(net, 256, bn=True, is_training=is_training,
+ scope='tfc2', bn_decay=bn_decay)
+
+ with tf.variable_scope('transform_XYZ') as sc:
+ assert (K == 3)
+ weights = tf.get_variable('weights', [256, 3 * K],
+ initializer=tf.constant_initializer(0.0),
+ dtype=tf.float32)
+ biases = tf.get_variable('biases', [3 * K],
+ initializer=tf.constant_initializer(0.0),
+ dtype=tf.float32)
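+        # start from the flattened 3x3 identity so the learnt transform is initially a no-op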
+ biases += tf.constant([1, 0, 0, 0, 1, 0, 0, 0, 1], dtype=tf.float32)
+ transform = tf.matmul(net, weights)
+ transform = tf.nn.bias_add(transform, biases)
+
+ transform = tf.reshape(transform, [batch_size, 3, K])
+ return transform
+
+
+def feature_transform_net(inputs, is_training, bn_decay=None, K=64):
+ """ Feature Transform Net, input is BxNx1xK
+ Return:
+ Transformation matrix of size KxK """
+ batch_size = inputs.get_shape()[0].value
+ num_point = inputs.get_shape()[1].value
+
+ net = tf_util.conv2d(inputs, 64, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=is_training,
+ scope='tconv1', bn_decay=bn_decay)
+ net = tf_util.conv2d(net, 128, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=is_training,
+ scope='tconv2', bn_decay=bn_decay)
+ net = tf_util.conv2d(net, 1024, [1, 1],
+ padding='VALID', stride=[1, 1],
+ bn=True, is_training=is_training,
+ scope='tconv3', bn_decay=bn_decay)
+ net = tf_util.max_pool2d(net, [num_point, 1],
+ padding='VALID', scope='tmaxpool')
+
+ net = tf.reshape(net, [batch_size, -1])
+ net = tf_util.fully_connected(net, 512, bn=True, is_training=is_training,
+ scope='tfc1', bn_decay=bn_decay)
+ net = tf_util.fully_connected(net, 256, bn=True, is_training=is_training,
+ scope='tfc2', bn_decay=bn_decay)
+
+ with tf.variable_scope('transform_feat') as sc:
+ weights = tf.get_variable('weights', [256, K * K],
+ initializer=tf.constant_initializer(0.0),
+ dtype=tf.float32)
+ biases = tf.get_variable('biases', [K * K],
+ initializer=tf.constant_initializer(0.0),
+ dtype=tf.float32)
+ biases += tf.constant(np.eye(K).flatten(), dtype=tf.float32)
+ transform = tf.matmul(net, weights)
+ transform = tf.nn.bias_add(transform, biases)
+
+ transform = tf.reshape(transform, [batch_size, K, K])
+ return transform
diff --git a/zoo/OcCo/OcCo_TF/utils/visu_util.py b/zoo/OcCo/OcCo_TF/utils/visu_util.py
new file mode 100644
index 0000000..71fbfda
--- /dev/null
+++ b/zoo/OcCo/OcCo_TF/utils/visu_util.py
@@ -0,0 +1,43 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+# Original Author: Wentao Yuan (wyuan1@cs.cmu.edu) 05/31/2018
+
+import numpy as np
+from matplotlib import pyplot as plt
+from mpl_toolkits.mplot3d import Axes3D
+from open3d.open3d.io import read_point_cloud
+# from open3d.open3d_pybind.io import read_point_cloud
+
+
+def plot_pcd_three_views(filename, pcds, titles, suptitle='', sizes=None, cmap='Reds', zdir='y',
+ xlim=(-0.3, 0.3), ylim=(-0.3, 0.3), zlim=(-0.3, 0.3)):
+ if sizes is None:
+ sizes = [0.5 for _ in range(len(pcds))]
+ fig = plt.figure(figsize=(len(pcds) * 3, 9))
+ for i in range(3):
+ elev = 30
+ azim = -45 + 90 * i
+ for j, (pcd, size) in enumerate(zip(pcds, sizes)):
+ color = pcd[:, 0]
+ ax = fig.add_subplot(3, len(pcds), i * len(pcds) + j + 1, projection='3d')
+ ax.view_init(elev, azim)
+ ax.scatter(pcd[:, 0], pcd[:, 1], pcd[:, 2], zdir=zdir, c=color, s=size, cmap=cmap, vmin=-1, vmax=0.5)
+ ax.set_title(titles[j])
+ ax.set_axis_off()
+ ax.set_xlim(xlim)
+ ax.set_ylim(ylim)
+ ax.set_zlim(zlim)
+ plt.subplots_adjust(left=0.05, right=0.95, bottom=0.05, top=0.9, wspace=0.1, hspace=0.1)
+ plt.suptitle(suptitle)
+ fig.savefig(filename)
+ plt.close(fig)
+
+
+if __name__ == "__main__":
+ filenames = ['airplane.pcd', 'car.pcd', 'chair.pcd', 'lamp.pcd'] # '../demo_data'
+ for file in filenames:
+ filename = file.replace('.pcd', '')
+ pcds = [np.asarray(read_point_cloud('../demo_data/' + file).points)]
+ titles = ['viewpoint 1', 'viewpoint 2', 'viewpoint 3']
+ plot_pcd_three_views(
+ filename, pcds, titles, suptitle=filename, sizes=None, cmap='viridis', zdir='y',
+ xlim=(-0.3, 0.3), ylim=(-0.3, 0.3), zlim=(-0.3, 0.3))
diff --git a/zoo/OcCo/OcCo_Torch/Requirements_Torch.txt b/zoo/OcCo/OcCo_Torch/Requirements_Torch.txt
new file mode 100644
index 0000000..8e6028c
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/Requirements_Torch.txt
@@ -0,0 +1,14 @@
+# Originally Designed for Docker Environment:
+# PyTorch 1.3.0, Python 3.6, CUDA 10.1
+# install PyTorch first if not using docker
+lmdb >= 0.98
+h5py >= 2.10.0
+future >= 0.18.2
+pyarrow >= 1.0.0
+open3d == 0.9.0.0
+matplotlib >= 3.3.0
+tensorpack == 0.9.8
+tensorboard >= 1.15.0
+python-prctl >= 1.5.0
+open3d-python==0.7.0.0
+scikit-learn >= 0.23.1
diff --git a/zoo/OcCo/OcCo_Torch/bash_template/train_cls_template.sh b/zoo/OcCo/OcCo_Torch/bash_template/train_cls_template.sh
new file mode 100644
index 0000000..11553e4
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/bash_template/train_cls_template.sh
@@ -0,0 +1,46 @@
+#!/usr/bin/env bash
+
+cd ../
+
+# training pointnet on ModelNet40, from scratch
+python train_cls.py \
+ --gpu 0 \
+ --model pointnet_cls \
+ --dataset modelnet40 \
+ --log_dir modelnet40_pointnet_scratch ;
+
+
+# fine tuning pcn on ScanNet10, using jigsaw pre-trained checkpoints
+python train_cls.py \
+ --gpu 0 \
+ --model pcn_cls \
+ --dataset scannet10 \
+ --log_dir scannet10_pcn_jigsaw \
+ --restore \
+ --restore_path log/jigsaw/modelnet_pcn_vanilla/checkpoints/best_model.pth ;
+
+
+# fine tuning dgcnn on ScanObjectNN(OBJ_BG), using jigsaw pre-trained checkpoints
+python train_cls.py \
+ --gpu 0,1 \
+ --epoch 250 \
+ --use_sgd \
+ --scheduler cos \
+ --model dgcnn_cls \
+ --dataset scanobjectnn \
+ --bn \
+ --log_dir scanobjectnn_dgcnn_occo \
+ --restore \
+ --restore_path log/completion/modelnet_dgcnn_vanilla/checkpoints/best_model.pth ;
+
+
+# test pointnet on ModelNet40 from pre-trained checkpoints
+python train_cls.py \
+ --gpu 1 \
+ --epoch 1 \
+ --mode test \
+ --model pointnet_cls \
+ --dataset modelnet40 \
+ --log_dir modelnet40_pointnet_scratch \
+ --restore \
+ --restore_path log/cls/modelnet40_pointnet_scratch/checkpoints/best_model.pth ;
diff --git a/zoo/OcCo/OcCo_Torch/bash_template/train_completion_template.sh b/zoo/OcCo/OcCo_Torch/bash_template/train_completion_template.sh
new file mode 100644
index 0000000..f9f4fb6
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/bash_template/train_completion_template.sh
@@ -0,0 +1,18 @@
+#!/usr/bin/env bash
+
+cd ../
+
+# train pointnet-occo model on ModelNet, from scratch
+python train_completion.py \
+ --gpu 0,1 \
+ --dataset modelnet \
+ --model pointnet_occo \
+ --log_dir modelnet_pointnet_vanilla ;
+
+# train dgcnn-occo model on ShapeNet, from scratch
+python train_completion.py \
+ --gpu 0,1 \
+ --batch_size 16 \
+ --dataset shapenet \
+ --model dgcnn_occo \
+ --log_dir shapenet_dgcnn_vanilla ;
diff --git a/zoo/OcCo/OcCo_Torch/bash_template/train_jigsaw_template.sh b/zoo/OcCo/OcCo_Torch/bash_template/train_jigsaw_template.sh
new file mode 100644
index 0000000..e445843
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/bash_template/train_jigsaw_template.sh
@@ -0,0 +1,22 @@
+#!/usr/bin/env bash
+
+cd ../
+
+# train pointnet_jigsaw on ModelNet40, from scratch
+python train_jigsaw.py \
+ --gpu 0 \
+ --model pointnet_jigsaw \
+ --bn_decay \
+ --xavier_init \
+ --optimiser Adam \
+ --scheduler step \
+ --log_dir modelnet40_pointnet_scratch ;
+
+
+# train dgcnn_jigsaw on ModelNet40, from scratch
+python train_jigsaw.py \
+ --gpu 0 \
+ --model dgcnn_jigsaw \
+ --optimiser SGD \
+ --scheduler cos \
+  --log_dir modelnet40_dgcnn_scratch ;
diff --git a/zoo/OcCo/OcCo_Torch/bash_template/train_partseg_template.sh b/zoo/OcCo/OcCo_Torch/bash_template/train_partseg_template.sh
new file mode 100644
index 0000000..7eaf085
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/bash_template/train_partseg_template.sh
@@ -0,0 +1,49 @@
+#!/usr/bin/env bash
+
+cd ../
+
+# training pointnet on ShapeNetPart, from scratch
+python train_partseg.py \
+ --gpu 0 \
+ --normal \
+ --bn_decay \
+ --xavier_init \
+ --model pointnet_partseg \
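+    # "dropped" points are overwritten with the first point rather than removed,
+    # so the tensor shape stays fixed while the effective point density drops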
+ --log_dir pointnet_scratch ;
+
+
+# fine tuning pcn on ShapeNetPart, using jigsaw pre-trained checkpoints
+python train_partseg.py \
+ --gpu 0 \
+ --normal \
+ --bn_decay \
+ --xavier_init \
+ --model pcn_partseg \
+ --log_dir pcn_jigsaw \
+ --restore \
+ --restore_path log/jigsaw/modelnet_pcn_vanilla/checkpoints/best_model.pth ;
+
+
+# fine tuning dgcnn on ShapeNetPart, using occo pre-trained checkpoints
+python train_partseg.py \
+  --gpu 0,1 \
+ --normal \
+ --use_sgd \
+ --xavier_init \
+ --scheduler cos \
+ --model dgcnn_partseg \
+ --log_dir dgcnn_occo \
+ --restore \
+ --restore_path log/completion/modelnet_dgcnn_vanilla/checkpoints/best_model.pth ;
+
+
+# test fine tuned pointnet on ShapeNetPart, using multiple votes
+python train_partseg.py \
+ --gpu 0 \
+ --epoch 1 \
+ --mode test \
+ --num_votes 3 \
+ --model pointnet_partseg \
+ --log_dir pointnet_scratch \
+ --restore \
+ --restore_path log/partseg/pointnet_occo/checkpoints/best_model.pth ;
diff --git a/zoo/OcCo/OcCo_Torch/bash_template/train_semseg_template.sh b/zoo/OcCo/OcCo_Torch/bash_template/train_semseg_template.sh
new file mode 100644
index 0000000..39c026c
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/bash_template/train_semseg_template.sh
@@ -0,0 +1,43 @@
+#!/usr/bin/env bash
+
+cd ../
+
+# train pointnet_semseg on 6-fold cv of S3DIS, from scratch
+for area in $(seq 1 1 6)
+do
+python train_semseg.py \
+ --gpu 0,1 \
+ --model pointnet_semseg \
+ --bn_decay \
+ --xavier_init \
+ --test_area ${area} \
+ --scheduler step \
+ --log_dir pointnet_area${area}_scratch ;
+done
+
+# fine tune pcn_semseg on 6-fold cv of S3DIS, using jigsaw pre-trained weights
+for area in $(seq 1 1 6)
+do
+python train_semseg.py \
+ --gpu 0,1 \
+ --model pcn_semseg \
+ --bn_decay \
+ --test_area ${area} \
+ --log_dir pcn_area${area}_jigsaw \
+ --restore \
+ --restore_path log/jigsaw/modelnet_pcn_vanilla/checkpoints/best_model.pth ;
+done
+
+# fine tune dgcnn_semseg on 6-fold cv of S3DIS, using occo pre-trained weights
+for area in $(seq 1 1 6)
+do
+python train_semseg.py \
+ --gpu 0,1 \
+ --test_area ${area} \
+ --optimizer sgd \
+ --scheduler cos \
+ --model dgcnn_semseg \
+ --log_dir dgcnn_area${area}_occo \
+ --restore \
+ --restore_path log/completion/modelnet_dgcnn_vanilla/checkpoints/best_model.pth ;
+done
diff --git a/zoo/OcCo/OcCo_Torch/bash_template/train_svm_template.sh b/zoo/OcCo/OcCo_Torch/bash_template/train_svm_template.sh
new file mode 100644
index 0000000..6c747e4
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/bash_template/train_svm_template.sh
@@ -0,0 +1,30 @@
+#!/usr/bin/env bash
+
+cd ../
+
+# fit a linear svm on ModelNet40 encoded by OcCo PointNet
+python train_svm.py \
+ --gpu 0 \
+ --model pointnet_util \
+ --dataset modelnet40 \
+ --restore_path log/completion/modelnet_pointnet_vanilla/checkpoints/best_model.pth ;
+
+
+# grid search the best parameters of a svm with rbf kernel on ModelNet40 encoded by OcCo PCN
+python train_svm.py \
+ --gpu 0 \
+ --grid_search \
+ --model pcn_util \
+ --dataset modelnet40 \
+ --restore_path log/completion/modelnet_pcn_vanilla/checkpoints/best_model.pth ;
+
+
+# ... on ScanObjectNN(OBJ_BG) encoded by OcCo DGCNN
+python train_svm.py \
+ --gpu 0 \
+ --grid_search \
+ --batch_size 8 \
+ --model dgcnn_util \
+ --dataset scanobjectnn \
+ --bn \
+ --restore_path log/completion/modelnet_dgcnn_vanilla/checkpoints/best_model.pth ;
diff --git a/zoo/OcCo/OcCo_Torch/chamfer_distance/__init__.py b/zoo/OcCo/OcCo_Torch/chamfer_distance/__init__.py
new file mode 100644
index 0000000..2e15be7
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/chamfer_distance/__init__.py
@@ -0,0 +1 @@
+from .chamfer_distance import ChamferDistance
diff --git a/zoo/OcCo/OcCo_Torch/chamfer_distance/chamfer_distance.cpp b/zoo/OcCo/OcCo_Torch/chamfer_distance/chamfer_distance.cpp
new file mode 100644
index 0000000..40f3d79
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/chamfer_distance/chamfer_distance.cpp
@@ -0,0 +1,185 @@
+#include <torch/extension.h>
+
+// CUDA forward declarations
+int ChamferDistanceKernelLauncher(
+ const int b, const int n,
+ const float* xyz,
+ const int m,
+ const float* xyz2,
+ float* result,
+ int* result_i,
+ float* result2,
+ int* result2_i);
+
+int ChamferDistanceGradKernelLauncher(
+ const int b, const int n,
+ const float* xyz1,
+ const int m,
+ const float* xyz2,
+ const float* grad_dist1,
+ const int* idx1,
+ const float* grad_dist2,
+ const int* idx2,
+ float* grad_xyz1,
+ float* grad_xyz2);
+
+
+void chamfer_distance_forward_cuda(
+ const at::Tensor xyz1,
+ const at::Tensor xyz2,
+ const at::Tensor dist1,
+ const at::Tensor dist2,
+ const at::Tensor idx1,
+ const at::Tensor idx2)
+{
+    ChamferDistanceKernelLauncher(xyz1.size(0), xyz1.size(1), xyz1.data<float>(),
+                                  xyz2.size(1), xyz2.data<float>(),
+                                  dist1.data<float>(), idx1.data<int>(),
+                                  dist2.data<float>(), idx2.data<int>());
+}
+
+void chamfer_distance_backward_cuda(
+ const at::Tensor xyz1,
+ const at::Tensor xyz2,
+ at::Tensor gradxyz1,
+ at::Tensor gradxyz2,
+ at::Tensor graddist1,
+ at::Tensor graddist2,
+ at::Tensor idx1,
+ at::Tensor idx2)
+{
+    ChamferDistanceGradKernelLauncher(xyz1.size(0), xyz1.size(1), xyz1.data<float>(),
+                                      xyz2.size(1), xyz2.data<float>(),
+                                      graddist1.data<float>(), idx1.data<int>(),
+                                      graddist2.data<float>(), idx2.data<int>(),
+                                      gradxyz1.data<float>(), gradxyz2.data<float>());
+}
+
+
+void nnsearch(
+ const int b, const int n, const int m,
+ const float* xyz1,
+ const float* xyz2,
+ float* dist,
+ int* idx)
+{
+ for (int i = 0; i < b; i++) {
+ for (int j = 0; j < n; j++) {
+ const float x1 = xyz1[(i*n+j)*3+0];
+ const float y1 = xyz1[(i*n+j)*3+1];
+ const float z1 = xyz1[(i*n+j)*3+2];
+ double best = 0;
+ int besti = 0;
+ for (int k = 0; k < m; k++) {
+ const float x2 = xyz2[(i*m+k)*3+0] - x1;
+ const float y2 = xyz2[(i*m+k)*3+1] - y1;
+ const float z2 = xyz2[(i*m+k)*3+2] - z1;
+ const double d=x2*x2+y2*y2+z2*z2;
+ if (k==0 || d < best){
+ best = d;
+ besti = k;
+ }
+ }
+ dist[i*n+j] = best;
+ idx[i*n+j] = besti;
+ }
+ }
+}
+
+
+void chamfer_distance_forward(
+ const at::Tensor xyz1,
+ const at::Tensor xyz2,
+ const at::Tensor dist1,
+ const at::Tensor dist2,
+ const at::Tensor idx1,
+ const at::Tensor idx2)
+{
+ const int batchsize = xyz1.size(0);
+ const int n = xyz1.size(1);
+ const int m = xyz2.size(1);
+
+    const float* xyz1_data = xyz1.data<float>();
+    const float* xyz2_data = xyz2.data<float>();
+    float* dist1_data = dist1.data<float>();
+    float* dist2_data = dist2.data<float>();
+    int* idx1_data = idx1.data<int>();
+    int* idx2_data = idx2.data<int>();
+
+ nnsearch(batchsize, n, m, xyz1_data, xyz2_data, dist1_data, idx1_data);
+ nnsearch(batchsize, m, n, xyz2_data, xyz1_data, dist2_data, idx2_data);
+}
+
+
+void chamfer_distance_backward(
+ const at::Tensor xyz1,
+ const at::Tensor xyz2,
+ at::Tensor gradxyz1,
+ at::Tensor gradxyz2,
+ at::Tensor graddist1,
+ at::Tensor graddist2,
+ at::Tensor idx1,
+ at::Tensor idx2)
+{
+ const int b = xyz1.size(0);
+ const int n = xyz1.size(1);
+ const int m = xyz2.size(1);
+
+    const float* xyz1_data = xyz1.data<float>();
+    const float* xyz2_data = xyz2.data<float>();
+    float* gradxyz1_data = gradxyz1.data<float>();
+    float* gradxyz2_data = gradxyz2.data<float>();
+    float* graddist1_data = graddist1.data<float>();
+    float* graddist2_data = graddist2.data<float>();
+    const int* idx1_data = idx1.data<int>();
+    const int* idx2_data = idx2.data<int>();
+
+ for (int i = 0; i < b*n*3; i++)
+ gradxyz1_data[i] = 0;
+ for (int i = 0; i < b*m*3; i++)
+ gradxyz2_data[i] = 0;
+ for (int i = 0;i < b; i++) {
+ for (int j = 0; j < n; j++) {
+ const float x1 = xyz1_data[(i*n+j)*3+0];
+ const float y1 = xyz1_data[(i*n+j)*3+1];
+ const float z1 = xyz1_data[(i*n+j)*3+2];
+ const int j2 = idx1_data[i*n+j];
+
+ const float x2 = xyz2_data[(i*m+j2)*3+0];
+ const float y2 = xyz2_data[(i*m+j2)*3+1];
+ const float z2 = xyz2_data[(i*m+j2)*3+2];
+ const float g = graddist1_data[i*n+j]*2;
+
+ gradxyz1_data[(i*n+j)*3+0] += g*(x1-x2);
+ gradxyz1_data[(i*n+j)*3+1] += g*(y1-y2);
+ gradxyz1_data[(i*n+j)*3+2] += g*(z1-z2);
+ gradxyz2_data[(i*m+j2)*3+0] -= (g*(x1-x2));
+ gradxyz2_data[(i*m+j2)*3+1] -= (g*(y1-y2));
+ gradxyz2_data[(i*m+j2)*3+2] -= (g*(z1-z2));
+ }
+ for (int j = 0; j < m; j++) {
+ const float x1 = xyz2_data[(i*m+j)*3+0];
+ const float y1 = xyz2_data[(i*m+j)*3+1];
+ const float z1 = xyz2_data[(i*m+j)*3+2];
+ const int j2 = idx2_data[i*m+j];
+ const float x2 = xyz1_data[(i*n+j2)*3+0];
+ const float y2 = xyz1_data[(i*n+j2)*3+1];
+ const float z2 = xyz1_data[(i*n+j2)*3+2];
+ const float g = graddist2_data[i*m+j]*2;
+ gradxyz2_data[(i*m+j)*3+0] += g*(x1-x2);
+ gradxyz2_data[(i*m+j)*3+1] += g*(y1-y2);
+ gradxyz2_data[(i*m+j)*3+2] += g*(z1-z2);
+ gradxyz1_data[(i*n+j2)*3+0] -= (g*(x1-x2));
+ gradxyz1_data[(i*n+j2)*3+1] -= (g*(y1-y2));
+ gradxyz1_data[(i*n+j2)*3+2] -= (g*(z1-z2));
+ }
+ }
+}
+
+
+PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
+ m.def("forward", &chamfer_distance_forward, "ChamferDistance forward");
+ m.def("forward_cuda", &chamfer_distance_forward_cuda, "ChamferDistance forward (CUDA)");
+ m.def("backward", &chamfer_distance_backward, "ChamferDistance backward");
+ m.def("backward_cuda", &chamfer_distance_backward_cuda, "ChamferDistance backward (CUDA)");
+}
diff --git a/zoo/OcCo/OcCo_Torch/chamfer_distance/chamfer_distance.cu b/zoo/OcCo/OcCo_Torch/chamfer_distance/chamfer_distance.cu
new file mode 100644
index 0000000..f10f2ba
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/chamfer_distance/chamfer_distance.cu
@@ -0,0 +1,209 @@
+#include <ATen/ATen.h>
+
+#include <cuda.h>
+#include <cuda_runtime.h>
+
+__global__
+void ChamferDistanceKernel(
+ int b,
+ int n,
+ const float* xyz,
+ int m,
+ const float* xyz2,
+ float* result,
+ int* result_i)
+{
+ const int batch=512;
+ __shared__ float buf[batch*3];
+    // the reference implementation unrolls the inner loop; this sketch keeps the
+    // same tiled nearest-neighbour search in plain form
+    for (int i=blockIdx.x; i<b; i+=gridDim.x) {
+        for (int k2=0; k2<m; k2+=batch) {
+            // stage a tile of xyz2 into shared memory
+            const int end_k = min(m, k2+batch) - k2;
+            for (int j=threadIdx.x; j<end_k*3; j+=blockDim.x)
+                buf[j] = xyz2[(i*m+k2)*3+j];
+            __syncthreads();
+            // each thread scans the tile for its assigned points of xyz
+            for (int j=threadIdx.x+blockIdx.y*blockDim.x; j<n; j+=blockDim.x*gridDim.y) {
+                const float x1 = xyz[(i*n+j)*3+0];
+                const float y1 = xyz[(i*n+j)*3+1];
+                const float z1 = xyz[(i*n+j)*3+2];
+                float best = 0;
+                int best_i = 0;
+                for (int k=0; k<end_k; k++) {
+                    const float x2 = buf[k*3+0] - x1;
+                    const float y2 = buf[k*3+1] - y1;
+                    const float z2 = buf[k*3+2] - z1;
+                    const float d = x2*x2 + y2*y2 + z2*z2;
+                    if (k==0 || d<best) {
+                        best = d;
+                        best_i = k + k2;
+                    }
+                }
+                if (k2==0 || result[(i*n+j)]>best) {
+                    result[(i*n+j)] = best;
+                    result_i[(i*n+j)] = best_i;
+                }
+            }
+            __syncthreads();
+        }
+    }
+}
+
+void ChamferDistanceKernelLauncher(
+ const int b, const int n,
+ const float* xyz,
+ const int m,
+ const float* xyz2,
+ float* result,
+ int* result_i,
+ float* result2,
+ int* result2_i)
+{
+    // launch configuration follows the reference pyTorchChamferDistance implementation
+    ChamferDistanceKernel<<<dim3(32, 16, 1), 512>>>(b, n, xyz, m, xyz2, result, result_i);
+    ChamferDistanceKernel<<<dim3(32, 16, 1), 512>>>(b, m, xyz2, n, xyz, result2, result2_i);
+
+ cudaError_t err = cudaGetLastError();
+ if (err != cudaSuccess)
+ printf("error in chamfer distance updateOutput: %s\n", cudaGetErrorString(err));
+}
+
+
+__global__
+void ChamferDistanceGradKernel(
+ int b, int n,
+ const float* xyz1,
+ int m,
+ const float* xyz2,
+ const float* grad_dist1,
+ const int* idx1,
+ float* grad_xyz1,
+ float* grad_xyz2)
+{
+    // sketch of the reference backward kernel: scatter gradients of the squared
+    // nearest-neighbour distances back to both point clouds via atomic adds
+    for (int i = blockIdx.x; i < b; i += gridDim.x) {
+        for (int j = threadIdx.x + blockIdx.y*blockDim.x; j < n; j += blockDim.x*gridDim.y) {
+            const float x1 = xyz1[(i*n+j)*3+0];
+            const float y1 = xyz1[(i*n+j)*3+1];
+            const float z1 = xyz1[(i*n+j)*3+2];
+            const int j2 = idx1[i*n+j];
+            const float x2 = xyz2[(i*m+j2)*3+0];
+            const float y2 = xyz2[(i*m+j2)*3+1];
+            const float z2 = xyz2[(i*m+j2)*3+2];
+            const float g = grad_dist1[i*n+j]*2;
+            atomicAdd(grad_xyz1+(i*n+j)*3+0, g*(x1-x2));
+            atomicAdd(grad_xyz1+(i*n+j)*3+1, g*(y1-y2));
+            atomicAdd(grad_xyz1+(i*n+j)*3+2, g*(z1-z2));
+            atomicAdd(grad_xyz2+(i*m+j2)*3+0, -(g*(x1-x2)));
+            atomicAdd(grad_xyz2+(i*m+j2)*3+1, -(g*(y1-y2)));
+            atomicAdd(grad_xyz2+(i*m+j2)*3+2, -(g*(z1-z2)));
+        }
+    }
+}
+
+void ChamferDistanceGradKernelLauncher(
+    const int b, const int n,
+    const float* xyz1,
+    const int m,
+    const float* xyz2,
+    const float* grad_dist1,
+    const int* idx1,
+    const float* grad_dist2,
+    const int* idx2,
+    float* grad_xyz1,
+    float* grad_xyz2)
+{
+    cudaMemset(grad_xyz1, 0, b*n*3*sizeof(float));
+    cudaMemset(grad_xyz2, 0, b*m*3*sizeof(float));
+    ChamferDistanceGradKernel<<<dim3(1, 16, 1), 256>>>(b, n, xyz1, m, xyz2, grad_dist1, idx1, grad_xyz1, grad_xyz2);
+    ChamferDistanceGradKernel<<<dim3(1, 16, 1), 256>>>(b, m, xyz2, n, xyz1, grad_dist2, idx2, grad_xyz2, grad_xyz1);
+
+ cudaError_t err = cudaGetLastError();
+ if (err != cudaSuccess)
+ printf("error in chamfer distance get grad: %s\n", cudaGetErrorString(err));
+}
diff --git a/zoo/OcCo/OcCo_Torch/chamfer_distance/chamfer_distance.py b/zoo/OcCo/OcCo_Torch/chamfer_distance/chamfer_distance.py
new file mode 100644
index 0000000..0db61e8
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/chamfer_distance/chamfer_distance.py
@@ -0,0 +1,97 @@
+# Ref: https://github.com/chrdiller/pyTorchChamferDistance
+import os, torch, torch.nn as nn
+from torch.utils.cpp_extension import load
+
+basedir = os.path.dirname(__file__)
+cd = load(name="cd", sources=[
+ os.path.join(basedir, "chamfer_distance.cpp"),
+ os.path.join(basedir, "chamfer_distance.cu")])
+
+class ChamferDistanceFunction(torch.autograd.Function):
+ @staticmethod
+ def forward(ctx, xyz1, xyz2):
+ batchsize, n, _ = xyz1.size()
+ _, m, _ = xyz2.size()
+ xyz1 = xyz1.contiguous()
+ xyz2 = xyz2.contiguous()
+ dist1 = torch.zeros(batchsize, n)
+ dist2 = torch.zeros(batchsize, m)
+
+ idx1 = torch.zeros(batchsize, n, dtype=torch.int)
+ idx2 = torch.zeros(batchsize, m, dtype=torch.int)
+
+ if not xyz1.is_cuda:
+ cd.forward(xyz1, xyz2, dist1, dist2, idx1, idx2)
+ else:
+ dist1 = dist1.cuda()
+ dist2 = dist2.cuda()
+ idx1 = idx1.cuda()
+ idx2 = idx2.cuda()
+ cd.forward_cuda(xyz1, xyz2, dist1, dist2, idx1, idx2)
+
+ ctx.save_for_backward(xyz1, xyz2, idx1, idx2)
+
+ return dist1, dist2
+
+ @staticmethod
+ def backward(ctx, graddist1, graddist2):
+ xyz1, xyz2, idx1, idx2 = ctx.saved_tensors
+
+ graddist1 = graddist1.contiguous()
+ graddist2 = graddist2.contiguous()
+
+ gradxyz1 = torch.zeros(xyz1.size())
+ gradxyz2 = torch.zeros(xyz2.size())
+
+ if not graddist1.is_cuda:
+ cd.backward(xyz1, xyz2, gradxyz1, gradxyz2, graddist1, graddist2, idx1, idx2)
+ else:
+ gradxyz1 = gradxyz1.cuda()
+ gradxyz2 = gradxyz2.cuda()
+ cd.backward_cuda(xyz1, xyz2, gradxyz1, gradxyz2, graddist1, graddist2, idx1, idx2)
+
+ return gradxyz1, gradxyz2
+
+
+class ChamferDistance(nn.Module):
+ def forward(self, xyz1, xyz2):
+ return ChamferDistanceFunction.apply(xyz1, xyz2)
+
+
+class get_model(nn.Module):
+ def __init__(self, channel=3):
+ super(get_model, self).__init__()
+
+ self.conv1 = nn.Conv1d(channel, 128, 1)
+
+ def forward(self, x):
+ _, D, N = x.size()
+ x = self.conv1(x)
+ x = x.view(-1, 128, 1).repeat(1, 1, 3)
+ return x
+
+
+if __name__ == '__main__':
+
+ import random, numpy as np
+
+ '''Sanity Check on the Consistency with TensorFlow'''
+ random.seed(100)
+ np.random.seed(100)
+
+ chamfer_dist = ChamferDistance()
+ # model = get_model().to(torch.device("cuda"))
+ # model.train()
+
+ xyz1 = np.random.randn(32, 16384, 3).astype('float32')
+ xyz2 = np.random.randn(32, 1024, 3).astype('float32')
+
+ # pdb.set_trace()
+ # pc1 = torch.randn(1, 100, 3).cuda().contiguous()
+ # pc1_new = model(pc1.transpose(2, 1))
+ # pc2 = torch.randn(1, 50, 3).cuda().contiguous()
+
+ dist1, dist2 = chamfer_dist(torch.Tensor(xyz1), torch.Tensor(xyz2))
+ loss = (torch.mean(dist1)) + (torch.mean(dist2))
+ print(loss)
+ # loss.backward()
diff --git a/zoo/OcCo/OcCo_Torch/chamfer_distance/readme.md b/zoo/OcCo/OcCo_Torch/chamfer_distance/readme.md
new file mode 100644
index 0000000..5ef6244
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/chamfer_distance/readme.md
@@ -0,0 +1,23 @@
+# Chamfer Distance for PyTorch
+
+This is an implementation of the Chamfer Distance as a module for PyTorch. It is written as a custom C++/CUDA extension. It is developed by [Chris](https://github.com/chrdiller/pyTorchChamferDistance) at TUM.
+
+As it uses PyTorch's [JIT compilation](https://pytorch.org/tutorials/advanced/cpp_extension.html), there are no additional prerequisite steps (e.g., `build` or `setup`). Simply import the module as shown below; the C++/CUDA code will be compiled on the first run, which takes a few extra seconds.
+
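+For reference, the JIT compilation in `chamfer_distance.py` boils down to a single `torch.utils.cpp_extension.load` call (a minimal sketch, assuming the two source files sit next to the module):
+
+```python
+import os
+from torch.utils.cpp_extension import load
+
+basedir = os.path.dirname(__file__)
+# compiles chamfer_distance.cpp/.cu on first import and caches the build
+cd = load(name="cd", sources=[
+    os.path.join(basedir, "chamfer_distance.cpp"),
+    os.path.join(basedir, "chamfer_distance.cu")])
+```
+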
+### Usage
+```python
+import torch
+from chamfer_distance import ChamferDistance
+chamfer_dist = ChamferDistance()
+
+# both point clouds have shape (batch_size, n_points, 3), where n_points can differ between the two
+
+dist1, dist2 = chamfer_dist(points, points_reconstructed)
+loss = (torch.mean(torch.sqrt(dist1)) + torch.mean(torch.sqrt(dist2)))/2
+```
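+
+Note that `dist1` and `dist2` hold the *squared* nearest-neighbour distances in each direction, which is why the snippet takes the square root before averaging.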
+
+### Integration
+This code has been integrated into the [Kaolin](https://github.com/NVIDIAGameWorks/kaolin) library for 3D deep learning by NVIDIAGameWorks. You may want to take a look at it if you are working on 3D deep learning ([pytorch3d](https://github.com/facebookresearch/pytorch3d) is also recommended).
+
+### Earth Mover Distance
+For an implementation of the Earth Mover Distance, we recommend [Kaichun's PyTorchEMD](https://github.com/daerduoCarey/PyTorchEMD) :)
diff --git a/zoo/OcCo/OcCo_Torch/data/readme.md b/zoo/OcCo/OcCo_Torch/data/readme.md
new file mode 100644
index 0000000..ed62802
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/data/readme.md
@@ -0,0 +1,54 @@
+## Data Setup
+
+#### OcCo
+
+We construct the training data based on ModelNet, in the same format as the [data](https://drive.google.com/drive/folders/1M_lJN14Ac1RtPtEQxNlCV9e8pom3U6Pa) provided with PCN (which is based on ShapeNet). **You can find our generated dataset based on ModelNet40 [here](https://drive.google.com/drive/folders/1gXNcARYxAh8I4UskbDprJ5fkbDSKPAsH?usp=sharing)**; it is similar to the resources used in PCN and its follow-ups (summarised [here](https://github.com/hansen7/OcCo/issues/2)).
+
+If you want to generate your own data, please check the instructions provided in `render/readme.md`.
+
+
+
+#### Classification
+
+In the classification tasks, we use the following benchmark datasets:
+
+- `ModelNet10`[[link](http://vision.princeton.edu/projects/2014/3DShapeNets/ModelNet10.zip)]
+
+- `ModelNet40`[[link](https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip)]
+
+- `ShapeNet10` and `ScanNet10` are from [PointDAN](https://github.com/canqin001/PointDAN)
+
+- `ScanObjectNN` is obtained via enquiry to the authors of the [paper](https://arxiv.org/abs/1908.04616)
+
+- `ShapeNet/ModelNet Occluded` are generated via `utils/lmdb2hdf5.py` on the OcCo pre-trained data:
+
+ ```bash
+ python lmdb2hdf5.py \
+ --partial \
+ --num_scan 10 \
+ --fname train \
+ --lmdb_path ../data/modelnet40_pcn \
+ --hdf5_path ../data/modelnet40/hdf5_partial_1024 ;
+ ```
+
+For `ModelNet40`, we noticed that the newer [source](https://shapenet.cs.stanford.edu/media/modelnet40_normal_resampled.zip) provided in PointNet++ results in performance gains, yet we stick to the original data used in PointNet and DGCNN to make a fair comparison.
+
+
+
+#### Semantic Segmentation
+
+We use the provided S3DIS [data](https://github.com/charlesq34/pointnet/blob/master/sem_seg/download_data.sh) from PointNet, which is also used in DGCNN.
+
+Please see [here](https://github.com/charlesq34/pointnet/blob/master/sem_seg/download_data.sh) for the download details. It is worth mentioning that if you download the original S3DIS and preprocess it via `utils/collect_indoor3d_data.py` and `utils/gen_indoor3d_h5.py`, you need to delete an extra symbol in the raw file ([reference](https://github.com/charlesq34/pointnet/issues/45)).
+
+
+
+#### Part Segmentation
+
+We use the data provided by PointNet, which is also used in DGCNN.
+
+
+
+#### Jigsaw Puzzles
+
+Please check `utils/3DPC_Data_Gen.py` for details, as well as the original paper.
diff --git a/zoo/OcCo/OcCo_Torch/data/shapenet_names.json b/zoo/OcCo/OcCo_Torch/data/shapenet_names.json
new file mode 100644
index 0000000..9f07300
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/data/shapenet_names.json
@@ -0,0 +1,10 @@
+{
+ "02691156": 0,
+ "02933112": 1,
+ "02958343": 2,
+ "03001627": 3,
+ "03636649": 4,
+ "04256520": 5,
+ "04379243": 6,
+ "04530566": 7
+ }
diff --git a/zoo/OcCo/OcCo_Torch/docker/.dockerignore b/zoo/OcCo/OcCo_Torch/docker/.dockerignore
new file mode 100644
index 0000000..814dc9b
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/docker/.dockerignore
@@ -0,0 +1,3 @@
+*/data
+*/log
+*/__pycache__
diff --git a/zoo/OcCo/OcCo_Torch/docker/Dockerfile_Torch b/zoo/OcCo/OcCo_Torch/docker/Dockerfile_Torch
new file mode 100644
index 0000000..e5397a7
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/docker/Dockerfile_Torch
@@ -0,0 +1,29 @@
+# https://github.com/pytorch/pytorch/issues/31171#issuecomment-565887573
+FROM pytorch/pytorch:1.3-cuda10.1-cudnn7-devel
+
+WORKDIR /workspace/OcCo_Torch
+RUN apt-get update
+RUN apt-get -y install apt-file apt-utils
+RUN apt-file update
+RUN apt-get -y install build-essential libcap-dev vim screen
+COPY ./Requirements_Torch.txt /workspace/OcCo_Torch
+RUN pip install -r Requirements_Torch.txt
+
+RUN mkdir /home/hcw
+RUN chmod -R 777 /home/hcw
+RUN chmod 777 /usr/bin
+RUN chmod 777 /bin
+RUN chmod 777 /usr/local/
+RUN apt-get -y update
+RUN apt-get -y install libgl1-mesa-glx
+
+# RUN apt-get -y install gcc
+# RUN apt-get -y install g++
+# RUN apt-get -y upgrade libstdc++6
+
+# Optional: Install the TensorRT runtime (must be after CUDA install)
+# RUN apt update
+# RUN apt -y install libnvinfer4=4.1.2-1+cuda9.0
+
+RUN useradd hcw
+WORKDIR /workspace/OcCo_Torch
diff --git a/zoo/OcCo/OcCo_Torch/docker/build_docker_torch.sh b/zoo/OcCo/OcCo_Torch/docker/build_docker_torch.sh
new file mode 100644
index 0000000..c24d176
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/docker/build_docker_torch.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+docker build ../ --rm -t occo_torch -f ./Dockerfile_Torch
diff --git a/zoo/OcCo/OcCo_Torch/docker/launch_docker_torch.sh b/zoo/OcCo/OcCo_Torch/docker/launch_docker_torch.sh
new file mode 100644
index 0000000..75fb524
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/docker/launch_docker_torch.sh
@@ -0,0 +1,15 @@
+#!/bin/bash
+
+docker run -it \
+ --rm \
+ --shm-size=1g \
+ --runtime=nvidia \
+ --ulimit memlock=-1 \
+ --ulimit stack=67108864 \
+ -v "$(dirname $PWD):/workspace/OcCo_Torch" \
+ -v "/scratch/hw501/data_source/:/scratch/hw501/data_source/" \
+ -v "/scratches/mario/hw501/data_source:/scratches/mario/hw501/data_source/" \
+ -v "/scratches/weatherwax_2/hwang/OcCo/data/:/scratches/weatherwax_2/hwang/OcCo/data/" \
+ occo_torch bash
+
+# -v + any external directories if you are using them
diff --git a/zoo/OcCo/OcCo_Torch/models/dgcnn_cls.py b/zoo/OcCo/OcCo_Torch/models/dgcnn_cls.py
new file mode 100644
index 0000000..f2155d1
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/dgcnn_cls.py
@@ -0,0 +1,97 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/WangYueFt/dgcnn/blob/master/pytorch/model.py
+
+import torch, torch.nn as nn, torch.nn.functional as F
+from dgcnn_util import get_graph_feature
+
+class get_model(nn.Module):
+
+ def __init__(self, args, num_channel=3, num_class=40, **kwargs):
+ super(get_model, self).__init__()
+ self.args = args
+ self.bn1 = nn.BatchNorm2d(64)
+ self.bn2 = nn.BatchNorm2d(64)
+ self.bn3 = nn.BatchNorm2d(128)
+ self.bn4 = nn.BatchNorm2d(256)
+ self.bn5 = nn.BatchNorm1d(args.emb_dims)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(num_channel*2, 64, kernel_size=1, bias=False),
+ self.bn1,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv2 = nn.Sequential(nn.Conv2d(64*2, 64, kernel_size=1, bias=False),
+ self.bn2,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv3 = nn.Sequential(nn.Conv2d(64*2, 128, kernel_size=1, bias=False),
+ self.bn3,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv4 = nn.Sequential(nn.Conv2d(128*2, 256, kernel_size=1, bias=False),
+ self.bn4,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv5 = nn.Sequential(nn.Conv1d(512, args.emb_dims, kernel_size=1, bias=False),
+ self.bn5,
+ nn.LeakyReLU(negative_slope=0.2))
+
+ self.linear1 = nn.Linear(args.emb_dims*2, 512, bias=False)
+ self.bn6 = nn.BatchNorm1d(512)
+ self.dp1 = nn.Dropout(p=args.dropout)
+ self.linear2 = nn.Linear(512, 256)
+ self.bn7 = nn.BatchNorm1d(256)
+ self.dp2 = nn.Dropout(p=args.dropout)
+ self.linear3 = nn.Linear(256, num_class)
+
+ def forward(self, x):
+ batch_size = x.size()[0]
+ x = get_graph_feature(x, k=self.args.k)
+ x = self.conv1(x)
+ x1 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x1, k=self.args.k)
+ x = self.conv2(x)
+ x2 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x2, k=self.args.k)
+ x = self.conv3(x)
+ x3 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x3, k=self.args.k)
+ x = self.conv4(x)
+ x4 = x.max(dim=-1, keepdim=False)[0]
+
+ x = torch.cat((x1, x2, x3, x4), dim=1)
+
+ x = self.conv5(x)
+ x1 = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
+ x2 = F.adaptive_avg_pool1d(x, 1).view(batch_size, -1)
+ x = torch.cat((x1, x2), 1)
+
+ x = F.leaky_relu(self.bn6(self.linear1(x)), negative_slope=0.2)
+ x = self.dp1(x)
+ x = F.leaky_relu(self.bn7(self.linear2(x)), negative_slope=0.2)
+ x = self.dp2(x)
+ x = self.linear3(x)
+ return x
+
+
+class get_loss(torch.nn.Module):
+ def __init__(self):
+ super(get_loss, self).__init__()
+
+ @staticmethod
+ def cal_loss(pred, gold, smoothing=True):
+ """Calculate cross entropy loss, apply label smoothing if needed."""
+ gold = gold.contiguous().view(-1)
+
+ if smoothing:
+ eps = 0.2
+ n_class = pred.size()[1]
+ one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1) # (num_points, num_class)
+ one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)
+ log_prb = F.log_softmax(pred, dim=1)
+ loss = -(one_hot * log_prb).sum(dim=1).mean() # ~ F.nll_loss(log_prb, gold)
+ else:
+ loss = F.cross_entropy(pred, gold, reduction='mean')
+
+ return loss
+
+ def forward(self, pred, target):
+ return self.cal_loss(pred, target, smoothing=True)
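The smoothed loss above replaces the one-hot target with a soft distribution: the true class keeps `1 - eps` of the mass and every other class receives `eps / (n_class - 1)`. A minimal standalone sketch of the same computation (the helper name and test values are illustrative, not part of the repo); note this differs slightly from `F.cross_entropy(..., label_smoothing=eps)`, which spreads `eps` over all classes including the target:

```python
import torch
import torch.nn.functional as F

def smoothed_ce(pred, gold, eps=0.2):
    # same construction as get_loss.cal_loss above
    n_class = pred.size(1)
    one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)
    one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)
    return -(one_hot * F.log_softmax(pred, dim=1)).sum(dim=1).mean()

pred = torch.randn(8, 40)          # logits: 8 samples, 40 ModelNet classes
gold = torch.randint(0, 40, (8,))  # ground-truth labels
print(smoothed_ce(pred, gold))      # smoothed loss
print(F.cross_entropy(pred, gold))  # plain CE, the eps -> 0 limit
```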
diff --git a/zoo/OcCo/OcCo_Torch/models/dgcnn_jigsaw.py b/zoo/OcCo/OcCo_Torch/models/dgcnn_jigsaw.py
new file mode 100644
index 0000000..714fcef
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/dgcnn_jigsaw.py
@@ -0,0 +1,112 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/AnTao97/dgcnn.pytorch/blob/master/model.py
+
+import torch, torch.nn as nn, torch.nn.functional as F
+from dgcnn_util import get_graph_feature
+
+
+class get_model(nn.Module):
+ def __init__(self, args, num_class, **kwargs):
+ super(get_model, self).__init__()
+ self.args = args
+ self.k = args.k
+
+ self.bn1 = nn.BatchNorm2d(64)
+ self.bn2 = nn.BatchNorm2d(64)
+ self.bn3 = nn.BatchNorm2d(64)
+ self.bn4 = nn.BatchNorm2d(64)
+ self.bn5 = nn.BatchNorm2d(64)
+ self.bn6 = nn.BatchNorm1d(args.emb_dims)
+ self.bn7 = nn.BatchNorm1d(512)
+ self.bn8 = nn.BatchNorm1d(256)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=False),
+ self.bn1,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv2 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=False),
+ self.bn2,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv3 = nn.Sequential(nn.Conv2d(64*2, 64, kernel_size=1, bias=False),
+ self.bn3,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv4 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=False),
+ self.bn4,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv5 = nn.Sequential(nn.Conv2d(64*2, 64, kernel_size=1, bias=False),
+ self.bn5,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv6 = nn.Sequential(nn.Conv1d(192, args.emb_dims, kernel_size=1, bias=False),
+ self.bn6,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv7 = nn.Sequential(nn.Conv1d(1216, 512, kernel_size=1, bias=False),
+ self.bn7,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv8 = nn.Sequential(nn.Conv1d(512, 256, kernel_size=1, bias=False),
+ self.bn8,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.dp1 = nn.Dropout(p=args.dropout)
+ self.conv9 = nn.Conv1d(256, num_class, kernel_size=1, bias=False)
+
+ def forward(self, x):
+
+ batch_size, _, num_points = x.size()
+
+ x = get_graph_feature(x, self.k)
+ x = self.conv1(x)
+ x = self.conv2(x)
+ x1 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x1, k=self.k)
+ x = self.conv3(x)
+ x = self.conv4(x)
+ x2 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x2, k=self.k)
+ x = self.conv5(x)
+ x3 = x.max(dim=-1, keepdim=False)[0]
+
+ x = torch.cat((x1, x2, x3), dim=1)
+
+ x = self.conv6(x)
+ x = x.max(dim=-1, keepdim=True)[0]
+
+ x = x.repeat(1, 1, num_points)
+ x = torch.cat((x, x1, x2, x3), dim=1)
+
+ x = self.conv7(x)
+ x = self.conv8(x)
+ x = self.dp1(x)
+ x = self.conv9(x)
+ # x = F.softmax(x, dim=1)
+ # x = F.log_softmax(x, dim=1)
+        # on adding a (log-)softmax here, see:
+        # https://towardsdatascience.com/cuda-error-device-side-assert-triggered-c6ae1c8fa4c3
+        # https://github.com/pytorch/pytorch/issues/1204
+ return x
+
+
+class get_loss(torch.nn.Module):
+ def __init__(self):
+ super(get_loss, self).__init__()
+
+ @staticmethod
+ def cal_loss(pred, gold, smoothing=False):
+ """Calculate cross entropy loss, apply label smoothing if needed."""
+
+ gold = gold.contiguous().view(-1)
+
+ if smoothing:
+ eps = 0.2
+ n_class = pred.size(1)
+ one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)
+ one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)
+ log_prb = F.log_softmax(pred, dim=1)
+ loss = -(one_hot * log_prb).sum(dim=1).mean() # ~ F.nll_loss(log_prb, gold)
+ else:
+ loss = F.cross_entropy(pred, gold, reduction='mean')
+
+ return loss
+
+ def forward(self, pred, target):
+ return self.cal_loss(pred, target, smoothing=False)
diff --git a/zoo/OcCo/OcCo_Torch/models/dgcnn_occo.py b/zoo/OcCo/OcCo_Torch/models/dgcnn_occo.py
new file mode 100644
index 0000000..9aa92c8
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/dgcnn_occo.py
@@ -0,0 +1,159 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/wentaoyuan/pcn/blob/master/models/pcn_cd.py
+# Ref: https://github.com/AnTao97/UnsupervisedPointCloudReconstruction/blob/master/model.py
+
+import sys, torch, itertools, numpy as np, torch.nn as nn, torch.nn.functional as F
+from dgcnn_util import get_graph_feature
+sys.path.append("../chamfer_distance")
+from chamfer_distance import ChamferDistance
+
+
+class get_model(nn.Module):
+ def __init__(self, **kwargs):
+ super(get_model, self).__init__()
+
+ self.grid_size = 4
+ self.grid_scale = 0.05
+ self.num_coarse = 1024
+ self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+ self.__dict__.update(kwargs) # to update args, num_coarse, grid_size, grid_scale
+
+ self.num_fine = self.grid_size ** 2 * self.num_coarse # 16384
+ self.meshgrid = [[-self.grid_scale, self.grid_scale, self.grid_size],
+ [-self.grid_scale, self.grid_scale, self.grid_size]]
+
+ self.bn1 = nn.BatchNorm2d(64)
+ self.bn2 = nn.BatchNorm2d(64)
+ self.bn3 = nn.BatchNorm2d(128)
+ self.bn4 = nn.BatchNorm2d(256)
+ self.bn5 = nn.BatchNorm1d(self.args.emb_dims)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=False),
+ self.bn1,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv2 = nn.Sequential(nn.Conv2d(64*2, 64, kernel_size=1, bias=False),
+ self.bn2,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv3 = nn.Sequential(nn.Conv2d(64*2, 128, kernel_size=1, bias=False),
+ self.bn3,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv4 = nn.Sequential(nn.Conv2d(128*2, 256, kernel_size=1, bias=False),
+ self.bn4,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv5 = nn.Sequential(nn.Conv1d(512, self.args.emb_dims, kernel_size=1, bias=False),
+ self.bn5,
+ nn.LeakyReLU(negative_slope=0.2))
+
+ self.folding1 = nn.Sequential(
+ nn.Linear(self.args.emb_dims, 1024),
+ nn.ReLU(),
+ nn.Linear(1024, 1024),
+ nn.ReLU(),
+ nn.Linear(1024, self.num_coarse * 3))
+
+ self.folding2 = nn.Sequential(
+ nn.Conv1d(1024+2+3, 512, 1),
+ nn.ReLU(),
+ nn.Conv1d(512, 512, 1),
+ nn.ReLU(),
+ nn.Conv1d(512, 3, 1))
+
+ def build_grid(self, batch_size):
+
+ x, y = np.linspace(*self.meshgrid[0]), np.linspace(*self.meshgrid[1])
+ points = np.array(list(itertools.product(x, y)))
+ points = np.repeat(points[np.newaxis, ...], repeats=batch_size, axis=0)
+
+ return torch.tensor(points).float().to(self.device)
+
+ def tile(self, tensor, multiples):
+ # substitute for tf.tile:
+ # https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/tile
+ # Ref: https://discuss.pytorch.org/t/how-to-tile-a-tensor/13853/3
+ def tile_single_axis(a, dim, n_tile):
+ init_dim = a.size(dim)
+ repeat_idx = [1] * a.dim()
+ repeat_idx[dim] = n_tile
+ a = a.repeat(*repeat_idx)
+ order_index = torch.Tensor(
+ np.concatenate([init_dim * np.arange(n_tile) + i for i in range(init_dim)])).long()
+ return torch.index_select(a, dim, order_index.to(self.device))
+
+ for dim, n_tile in enumerate(multiples):
+ if n_tile == 1:
+ continue
+ tensor = tile_single_axis(tensor, dim, n_tile)
+ return tensor
+
+ @staticmethod
+ def expand_dims(tensor, dim):
+ # substitute for tf.expand_dims:
+ # https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/expand_dims
+ return tensor.unsqueeze(-1).transpose(-1, dim)
+
+ def forward(self, x):
+
+ batch_size = x.size()[0]
+ x = get_graph_feature(x, k=self.args.k)
+ x = self.conv1(x)
+ x1 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x1, k=self.args.k)
+ x = self.conv2(x)
+ x2 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x2, k=self.args.k)
+ x = self.conv3(x)
+ x3 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x3, k=self.args.k)
+ x = self.conv4(x)
+ x4 = x.max(dim=-1, keepdim=False)[0]
+
+ x = torch.cat((x1, x2, x3, x4), dim=1)
+ x = self.conv5(x)
+ feature = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
+ # x1 = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
+ # x2 = F.adaptive_avg_pool1d(x, 1).view(batch_size, -1)
+ # feature = torch.cat((x1, x2), 1)
+
+ coarse = self.folding1(feature)
+ coarse = coarse.view(-1, self.num_coarse, 3)
+
+ grid = self.build_grid(x.size()[0])
+ grid_feat = grid.repeat(1, self.num_coarse, 1)
+
+ point_feat = self.tile(self.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ point_feat = point_feat.view([-1, self.num_fine, 3])
+
+ global_feat = self.tile(self.expand_dims(feature, 1), [1, self.num_fine, 1])
+ feat = torch.cat([grid_feat, point_feat, global_feat], dim=2)
+
+ center = self.tile(self.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ center = center.view([-1, self.num_fine, 3])
+
+ fine = self.folding2(feat.transpose(2, 1)).transpose(2, 1) + center
+
+ return coarse, fine
+
+
+class get_loss(nn.Module):
+ def __init__(self):
+ super(get_loss, self).__init__()
+
+ @staticmethod
+ def dist_cd(pc1, pc2):
+ chamfer_dist = ChamferDistance()
+ dist1, dist2 = chamfer_dist(pc1, pc2)
+ return (torch.mean(torch.sqrt(dist1)) + torch.mean(torch.sqrt(dist2)))/2
+
+ def forward(self, coarse, fine, gt, alpha):
+ return self.dist_cd(coarse, gt) + alpha * self.dist_cd(fine, gt)
+
+
+if __name__ == '__main__':
+
+    # minimal smoke test; get_model needs an args namespace carrying emb_dims and k
+    from types import SimpleNamespace
+    model = get_model(args=SimpleNamespace(emb_dims=1024, k=20))
+    print(model)
+    input_pc = torch.rand(7, 3, 1024)
+    x = model(input_pc)
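`get_loss` above relies on the compiled `chamfer_distance` CUDA extension. For reference, here is a pure-PyTorch sketch of the same symmetric Chamfer distance on `(B, N, 3)` clouds (slow but dependency-free; the function name is illustrative):

```python
import torch

def chamfer_sqdists(pc1, pc2):
    d = torch.cdist(pc1, pc2) ** 2  # pairwise squared distances, (B, N, M)
    dist1 = d.min(dim=2)[0]         # nearest neighbour in pc2 for each point of pc1
    dist2 = d.min(dim=1)[0]         # nearest neighbour in pc1 for each point of pc2
    return dist1, dist2

coarse, gt = torch.rand(2, 1024, 3), torch.rand(2, 2048, 3)
d1, d2 = chamfer_sqdists(coarse, gt)
cd = (torch.sqrt(d1).mean() + torch.sqrt(d2).mean()) / 2  # matches dist_cd above
print(cd)
```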
diff --git a/zoo/OcCo/OcCo_Torch/models/dgcnn_partseg.py b/zoo/OcCo/OcCo_Torch/models/dgcnn_partseg.py
new file mode 100644
index 0000000..5909334
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/dgcnn_partseg.py
@@ -0,0 +1,135 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/AnTao97/dgcnn.pytorch/blob/master/model.py
+# Ref: https://github.com/WangYueFt/dgcnn/blob/master/tensorflow/part_seg/train_multi_gpu.py
+
+import pdb, torch, torch.nn as nn, torch.nn.functional as F
+from dgcnn_util import get_graph_feature, T_Net
+
+
+class get_model(nn.Module):
+ def __init__(self, args, part_num=50, num_channel=3, **kwargs):
+ super(get_model, self).__init__()
+ self.k = args.k
+ self.part_num = part_num
+ self.transform_net = T_Net(channel=num_channel)
+
+ self.bn1 = nn.BatchNorm2d(64)
+ self.bn2 = nn.BatchNorm2d(64)
+ self.bn3 = nn.BatchNorm2d(64)
+ self.bn4 = nn.BatchNorm2d(64)
+ self.bn5 = nn.BatchNorm2d(64)
+ self.bn6 = nn.BatchNorm1d(args.emb_dims)
+ self.bn7 = nn.BatchNorm1d(64)
+ self.bn8 = nn.BatchNorm1d(256)
+ self.bn9 = nn.BatchNorm1d(256)
+ self.bn10 = nn.BatchNorm1d(128)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(num_channel*2, 64, kernel_size=1, bias=False),
+ self.bn1,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv2 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=False),
+ self.bn2,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv3 = nn.Sequential(nn.Conv2d(64 * 2, 64, kernel_size=1, bias=False),
+ self.bn3,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv4 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=False),
+ self.bn4,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv5 = nn.Sequential(nn.Conv2d(64 * 2, 64, kernel_size=1, bias=False),
+ self.bn5,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv6 = nn.Sequential(nn.Conv1d(192, args.emb_dims, kernel_size=1, bias=False),
+ self.bn6,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv7 = nn.Sequential(nn.Conv1d(16, 64, kernel_size=1, bias=False),
+ self.bn7,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv8 = nn.Sequential(nn.Conv1d(1280, 256, kernel_size=1, bias=False),
+ self.bn8,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.dp1 = nn.Dropout(p=args.dropout)
+ self.conv9 = nn.Sequential(nn.Conv1d(256, 256, kernel_size=1, bias=False),
+ self.bn9,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.dp2 = nn.Dropout(p=args.dropout)
+ self.conv10 = nn.Sequential(nn.Conv1d(256, 128, kernel_size=1, bias=False),
+ self.bn10,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv11 = nn.Conv1d(128, self.part_num, kernel_size=1, bias=False)
+
+ def forward(self, x, l):
+ B, D, N = x.size()
+
+ x0 = get_graph_feature(x, k=self.k)
+ t = self.transform_net(x0)
+ x = x.transpose(2, 1)
+ if D > 3:
+ x, feature = x.split(3, dim=2)
+ x = torch.bmm(x, t)
+ if D > 3:
+ x = torch.cat([x, feature], dim=2)
+ x = x.transpose(2, 1)
+
+ x = get_graph_feature(x, k=self.k)
+ x = self.conv1(x)
+ x = self.conv2(x)
+ x1 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x1, k=self.k)
+ x = self.conv3(x)
+ x = self.conv4(x)
+ x2 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x2, k=self.k)
+ x = self.conv5(x)
+ x3 = x.max(dim=-1, keepdim=False)[0]
+
+ x = torch.cat((x1, x2, x3), dim=1)
+
+ x = self.conv6(x)
+ x = x.max(dim=-1, keepdim=True)[0]
+
+ l = l.view(B, -1, 1)
+ l = self.conv7(l)
+
+ x = torch.cat((x, l), dim=1)
+ x = x.repeat(1, 1, N)
+
+ x = torch.cat((x, x1, x2, x3), dim=1)
+
+ x = self.conv8(x)
+ x = self.dp1(x)
+ x = self.conv9(x)
+ x = self.dp2(x)
+ x = self.conv10(x)
+ x = self.conv11(x)
+
+ return x.permute(0, 2, 1).contiguous()
+
+
+class get_loss(nn.Module):
+ def __init__(self):
+ super(get_loss, self).__init__()
+
+ @staticmethod
+ def cal_loss(pred, gold, smoothing=False):
+ """Calculate cross entropy loss, apply label smoothing if needed."""
+
+ gold = gold.contiguous().view(-1)
+
+ if smoothing:
+ eps = 0.2
+ n_class = pred.size()[1]
+ one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)
+ one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)
+ log_prb = F.log_softmax(pred, dim=1)
+ loss = -(one_hot * log_prb).sum(dim=1).mean() # ~ F.nll_loss(log_prb, gold)
+ else:
+ loss = F.cross_entropy(pred, gold, reduction='mean')
+
+ return loss
+
+ def forward(self, pred, target):
+
+ return self.cal_loss(pred, target, smoothing=False)
diff --git a/zoo/OcCo/OcCo_Torch/models/dgcnn_semseg.py b/zoo/OcCo/OcCo_Torch/models/dgcnn_semseg.py
new file mode 100644
index 0000000..fc23e1c
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/dgcnn_semseg.py
@@ -0,0 +1,107 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/AnTao97/dgcnn.pytorch/blob/master/model.py
+# Ref: https://github.com/WangYueFt/dgcnn/blob/master/tensorflow/sem_seg/train.py
+
+import torch, torch.nn as nn, torch.nn.functional as F
+from dgcnn_util import get_graph_feature
+
+
+class get_model(nn.Module):
+ def __init__(self, args, num_class, num_channel=9, **kwargs):
+ super(get_model, self).__init__()
+ self.k = args.k
+
+ self.bn1 = nn.BatchNorm2d(64)
+ self.bn2 = nn.BatchNorm2d(64)
+ self.bn3 = nn.BatchNorm2d(64)
+ self.bn4 = nn.BatchNorm2d(64)
+ self.bn5 = nn.BatchNorm2d(64)
+ self.bn6 = nn.BatchNorm1d(args.emb_dims)
+ self.bn7 = nn.BatchNorm1d(512)
+ self.bn8 = nn.BatchNorm1d(256)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(num_channel*2, 64, kernel_size=1, bias=False),
+ self.bn1,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv2 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=False),
+ self.bn2,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv3 = nn.Sequential(nn.Conv2d(64*2, 64, kernel_size=1, bias=False),
+ self.bn3,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv4 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=False),
+ self.bn4,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv5 = nn.Sequential(nn.Conv2d(64*2, 64, kernel_size=1, bias=False),
+ self.bn5,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv6 = nn.Sequential(nn.Conv1d(192, args.emb_dims, kernel_size=1, bias=False),
+ self.bn6,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv7 = nn.Sequential(nn.Conv1d(1216, 512, kernel_size=1, bias=False),
+ self.bn7,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv8 = nn.Sequential(nn.Conv1d(512, 256, kernel_size=1, bias=False),
+ self.bn8,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.dp1 = nn.Dropout(p=args.dropout)
+ self.conv9 = nn.Conv1d(256, num_class, kernel_size=1, bias=False)
+
+ def forward(self, x):
+ batch_size, _, num_points = x.size()
+
+ x = get_graph_feature(x, self.k, extra_dim=True)
+ x = self.conv1(x)
+ x = self.conv2(x)
+ x1 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x1, self.k)
+ x = self.conv3(x)
+ x = self.conv4(x)
+ x2 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x2, self.k)
+ x = self.conv5(x)
+ x3 = x.max(dim=-1, keepdim=False)[0]
+
+ x = torch.cat((x1, x2, x3), dim=1)
+
+ x = self.conv6(x)
+ x = x.max(dim=-1, keepdim=True)[0]
+
+ x = x.repeat(1, 1, num_points)
+ x = torch.cat((x, x1, x2, x3), dim=1)
+
+ x = self.conv7(x)
+ x = self.conv8(x)
+ x = self.dp1(x)
+ x = self.conv9(x)
+
+ return x.permute(0, 2, 1).contiguous()
+
+
+class get_loss(nn.Module):
+ def __init__(self):
+ super(get_loss, self).__init__()
+
+ @staticmethod
+ def cal_loss(pred, gold, smoothing=False):
+ """Calculate cross entropy loss, apply label smoothing if needed."""
+
+ gold = gold.contiguous().view(-1)
+
+ if smoothing:
+ eps = 0.2
+ n_class = pred.size()[1]
+ one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)
+ one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)
+ log_prb = F.log_softmax(pred, dim=1)
+ loss = -(one_hot * log_prb).sum(dim=1).mean() # ~ F.nll_loss(log_prb, gold)
+ else:
+ loss = F.cross_entropy(pred, gold, reduction='mean')
+
+ return loss
+
+ def forward(self, pred, target):
+
+ return self.cal_loss(pred, target, smoothing=False)
diff --git a/zoo/OcCo/OcCo_Torch/models/dgcnn_util.py b/zoo/OcCo/OcCo_Torch/models/dgcnn_util.py
new file mode 100644
index 0000000..0be8e02
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/dgcnn_util.py
@@ -0,0 +1,137 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/WangYueFt/dgcnn/blob/master/pytorch/model.py
+
+import torch, torch.nn as nn, torch.nn.init as init, torch.nn.functional as F
+
+def knn(x, k):
+ inner = -2 * torch.matmul(x.transpose(2, 1), x)
+ xx = torch.sum(x ** 2, dim=1, keepdim=True)
+ pairwise_distance = -xx - inner - xx.transpose(2, 1)
+ idx = pairwise_distance.topk(k=k, dim=-1)[1]
+ return idx
+
+
+def get_graph_feature(x, k=20, idx=None, extra_dim=False):
+
+ batch_size, num_dims, num_points = x.size()
+ x = x.view(batch_size, -1, num_points)
+ if idx is None:
+ if extra_dim is False:
+ idx = knn(x, k=k)
+ else:
+ idx = knn(x[:, 6:], k=k) # idx = knn(x[:, :3], k=k)
+
+ device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+ idx_base = torch.arange(0, batch_size, device=device).view(-1, 1, 1) * num_points
+ idx += idx_base
+ idx = idx.view(-1)
+
+ x = x.transpose(2, 1).contiguous()
+ feature = x.view(batch_size*num_points, -1)[idx, :]
+ feature = feature.view(batch_size, num_points, k, num_dims)
+ x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)
+ feature = torch.cat((feature-x, x), dim=3).permute(0, 3, 1, 2)
+
+ return feature # (batch_size, 2 * num_dims, num_points, k)
+
+
+class T_Net(nn.Module):
+ """Similar to STN3d/STNkd in pointnet_util.py,
+ but with leaky relu and zero bias conv1d"""
+ def __init__(self, channel=3, k=3):
+ super(T_Net, self).__init__()
+ self.k = k
+ self.bn1 = nn.BatchNorm2d(64)
+ self.bn2 = nn.BatchNorm2d(128)
+ self.bn3 = nn.BatchNorm1d(1024)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(channel*2, 64, kernel_size=1, bias=False),
+ self.bn1,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv2 = nn.Sequential(nn.Conv2d(64, 128, kernel_size=1, bias=False),
+ self.bn2,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv3 = nn.Sequential(nn.Conv1d(128, 1024, kernel_size=1, bias=False),
+ self.bn3,
+ nn.LeakyReLU(negative_slope=0.2))
+
+ self.linear1 = nn.Linear(1024, 512, bias=False)
+ self.bn4 = nn.BatchNorm1d(512)
+ self.linear2 = nn.Linear(512, 256, bias=False)
+ self.bn5 = nn.BatchNorm1d(256)
+
+ self.transform = nn.Linear(256, self.k**2)
+ init.constant_(self.transform.weight, 0)
+ init.eye_(self.transform.bias.view(self.k, self.k))
+
+ def forward(self, x):
+ B = x.size(0)
+
+ x = self.conv1(x)
+ x = self.conv2(x)
+ x = x.max(dim=-1, keepdim=False)[0]
+
+ x = self.conv3(x)
+ x = x.max(dim=-1, keepdim=False)[0]
+
+ x = F.leaky_relu(self.bn4(self.linear1(x)), negative_slope=0.2)
+ x = F.leaky_relu(self.bn5(self.linear2(x)), negative_slope=0.2)
+
+ x = self.transform(x)
+ x = x.view(B, self.k, self.k)
+
+ return x
+
+
+class encoder(nn.Module):
+ def __init__(self, channel=3, **kwargs):
+ super(encoder, self).__init__()
+ self.bn1 = nn.BatchNorm2d(64)
+ self.bn2 = nn.BatchNorm2d(64)
+ self.bn3 = nn.BatchNorm2d(128)
+ self.bn4 = nn.BatchNorm2d(256)
+ self.bn5 = nn.BatchNorm1d(1024)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(channel*2, 64, kernel_size=1, bias=False),
+ self.bn1,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv2 = nn.Sequential(nn.Conv2d(64 * 2, 64, kernel_size=1, bias=False),
+ self.bn2,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv3 = nn.Sequential(nn.Conv2d(64 * 2, 128, kernel_size=1, bias=False),
+ self.bn3,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv4 = nn.Sequential(nn.Conv2d(128 * 2, 256, kernel_size=1, bias=False),
+ self.bn4,
+ nn.LeakyReLU(negative_slope=0.2))
+ self.conv5 = nn.Sequential(nn.Conv1d(256 * 2, 1024, kernel_size=1, bias=False),
+ self.bn5,
+ nn.LeakyReLU(negative_slope=0.2))
+
+ def forward(self, x):
+ batch_size = x.size()[0]
+ x = get_graph_feature(x, k=20)
+ x = self.conv1(x)
+ x1 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x1, k=20)
+ x = self.conv2(x)
+ x2 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x2, k=20)
+ x = self.conv3(x)
+ x3 = x.max(dim=-1, keepdim=False)[0]
+
+ x = get_graph_feature(x3, k=20)
+ x = self.conv4(x)
+ x4 = x.max(dim=-1, keepdim=False)[0]
+
+ x = torch.cat((x1, x2, x3, x4), dim=1)
+
+ x = self.conv5(x)
+ x1 = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
+ # x2 = F.adaptive_avg_pool1d(x, 1).view(batch_size, -1)
+ # x = torch.cat((x1, x2), 1)
+
+ return x1
+
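`get_graph_feature` pairs every point with its k nearest neighbours and stacks the edge feature `[x_j - x_i, x_i]`, doubling the channel count. A quick shape walkthrough (a sketch; run from within `models/` so the import resolves, and the sizes are arbitrary):

```python
import torch
from dgcnn_util import knn, get_graph_feature

B, C, N, k = 4, 3, 1024, 20
x = torch.rand(B, C, N)
idx = knn(x, k)                   # (B, N, k): neighbour indices per point
feat = get_graph_feature(x, k=k)  # (B, 2*C, N, k): [x_j - x_i, x_i] per edge
print(idx.shape, feat.shape)      # torch.Size([4, 1024, 20]) torch.Size([4, 6, 1024, 20])
```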
diff --git a/zoo/OcCo/OcCo_Torch/models/pcn_cls.py b/zoo/OcCo/OcCo_Torch/models/pcn_cls.py
new file mode 100644
index 0000000..5a442a4
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/pcn_cls.py
@@ -0,0 +1,45 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import pdb, torch, torch.nn as nn, torch.nn.functional as F
+from pcn_util import PCNEncoder
+
+class get_model(nn.Module):
+ def __init__(self, num_class=40, num_channel=3, **kwargs):
+ super(get_model, self).__init__()
+ self.feat = PCNEncoder(global_feat=True, channel=num_channel)
+ self.fc1 = nn.Linear(1024, 512)
+ self.fc2 = nn.Linear(512, 256)
+ self.fc3 = nn.Linear(256, num_class)
+
+ self.dp1 = nn.Dropout(p=0.3)
+ self.bn1 = nn.BatchNorm1d(512)
+ self.dp2 = nn.Dropout(p=0.3)
+ self.bn2 = nn.BatchNorm1d(256)
+
+ def forward(self, x):
+ x = self.feat(x)
+ x = F.relu(self.bn1(self.fc1(x)))
+ x = self.dp1(x)
+
+ x = F.relu(self.bn2(self.fc2(x)))
+ x = self.dp2(x)
+
+ x = self.fc3(x)
+ x = F.log_softmax(x, dim=1)
+ return x
+
+
+class get_loss(nn.Module):
+ def __init__(self):
+ super(get_loss, self).__init__()
+
+ def forward(self, pred, target):
+ loss = F.nll_loss(pred, target)
+ return loss
+
+
+if __name__ == '__main__':
+
+ model = get_model()
+ xyz = torch.rand(12, 3, 1024)
+ x = model(xyz)
diff --git a/zoo/OcCo/OcCo_Torch/models/pcn_jigsaw.py b/zoo/OcCo/OcCo_Torch/models/pcn_jigsaw.py
new file mode 100644
index 0000000..896e89e
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/pcn_jigsaw.py
@@ -0,0 +1,45 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import torch, torch.nn as nn, torch.nn.functional as F
+from pcn_util import PCNEncoder
+
+
+class get_model(nn.Module):
+ def __init__(self, num_class, num_channel=3, **kwargs):
+ super(get_model, self).__init__()
+ self.num_class = num_class
+ self.feat = PCNEncoder(global_feat=False, channel=num_channel)
+ self.conv1 = nn.Conv1d(1280, 512, 1)
+ self.conv2 = nn.Conv1d(512, 256, 1)
+ self.conv3 = nn.Conv1d(256, 128, 1)
+ self.conv4 = nn.Conv1d(128, self.num_class, 1)
+ self.bn1 = nn.BatchNorm1d(512)
+ self.bn2 = nn.BatchNorm1d(256)
+ self.bn3 = nn.BatchNorm1d(128)
+
+ def forward(self, x):
+ batch_size, _, num_points = x.size()
+ x = self.feat(x)
+ x = F.relu(self.bn1(self.conv1(x)))
+ x = F.relu(self.bn2(self.conv2(x)))
+ x = F.relu(self.bn3(self.conv3(x)))
+ x = self.conv4(x)
+ x = x.transpose(2, 1).contiguous()
+ x = F.log_softmax(x.view(-1, self.num_class), dim=-1)
+ x = x.view(batch_size, num_points, self.num_class)
+ return x
+
+
+class get_loss(nn.Module):
+ def __init__(self):
+ super(get_loss, self).__init__()
+
+ def forward(self, pred, target, trans_feat, weight):
+ loss = F.nll_loss(pred, target)
+ return loss
+
+
+if __name__ == '__main__':
+ model = get_model(num_class=13, num_channel=3)
+ xyz = torch.rand(12, 3, 2048)
+ model(xyz)
diff --git a/zoo/OcCo/OcCo_Torch/models/pcn_occo.py b/zoo/OcCo/OcCo_Torch/models/pcn_occo.py
new file mode 100644
index 0000000..95df5ed
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/pcn_occo.py
@@ -0,0 +1,125 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/wentaoyuan/pcn/blob/master/models/pcn_cd.py
+# Ref: https://github.com/AnTao97/UnsupervisedPointCloudReconstruction/blob/master/model.py
+# Sanity Check: https://github.com/vinits5/learning3d/blob/master/models/pcn.py
+
+import sys, torch, itertools, numpy as np, torch.nn as nn
+from pcn_util import PCNEncoder
+sys.path.append("../chamfer_distance")
+from chamfer_distance import ChamferDistance
+
+
+class get_model(nn.Module):
+ def __init__(self, **kwargs):
+ super(get_model, self).__init__()
+
+ self.grid_size = 4
+ self.grid_scale = 0.05
+ self.num_coarse = 1024
+ self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+ self.__dict__.update(kwargs) # to update args, num_coarse, grid_size, grid_scale
+
+ self.num_fine = self.grid_size ** 2 * self.num_coarse # 16384
+ self.meshgrid = [[-self.grid_scale, self.grid_scale, self.grid_size],
+ [-self.grid_scale, self.grid_scale, self.grid_size]]
+
+ self.feat = PCNEncoder(global_feat=True, channel=3)
+
+        # batch normalisation would limit the expressiveness of the folding decoder, so it is disabled
+ self.folding1 = nn.Sequential(
+ nn.Linear(1024, 1024),
+ # nn.BatchNorm1d(1024),
+ nn.ReLU(),
+ nn.Linear(1024, 1024),
+ # nn.BatchNorm1d(1024),
+ nn.ReLU(),
+ nn.Linear(1024, self.num_coarse * 3))
+
+ self.folding2 = nn.Sequential(
+ nn.Conv1d(1024+2+3, 512, 1),
+ # nn.BatchNorm1d(512),
+ nn.ReLU(),
+ nn.Conv1d(512, 512, 1),
+ # nn.BatchNorm1d(512),
+ nn.ReLU(),
+ nn.Conv1d(512, 3, 1))
+
+ def build_grid(self, batch_size):
+ # a simpler alternative would be: torch.meshgrid()
+ x, y = np.linspace(*self.meshgrid[0]), np.linspace(*self.meshgrid[1])
+ points = np.array(list(itertools.product(x, y)))
+ points = np.repeat(points[np.newaxis, ...], repeats=batch_size, axis=0)
+
+ return torch.tensor(points).float().to(self.device)
+
+ def tile(self, tensor, multiples):
+ # substitute for tf.tile:
+ # https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/tile
+ # Ref: https://discuss.pytorch.org/t/how-to-tile-a-tensor/13853/3
+ def tile_single_axis(a, dim, n_tile):
+ init_dim = a.size()[dim]
+ repeat_idx = [1] * a.dim()
+ repeat_idx[dim] = n_tile
+ a = a.repeat(*repeat_idx)
+ order_index = torch.Tensor(
+ np.concatenate([init_dim * np.arange(n_tile) + i for i in range(init_dim)])).long()
+ return torch.index_select(a, dim, order_index.to(self.device))
+
+ for dim, n_tile in enumerate(multiples):
+            if n_tile == 1:  # skip no-op axes to save work
+ continue
+ tensor = tile_single_axis(tensor, dim, n_tile)
+ return tensor
+
+ @staticmethod
+ def expand_dims(tensor, dim):
+ # substitute for tf.expand_dims:
+ # https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/expand_dims
+ # another solution is: torch.unsqueeze(tensor, dim=dim)
+ return tensor.unsqueeze(-1).transpose(-1, dim)
+
+ def forward(self, x):
+ # use the same variable naming as:
+ # https://github.com/wentaoyuan/pcn/blob/master/models/pcn_cd.py
+ feature = self.feat(x)
+
+ coarse = self.folding1(feature)
+ coarse = coarse.view(-1, self.num_coarse, 3)
+
+ grid = self.build_grid(x.shape[0])
+ grid_feat = grid.repeat(1, self.num_coarse, 1)
+
+ point_feat = self.tile(self.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ point_feat = point_feat.view([-1, self.num_fine, 3])
+
+ global_feat = self.tile(self.expand_dims(feature, 1), [1, self.num_fine, 1])
+ feat = torch.cat([grid_feat, point_feat, global_feat], dim=2)
+
+ center = self.tile(self.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ center = center.view([-1, self.num_fine, 3])
+
+ fine = self.folding2(feat.transpose(2, 1)).transpose(2, 1) + center
+
+ return coarse, fine
+
+
+class get_loss(nn.Module):
+ def __init__(self):
+ super(get_loss, self).__init__()
+
+ @staticmethod
+ def dist_cd(pc1, pc2):
+ chamfer_dist = ChamferDistance()
+ dist1, dist2 = chamfer_dist(pc1, pc2)
+ return (torch.mean(torch.sqrt(dist1)) + torch.mean(torch.sqrt(dist2)))/2
+
+ def forward(self, coarse, fine, gt, alpha):
+ return self.dist_cd(coarse, gt) + alpha * self.dist_cd(fine, gt)
+
+
+if __name__ == '__main__':
+
+ model = get_model()
+ print(model)
+ input_pc = torch.rand(7, 3, 1024)
+ x = model(input_pc)
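The `expand_dims` + `tile` + `view` pattern above mirrors the TF idiom from the original PCN code; along a single axis the combined effect is element-wise repetition, i.e. `torch.repeat_interleave`. A standalone check (a sketch; values are arbitrary):

```python
import numpy as np
import torch

def tile_single_axis(a, dim, n_tile):
    # same logic as the inner helper in get_model.tile above, CPU only
    init_dim = a.size(dim)
    repeat_idx = [1] * a.dim()
    repeat_idx[dim] = n_tile
    a = a.repeat(*repeat_idx)
    order_index = torch.tensor(
        np.concatenate([init_dim * np.arange(n_tile) + i for i in range(init_dim)])).long()
    return torch.index_select(a, dim, order_index)

a = torch.tensor([[1., 2., 3.]])
print(tile_single_axis(a, 1, 2))             # tensor([[1., 1., 2., 2., 3., 3.]])
print(torch.repeat_interleave(a, 2, dim=1))  # identical result
```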
diff --git a/zoo/OcCo/OcCo_Torch/models/pcn_partseg.py b/zoo/OcCo/OcCo_Torch/models/pcn_partseg.py
new file mode 100644
index 0000000..8b03407
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/pcn_partseg.py
@@ -0,0 +1,46 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import torch, torch.nn as nn, torch.nn.functional as F
+from pcn_util import PCNPartSegEncoder
+
+
+class get_model(nn.Module):
+ def __init__(self, part_num=50, num_channel=3, **kwargs):
+ super(get_model, self).__init__()
+ self.part_num = part_num
+ self.feat = PCNPartSegEncoder(channel=num_channel)
+
+ self.convs1 = nn.Conv1d(5264, 512, 1)
+ self.convs2 = nn.Conv1d(512, 256, 1)
+ self.convs3 = nn.Conv1d(256, 128, 1)
+ self.convs4 = nn.Conv1d(128, self.part_num, 1)
+ self.bns1 = nn.BatchNorm1d(512)
+ self.bns2 = nn.BatchNorm1d(256)
+ self.bns3 = nn.BatchNorm1d(128)
+
+ def forward(self, point_cloud, label):
+ B, _, N = point_cloud.size()
+ x = self.feat(point_cloud, label)
+ x = F.relu(self.bns1(self.convs1(x)))
+ x = F.relu(self.bns2(self.convs2(x)))
+ x = F.relu(self.bns3(self.convs3(x)))
+ x = self.convs4(x).transpose(2, 1).contiguous()
+ x = F.log_softmax(x.view(-1, self.part_num), dim=-1)
+ x = x.view(B, N, self.part_num)
+ return x
+
+
+class get_loss(nn.Module):
+ def __init__(self):
+ super(get_loss, self).__init__()
+
+ def forward(self, pred, target):
+ loss = F.nll_loss(pred, target)
+ return loss
+
+
+if __name__ == '__main__':
+ model = get_model(part_num=50, num_channel=3)
+ xyz = torch.rand(16, 3, 4096)
+ label = torch.randint(low=0, high=20, size=(16, 1, 16)).float()
+ model(xyz, label)
diff --git a/zoo/OcCo/OcCo_Torch/models/pcn_semseg.py b/zoo/OcCo/OcCo_Torch/models/pcn_semseg.py
new file mode 100644
index 0000000..7c3efe3
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/pcn_semseg.py
@@ -0,0 +1,45 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import torch, torch.nn as nn, torch.nn.functional as F
+from pcn_util import PCNEncoder
+
+
+class get_model(nn.Module):
+ def __init__(self, num_class, num_channel=9, **kwargs):
+ super(get_model, self).__init__()
+ self.num_class = num_class
+ self.feat = PCNEncoder(global_feat=False, channel=num_channel)
+ self.conv1 = nn.Conv1d(1280, 512, 1)
+ self.conv2 = nn.Conv1d(512, 256, 1)
+ self.conv3 = nn.Conv1d(256, 128, 1)
+ self.conv4 = nn.Conv1d(128, self.num_class, 1)
+ self.bn1 = nn.BatchNorm1d(512)
+ self.bn2 = nn.BatchNorm1d(256)
+ self.bn3 = nn.BatchNorm1d(128)
+
+ def forward(self, x):
+ batch_size, _, num_points = x.size()
+ x = self.feat(x)
+ x = F.relu(self.bn1(self.conv1(x)))
+ x = F.relu(self.bn2(self.conv2(x)))
+ x = F.relu(self.bn3(self.conv3(x)))
+ x = self.conv4(x)
+ x = x.transpose(2, 1).contiguous()
+ x = F.log_softmax(x.view(-1, self.num_class), dim=-1)
+ x = x.view(batch_size, num_points, self.num_class)
+ return x
+
+
+class get_loss(nn.Module):
+ def __init__(self):
+ super(get_loss, self).__init__()
+
+ def forward(self, pred, target):
+ loss = F.nll_loss(pred, target)
+ return loss
+
+
+if __name__ == '__main__':
+ model = get_model(num_class=13, num_channel=3)
+ xyz = torch.rand(12, 3, 2048)
+ model(xyz)
diff --git a/zoo/OcCo/OcCo_Torch/models/pcn_util.py b/zoo/OcCo/OcCo_Torch/models/pcn_util.py
new file mode 100644
index 0000000..75a52ac
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/pcn_util.py
@@ -0,0 +1,85 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import torch, torch.nn as nn, torch.nn.functional as F
+
+class PCNEncoder(nn.Module):
+ def __init__(self, global_feat=False, channel=3):
+ super(PCNEncoder, self).__init__()
+
+ self.conv1 = nn.Conv1d(channel, 128, 1)
+        # self.bn1 = nn.BatchNorm1d(128)  # the original PCN encoder uses no batch norm
+ self.conv2 = nn.Conv1d(128, 256, 1)
+ self.conv3 = nn.Conv1d(512, 512, 1)
+ self.conv4 = nn.Conv1d(512, 1024, 1)
+ self.global_feat = global_feat
+
+ def forward(self, x):
+ _, D, N = x.size()
+ x = F.relu(self.conv1(x))
+ pointfeat = self.conv2(x)
+
+ # 'encoder_0'
+ feat = torch.max(pointfeat, 2, keepdim=True)[0]
+ feat = feat.view(-1, 256, 1).repeat(1, 1, N)
+ x = torch.cat([pointfeat, feat], 1)
+
+ # 'encoder_1'
+ x = F.relu(self.conv3(x))
+ x = self.conv4(x)
+ x = torch.max(x, 2, keepdim=False)[0]
+
+ if self.global_feat: # used in completion and classification tasks
+ return x
+ else: # concatenate global and local features, for segmentation tasks
+ x = x.view(-1, 1024, 1).repeat(1, 1, N)
+ return torch.cat([x, pointfeat], 1)
+
+
+class PCNPartSegEncoder(nn.Module):
+ def __init__(self, channel=3):
+ super(PCNPartSegEncoder, self).__init__()
+
+ self.conv1 = nn.Conv1d(channel, 128, 1)
+ self.conv2 = nn.Conv1d(128, 256, 1)
+ self.conv3 = nn.Conv1d(512, 512, 1)
+ self.conv4 = nn.Conv1d(512, 2048, 1)
+
+ def forward(self, x, label):
+ _, D, N = x.size()
+ out1 = F.relu(self.conv1(x))
+ out2 = self.conv2(out1)
+
+ # 'encoder_0'
+ feat = torch.max(out2, 2, keepdim=True)[0]
+ feat = feat.repeat(1, 1, N)
+ out3 = torch.cat([out2, feat], 1)
+
+ # 'encoder_1'
+ out4 = F.relu(self.conv3(out3))
+ out5 = self.conv4(out4)
+
+ out_max = torch.max(out5, 2, keepdim=False)[0]
+ out_max = torch.cat([out_max, label.squeeze(1)], 1)
+
+        expand = out_max.view(-1, 2064, 1).repeat(1, 1, N)  # (batch, 2048 + 16 label bits, num_point)
+ concat = torch.cat([expand, out1, out3, out4, out5], 1)
+
+ return concat
+
+
+class encoder(nn.Module):
+ def __init__(self, num_channel=3, **kwargs):
+ super(encoder, self).__init__()
+ self.feat = PCNEncoder(global_feat=True, channel=num_channel)
+
+ def forward(self, x):
+ return self.feat(x)
+
+
+if __name__ == "__main__":
+ # model = PCNEncoder()
+ model = PCNPartSegEncoder()
+ xyz = torch.rand(16, 3, 100) # batch, channel, num_point
+    label = torch.randint(low=0, high=20, size=(16, 1, 16)).float()  # 16-d category one-hot, giving 2048 + 16 = 2064 channels
+ x = model(xyz, label)
+ print(x.size())
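The `global_feat` flag in `PCNEncoder` switches between a single 1024-d shape code (completion, classification) and a per-point map where that code is broadcast back onto every point and concatenated with the 256-d local features (segmentation). A shape sketch (run from within `models/`):

```python
import torch
from pcn_util import PCNEncoder

xyz = torch.rand(2, 3, 1024)                     # (batch, channel, num_point)
print(PCNEncoder(global_feat=True)(xyz).shape)   # torch.Size([2, 1024])
print(PCNEncoder(global_feat=False)(xyz).shape)  # torch.Size([2, 1280, 1024]): 1024 global + 256 local
```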
diff --git a/zoo/OcCo/OcCo_Torch/models/pointnet_cls.py b/zoo/OcCo/OcCo_Torch/models/pointnet_cls.py
new file mode 100644
index 0000000..3a99252
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/pointnet_cls.py
@@ -0,0 +1,38 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/master/models/pointnet_cls.py
+
+import torch.nn as nn, torch.nn.functional as F
+from pointnet_util import PointNetEncoder, feature_transform_regularizer
+
+
+class get_model(nn.Module):
+ def __init__(self, num_class=40, num_channel=3, **kwargs):
+ super(get_model, self).__init__()
+ self.feat = PointNetEncoder(
+ global_feat=True, feature_transform=True, channel=num_channel)
+ self.fc1 = nn.Linear(1024, 512)
+ self.fc2 = nn.Linear(512, 256)
+ self.fc3 = nn.Linear(256, num_class)
+ self.dropout = nn.Dropout(p=0.3)
+ self.bn1 = nn.BatchNorm1d(512)
+ self.bn2 = nn.BatchNorm1d(256)
+
+ def forward(self, x):
+ x, trans, trans_feat = self.feat(x)
+ x = F.relu(self.bn1(self.fc1(x)))
+ x = F.relu(self.bn2(self.dropout(self.fc2(x))))
+ x = self.fc3(x)
+ x = F.log_softmax(x, dim=1)
+ return x, trans_feat
+
+
+class get_loss(nn.Module):
+ def __init__(self, mat_diff_loss_scale=0.001):
+ super(get_loss, self).__init__()
+ self.mat_diff_loss_scale = mat_diff_loss_scale
+
+ def forward(self, pred, target, trans_feat):
+ loss = F.nll_loss(pred, target)
+ mat_diff_loss = feature_transform_regularizer(trans_feat)
+ total_loss = loss + mat_diff_loss * self.mat_diff_loss_scale
+ return total_loss
diff --git a/zoo/OcCo/OcCo_Torch/models/pointnet_jigsaw.py b/zoo/OcCo/OcCo_Torch/models/pointnet_jigsaw.py
new file mode 100644
index 0000000..dbeeaed
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/pointnet_jigsaw.py
@@ -0,0 +1,50 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import torch, torch.nn as nn, torch.nn.functional as F
+from pointnet_util import PointNetEncoder, feature_transform_regularizer
+
+
+class get_model(nn.Module):
+ def __init__(self, num_class, num_channel=3, **kwargs):
+ super(get_model, self).__init__()
+ self.num_class = num_class
+ self.feat = PointNetEncoder(global_feat=False,
+ feature_transform=True,
+ channel=num_channel)
+ self.conv1 = nn.Conv1d(1088, 512, 1)
+ self.conv2 = nn.Conv1d(512, 256, 1)
+ self.conv3 = nn.Conv1d(256, 128, 1)
+ self.conv4 = nn.Conv1d(128, self.num_class, 1)
+ self.bn1 = nn.BatchNorm1d(512)
+ self.bn2 = nn.BatchNorm1d(256)
+ self.bn3 = nn.BatchNorm1d(128)
+
+ def forward(self, x):
+ batch_size, _, num_points = x.size()
+ x, trans, trans_feat = self.feat(x)
+ x = F.relu(self.bn1(self.conv1(x)))
+ x = F.relu(self.bn2(self.conv2(x)))
+ x = F.relu(self.bn3(self.conv3(x)))
+ x = self.conv4(x)
+ x = x.transpose(2, 1).contiguous()
+ x = F.log_softmax(x.view(-1, self.num_class), dim=-1)
+ x = x.view(batch_size, num_points, self.num_class)
+ return x, trans_feat
+
+
+class get_loss(nn.Module):
+ def __init__(self, mat_diff_loss_scale=0.001):
+ super(get_loss, self).__init__()
+ self.mat_diff_loss_scale = mat_diff_loss_scale
+
+ def forward(self, pred, target, trans_feat):
+ loss = F.nll_loss(pred, target)
+ mat_diff_loss = feature_transform_regularizer(trans_feat)
+ total_loss = loss + mat_diff_loss * self.mat_diff_loss_scale
+ return total_loss
+
+
+if __name__ == '__main__':
+ model = get_model(num_class=13, num_channel=3)
+ xyz = torch.rand(12, 3, 2048)
+ model(xyz)
diff --git a/zoo/OcCo/OcCo_Torch/models/pointnet_occo.py b/zoo/OcCo/OcCo_Torch/models/pointnet_occo.py
new file mode 100644
index 0000000..0e101e4
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/pointnet_occo.py
@@ -0,0 +1,118 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/krrish94/chamferdist
+# Ref: https://github.com/chrdiller/pyTorchChamferDistance
+# Ref: https://github.com/wentaoyuan/pcn/blob/master/models/pcn_cd.py
+# Ref: https://github.com/AnTao97/UnsupervisedPointCloudReconstruction/blob/master/model.py
+
+
+
+import sys, torch, itertools, numpy as np, torch.nn as nn
+from pointnet_util import PointNetEncoder
+sys.path.append("../chamfer_distance")
+from chamfer_distance import ChamferDistance
+
+class get_model(nn.Module):
+ def __init__(self, **kwargs):
+ super(get_model, self).__init__()
+
+ self.grid_size = 4
+ self.grid_scale = 0.05
+ self.num_coarse = 1024
+ self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+ self.__dict__.update(kwargs) # to update args, num_coarse, grid_size, grid_scale
+
+ self.num_fine = self.grid_size ** 2 * self.num_coarse # 16384
+ self.meshgrid = [[-self.grid_scale, self.grid_scale, self.grid_size],
+ [-self.grid_scale, self.grid_scale, self.grid_size]]
+
+ self.feat = PointNetEncoder(global_feat=True, feature_transform=False, channel=3)
+ self.folding1 = nn.Sequential(
+ nn.Linear(1024, 1024),
+ nn.ReLU(),
+ nn.Linear(1024, 1024),
+ nn.ReLU(),
+ nn.Linear(1024, self.num_coarse * 3))
+
+ self.folding2 = nn.Sequential(
+ nn.Conv1d(1024+2+3, 512, 1),
+ nn.ReLU(),
+ nn.Conv1d(512, 512, 1),
+ nn.ReLU(),
+ nn.Conv1d(512, 3, 1))
+
+ def build_grid(self, batch_size):
+
+ x, y = np.linspace(*self.meshgrid[0]), np.linspace(*self.meshgrid[1])
+ points = np.array(list(itertools.product(x, y)))
+ points = np.repeat(points[np.newaxis, ...], repeats=batch_size, axis=0)
+
+ return torch.tensor(points).float().to(self.device)
+
+ def tile(self, tensor, multiples):
+ # substitute for tf.tile:
+ # https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/tile
+ # Ref: https://discuss.pytorch.org/t/how-to-tile-a-tensor/13853/3
+ def tile_single_axis(a, dim, n_tile):
+ init_dim = a.size()[dim]
+ repeat_idx = [1] * a.dim()
+ repeat_idx[dim] = n_tile
+ a = a.repeat(*repeat_idx)
+ order_index = torch.Tensor(
+ np.concatenate([init_dim * np.arange(n_tile) + i for i in range(init_dim)])).long()
+ return torch.index_select(a, dim, order_index.to(self.device))
+
+ for dim, n_tile in enumerate(multiples):
+ if n_tile == 1:
+ continue
+ tensor = tile_single_axis(tensor, dim, n_tile)
+ return tensor
+
+ @staticmethod
+ def expand_dims(tensor, dim):
+ # substitute for tf.expand_dims:
+ # https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/expand_dims
+ return tensor.unsqueeze(-1).transpose(-1, dim)
+
+ def forward(self, x):
+ feature, _, _ = self.feat(x)
+
+ coarse = self.folding1(feature)
+ coarse = coarse.view(-1, self.num_coarse, 3)
+
+ grid = self.build_grid(x.shape[0])
+ grid_feat = grid.repeat(1, self.num_coarse, 1)
+
+ point_feat = self.tile(self.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ point_feat = point_feat.view([-1, self.num_fine, 3])
+
+ global_feat = self.tile(self.expand_dims(feature, 1), [1, self.num_fine, 1])
+ feat = torch.cat([grid_feat, point_feat, global_feat], dim=2)
+
+ center = self.tile(self.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
+ center = center.view([-1, self.num_fine, 3])
+
+ fine = self.folding2(feat.transpose(2, 1)).transpose(2, 1) + center
+
+ return coarse, fine
+
+
+class get_loss(nn.Module):
+ def __init__(self):
+ super(get_loss, self).__init__()
+
+ @staticmethod
+ def dist_cd(pc1, pc2):
+ chamfer_dist = ChamferDistance()
+ dist1, dist2 = chamfer_dist(pc1, pc2)
+ return (torch.mean(torch.sqrt(dist1)) + torch.mean(torch.sqrt(dist2)))/2
+
+ def forward(self, coarse, fine, gt, alpha):
+ return self.dist_cd(coarse, gt) + alpha * self.dist_cd(fine, gt)
+
+
+if __name__ == '__main__':
+
+ model = get_model()
+ print(model)
+ input_pc = torch.rand(7, 3, 1024)
+ x = model(input_pc)
diff --git a/zoo/OcCo/OcCo_Torch/models/pointnet_partseg.py b/zoo/OcCo/OcCo_Torch/models/pointnet_partseg.py
new file mode 100644
index 0000000..db9c882
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/pointnet_partseg.py
@@ -0,0 +1,47 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/master/models/pointnet_part_seg.py
+
+import torch.nn as nn, torch.nn.functional as F
+from pointnet_util import PointNetPartSegEncoder, feature_transform_regularizer
+
+
+class get_model(nn.Module):
+ def __init__(self, part_num=50, num_channel=3, **kwargs):
+ super(get_model, self).__init__()
+ self.part_num = part_num
+ self.feat = PointNetPartSegEncoder(feature_transform=True,
+ channel=num_channel)
+
+ self.convs1 = nn.Conv1d(4944, 256, 1)
+ self.convs2 = nn.Conv1d(256, 256, 1)
+ self.convs3 = nn.Conv1d(256, 128, 1)
+ self.convs4 = nn.Conv1d(128, part_num, 1)
+ self.bns1 = nn.BatchNorm1d(256)
+ self.bns2 = nn.BatchNorm1d(256)
+ self.bns3 = nn.BatchNorm1d(128)
+
+ def forward(self, point_cloud, label):
+ B, D, N = point_cloud.size()
+ concat, trans_feat = self.feat(point_cloud, label)
+
+ net = F.relu(self.bns1(self.convs1(concat)))
+ net = F.relu(self.bns2(self.convs2(net)))
+ net = F.relu(self.bns3(self.convs3(net)))
+ net = self.convs4(net).transpose(2, 1).contiguous()
+ net = F.log_softmax(net.view(-1, self.part_num), dim=-1)
+ net = net.view(B, N, self.part_num) # [B, N, 50]
+
+ return net, trans_feat
+
+
+class get_loss(nn.Module):
+ def __init__(self, mat_diff_loss_scale=0.001):
+ super(get_loss, self).__init__()
+ self.mat_diff_loss_scale = mat_diff_loss_scale
+
+ def forward(self, pred, target, trans_feat):
+ loss = F.nll_loss(pred, target)
+ mat_diff_loss = feature_transform_regularizer(trans_feat)
+ total_loss = loss + mat_diff_loss * self.mat_diff_loss_scale
+ return total_loss
+
diff --git a/zoo/OcCo/OcCo_Torch/models/pointnet_semseg.py b/zoo/OcCo/OcCo_Torch/models/pointnet_semseg.py
new file mode 100644
index 0000000..15e1353
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/pointnet_semseg.py
@@ -0,0 +1,51 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import torch, torch.nn as nn, torch.nn.functional as F
+from pointnet_util import PointNetEncoder, feature_transform_regularizer
+
+
+class get_model(nn.Module):
+ def __init__(self, num_class=13, num_channel=9, **kwargs):
+ super(get_model, self).__init__()
+
+ self.num_class = num_class
+ self.feat = PointNetEncoder(global_feat=False,
+ feature_transform=True,
+ channel=num_channel)
+ self.conv1 = nn.Conv1d(1088, 512, 1)
+ self.conv2 = nn.Conv1d(512, 256, 1)
+ self.conv3 = nn.Conv1d(256, 128, 1)
+ self.conv4 = nn.Conv1d(128, self.num_class, 1)
+ self.bn1 = nn.BatchNorm1d(512)
+ self.bn2 = nn.BatchNorm1d(256)
+ self.bn3 = nn.BatchNorm1d(128)
+
+ def forward(self, x):
+ batch_size, _, num_points = x.size()
+ x, trans, trans_feat = self.feat(x)
+ x = F.relu(self.bn1(self.conv1(x)))
+ x = F.relu(self.bn2(self.conv2(x)))
+ x = F.relu(self.bn3(self.conv3(x)))
+ x = self.conv4(x)
+ x = x.transpose(2, 1).contiguous()
+ x = F.log_softmax(x.view(-1, self.num_class), dim=-1)
+ x = x.view(batch_size, num_points, self.num_class)
+ return x, trans_feat
+
+
+class get_loss(nn.Module):
+ def __init__(self, mat_diff_loss_scale=0.001):
+ super(get_loss, self).__init__()
+ self.mat_diff_loss_scale = mat_diff_loss_scale
+
+ def forward(self, pred, target, trans_feat):
+ loss = F.nll_loss(pred, target)
+ mat_diff_loss = feature_transform_regularizer(trans_feat)
+ total_loss = loss + mat_diff_loss * self.mat_diff_loss_scale
+ return total_loss
+
+
+if __name__ == '__main__':
+ model = get_model(13, num_channel=9)
+ xyz = torch.rand(12, 9, 2048)
+ model(xyz)
diff --git a/zoo/OcCo/OcCo_Torch/models/pointnet_util.py b/zoo/OcCo/OcCo_Torch/models/pointnet_util.py
new file mode 100644
index 0000000..223edb2
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/models/pointnet_util.py
@@ -0,0 +1,226 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/fxia22/pointnet.pytorch/pointnet/model.py
+
+import torch, torch.nn as nn, numpy as np, torch.nn.functional as F
+from torch.autograd import Variable
+
+
+def feature_transform_regularizer(trans):
+ d = trans.size()[1]
+ I = torch.eye(d)[None, :, :]
+ if trans.is_cuda:
+ I = I.cuda()
+    # Frobenius norm of (A A^T - I), encouraging the learned feature transform to be orthogonal
+    loss = torch.mean(torch.norm(torch.bmm(trans, trans.transpose(2, 1)) - I, dim=(1, 2)))
+ return loss
+
+
+# STN -> Spatial Transformer Network
+class STN3d(nn.Module):
+ def __init__(self, channel):
+ super(STN3d, self).__init__()
+ self.conv1 = nn.Conv1d(channel, 64, 1) # in-channel, out-channel, kernel size
+ self.conv2 = nn.Conv1d(64, 128, 1)
+ self.conv3 = nn.Conv1d(128, 1024, 1)
+ self.fc1 = nn.Linear(1024, 512)
+ self.fc2 = nn.Linear(512, 256)
+ self.fc3 = nn.Linear(256, 9)
+ self.relu = nn.ReLU()
+
+ self.bn1 = nn.BatchNorm1d(64)
+ self.bn2 = nn.BatchNorm1d(128)
+ self.bn3 = nn.BatchNorm1d(1024)
+ self.bn4 = nn.BatchNorm1d(512)
+ self.bn5 = nn.BatchNorm1d(256)
+
+ def forward(self, x):
+ B = x.size()[0]
+ x = F.relu(self.bn1(self.conv1(x)))
+ x = F.relu(self.bn2(self.conv2(x)))
+ x = F.relu(self.bn3(self.conv3(x)))
+ x = torch.max(x, 2, keepdim=False)[0] # global descriptors
+
+ x = F.relu(self.bn4(self.fc1(x)))
+ x = F.relu(self.bn5(self.fc2(x)))
+ x = self.fc3(x)
+
+ iden = Variable(torch.from_numpy(np.eye(3).flatten().astype(np.float32))).view(1, 9).repeat(B, 1)
+ if x.is_cuda:
+ iden = iden.cuda()
+ x = x + iden
+ x = x.view(-1, 3, 3)
+ return x
+
+
+class STNkd(nn.Module):
+ def __init__(self, k=64):
+ super(STNkd, self).__init__()
+ self.conv1 = nn.Conv1d(k, 64, 1)
+ self.conv2 = nn.Conv1d(64, 128, 1)
+ self.conv3 = nn.Conv1d(128, 1024, 1)
+ self.fc1 = nn.Linear(1024, 512)
+ self.fc2 = nn.Linear(512, 256)
+ self.fc3 = nn.Linear(256, k * k)
+ self.relu = nn.ReLU()
+
+ self.bn1 = nn.BatchNorm1d(64)
+ self.bn2 = nn.BatchNorm1d(128)
+ self.bn3 = nn.BatchNorm1d(1024)
+ self.bn4 = nn.BatchNorm1d(512)
+ self.bn5 = nn.BatchNorm1d(256)
+
+ self.k = k
+
+ def forward(self, x):
+ B = x.size()[0]
+ x = F.relu(self.bn1(self.conv1(x)))
+ x = F.relu(self.bn2(self.conv2(x)))
+ x = F.relu(self.bn3(self.conv3(x)))
+ x = torch.max(x, 2, keepdim=False)[0]
+
+ x = F.relu(self.bn4(self.fc1(x)))
+ x = F.relu(self.bn5(self.fc2(x)))
+ x = self.fc3(x)
+
+ iden = Variable(torch.from_numpy(np.eye(self.k).flatten().astype(np.float32))).view(
+ 1, self.k ** 2).repeat(B, 1)
+ if x.is_cuda:
+ iden = iden.cuda()
+ x = x + iden
+ x = x.view(-1, self.k, self.k)
+ return x
+
+
+class PointNetEncoder(nn.Module):
+ def __init__(self, global_feat=True, feature_transform=False,
+ channel=3, detailed=False):
+        # when the input includes normals, channel is 6 rather than 3
+ super(PointNetEncoder, self).__init__()
+ self.stn = STN3d(channel) # Batch * 3 * 3
+ self.conv1 = nn.Conv1d(channel, 64, 1)
+ self.conv2 = nn.Conv1d(64, 128, 1)
+ self.conv3 = nn.Conv1d(128, 1024, 1)
+ self.bn1 = nn.BatchNorm1d(64)
+ self.bn2 = nn.BatchNorm1d(128)
+ self.bn3 = nn.BatchNorm1d(1024)
+ self.global_feat = global_feat
+ self.feature_transform = feature_transform
+ if self.feature_transform:
+ self.fstn = STNkd(k=64)
+ self.detailed = detailed
+
+ def forward(self, x):
+
+ _, D, N = x.size() # Batch Size, Dimension of Point Features, Num of Points
+ trans = self.stn(x)
+ x = x.transpose(2, 1)
+ if D > 3:
+ # pdb.set_trace()
+ x, feature = x.split([3, D-3], dim=2)
+ x = torch.bmm(x, trans)
+ # feature = torch.bmm(feature, trans) # feature -> normals
+
+ if D > 3:
+ x = torch.cat([x, feature], dim=2)
+ x = x.transpose(2, 1)
+ out1 = self.bn1(self.conv1(x))
+ x = F.relu(out1)
+
+ if self.feature_transform:
+ trans_feat = self.fstn(x)
+ x = x.transpose(2, 1)
+ x = torch.bmm(x, trans_feat)
+ x = x.transpose(2, 1)
+ else:
+ trans_feat = None
+
+ pointfeat = x
+
+ out2 = self.bn2(self.conv2(x))
+ x = F.relu(out2)
+
+ out3 = self.bn3(self.conv3(x))
+ # x = self.bn3(self.conv3(x))
+ x = torch.max(out3, 2, keepdim=False)[0]
+ if self.global_feat:
+ return x, trans, trans_feat
+ elif self.detailed:
+ return out1, out2, out3, x
+ else: # concatenate global and local feature together
+ x = x.view(-1, 1024, 1).repeat(1, 1, N)
+ return torch.cat([x, pointfeat], 1), trans, trans_feat
+
+
+class PointNetPartSegEncoder(nn.Module):
+ def __init__(self, feature_transform=True, channel=3):
+ super(PointNetPartSegEncoder, self).__init__()
+ self.stn = STN3d(channel)
+ self.conv1 = nn.Conv1d(channel, 64, 1)
+ self.conv2 = nn.Conv1d(64, 128, 1)
+ self.conv3 = nn.Conv1d(128, 128, 1)
+ self.conv4 = nn.Conv1d(128, 512, 1)
+ self.conv5 = nn.Conv1d(512, 2048, 1)
+
+ self.bn1 = nn.BatchNorm1d(64)
+ self.bn2 = nn.BatchNorm1d(128)
+ self.bn3 = nn.BatchNorm1d(128)
+ self.bn4 = nn.BatchNorm1d(512)
+ self.bn5 = nn.BatchNorm1d(2048)
+
+ self.feature_transform = feature_transform
+ if self.feature_transform:
+ self.fstn = STNkd(k=128)
+
+ def forward(self, point_cloud, label):
+ B, D, N = point_cloud.size()
+
+ trans = self.stn(point_cloud)
+ point_cloud = point_cloud.transpose(2, 1)
+ if D > 3:
+ point_cloud, feature = point_cloud.split(3, dim=2)
+ point_cloud = torch.bmm(point_cloud, trans)
+ if D > 3:
+ point_cloud = torch.cat([point_cloud, feature], dim=2)
+ point_cloud = point_cloud.transpose(2, 1)
+
+ out1 = F.relu(self.bn1(self.conv1(point_cloud)))
+ out2 = F.relu(self.bn2(self.conv2(out1)))
+ out3 = F.relu(self.bn3(self.conv3(out2)))
+
+ if self.feature_transform:
+ trans_feat = self.fstn(out3)
+ net_transformed = torch.bmm(out3.transpose(2, 1), trans_feat)
+ out3 = net_transformed.transpose(2, 1)
+
+ out4 = F.relu(self.bn4(self.conv4(out3)))
+ out5 = self.bn5(self.conv5(out4))
+
+ out_max = torch.max(out5, 2, keepdim=False)[0]
+ out_max = torch.cat([out_max, label.squeeze(1)], 1)
+ expand = out_max.view(-1, 2048 + 16, 1).repeat(1, 1, N)
+ concat = torch.cat([expand, out1, out2, out3, out4, out5], 1)
+
+ if self.feature_transform:
+ return concat, trans_feat
+ return concat
+
+
+class encoder(nn.Module):
+ def __init__(self, num_channel=3, **kwargs):
+ super(encoder, self).__init__()
+ self.feat = PointNetEncoder(global_feat=True, channel=num_channel)
+
+ def forward(self, x):
+ feat, _, _ = self.feat(x)
+ return feat
+
+
+class detailed_encoder(nn.Module):
+ def __init__(self, num_channel=3, **kwargs):
+ super(detailed_encoder, self).__init__()
+ self.feat = PointNetEncoder(global_feat=False,
+ channel=num_channel,
+ detailed=True)
+
+ def forward(self, x):
+ out1, out2, out3, x = self.feat(x)
+ return out1, out2, out3, x
\ No newline at end of file
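`feature_transform_regularizer` penalises the Frobenius norm of A A^T - I, so an orthogonal feature transform incurs near-zero cost. A quick sanity sketch (run from within `models/`; the test matrices are arbitrary):

```python
import torch
from pointnet_util import feature_transform_regularizer

eye = torch.eye(64).unsqueeze(0).repeat(4, 1, 1)  # batch of 64x64 identity transforms
print(feature_transform_regularizer(eye))         # ~0: the identity is orthogonal
print(feature_transform_regularizer(2 * eye))     # positive: A A^T - I = 3I per sample
```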
diff --git a/zoo/OcCo/OcCo_Torch/readme.md b/zoo/OcCo/OcCo_Torch/readme.md
new file mode 100644
index 0000000..a09f228
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/readme.md
@@ -0,0 +1,2 @@
+## OcCo in PyTorch
+
diff --git a/zoo/OcCo/OcCo_Torch/test.py b/zoo/OcCo/OcCo_Torch/test.py
new file mode 100644
index 0000000..116f013
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/test.py
@@ -0,0 +1,250 @@
+import argparse
+import os
+import torch
+import sys
+import importlib
+import numpy as np
+from tqdm import tqdm
+from utils.ShapeNetDataLoader import ShapeNetC
+import re
+from collections import defaultdict
+from torch.autograd import Variable
+
+
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+ROOT_DIR = BASE_DIR
+sys.path.append(os.path.join(ROOT_DIR, 'models'))
+
+seg_classes = {
+ 'Earphone': [16, 17, 18], 'Motorbike': [30, 31, 32, 33, 34, 35], 'Rocket': [41, 42, 43],
+ 'Car': [8, 9, 10, 11], 'Laptop': [28, 29], 'Cap': [6, 7], 'Skateboard': [44, 45, 46], 'Mug': [36, 37],
+ 'Guitar': [19, 20, 21], 'Bag': [4, 5], 'Lamp': [24, 25, 26, 27], 'Table': [47, 48, 49],
+ 'Airplane': [0, 1, 2, 3], 'Pistol': [38, 39, 40], 'Chair': [12, 13, 14, 15], 'Knife': [22, 23]
+}
+seg_label_to_cat = {} # {0:Airplane, 1:Airplane, ...49:Table}
+for cat in seg_classes.keys():
+ for label in seg_classes[cat]:
+ seg_label_to_cat[label] = cat
+
+classes_str = ['aero','bag','cap','car','chair','ear','guitar','knife','lamp','lapt','moto','mug','pistol','rock','skate','table']
+
+
+def inplace_relu(m):
+    classname = m.__class__.__name__
+    if classname.find('ReLU') != -1:
+        m.inplace = True
+
+def to_categorical(y, num_classes):
+ """ 1-hot encodes a tensor """
+ new_y = torch.eye(num_classes)[y.cpu().data.numpy(),]
+ if (y.is_cuda):
+ return new_y.cuda()
+ return new_y
+
+
+def compute_overall_iou(pred, target, num_classes):
+    shape_ious = []
+    pred = pred.max(dim=2)[1]  # (batch_size, num_points): predicted part index of every point
+    pred_np = pred.cpu().data.numpy()
+
+    target_np = target.cpu().data.numpy()
+    for shape_idx in range(pred.size(0)):  # iterate over the samples in the batch
+        part_ious = []
+        for part in range(num_classes):  # sweep all 50 part classes, regardless of object category
+            # intersection: points where both prediction and target equal this part
+            I = np.sum(np.logical_and(pred_np[shape_idx] == part, target_np[shape_idx] == part))
+            # union: points where either prediction or target equals this part
+            U = np.sum(np.logical_or(pred_np[shape_idx] == part, target_np[shape_idx] == part))
+
+            F = np.sum(target_np[shape_idx] == part)  # number of ground-truth points of this part
+
+            if F != 0:  # only score parts that actually occur in the ground truth
+                iou = I / float(U)  # IoU of this part within this sample
+                part_ious.append(iou)
+        shape_ious.append(np.mean(part_ious))  # instance-level mIoU: average over the parts present in this sample
+    return shape_ious  # list of per-sample mIoUs, length = batch_size
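A tiny numeric example of `compute_overall_iou` (a sketch; the tensors are hand-made): four points with ground-truth parts `[0, 0, 1, 1]` and one mispredicted point. Parts absent from the ground truth are skipped, so the sample's mIoU averages only over parts 0 and 1:

```python
import torch

pred_logits = torch.zeros(1, 4, 50)
pred_logits[0, [0, 1, 2, 3], [0, 0, 1, 0]] = 1.0     # argmax gives parts 0 0 1 0
target = torch.tensor([[0, 0, 1, 1]])                # ground truth   0 0 1 1
print(compute_overall_iou(pred_logits, target, 50))  # [(2/3 + 1/2) / 2] ~= [0.583]
```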
+
+
+def parse_args():
+ parser = argparse.ArgumentParser('Model')
+ parser.add_argument('--model', type=str, default='pt', help='model name')
+ parser.add_argument('--gpu', type=str, default='0', help='specify GPU devices')
+ parser.add_argument('--ckpts', type=str, help='ckpts')
+ parser.add_argument('--dropout', type=float, default=0.5, help='dropout rate in FCs [default: 0.5]')
+ parser.add_argument('--emb_dims', type=int, default=1024, help='embedding dimensions [default: 1024]')
+ parser.add_argument('--k', type=int, default=20, help='num of nearest neighbors to use [default: 20]')
+ return parser.parse_args()
+
+
+def main(args):
+    '''HYPER PARAMETER'''
+    os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
+
+    '''DATA LOADING'''
+    TEST_DATASET = ShapeNetC(partition='shapenet-c', sub='clean', class_choice=None)
+    testDataLoader = torch.utils.data.DataLoader(TEST_DATASET, batch_size=32, shuffle=False, num_workers=10, pin_memory=True, drop_last=False)
+
+    print("The number of test data is: %d" % len(TEST_DATASET))
+
+ num_classes = 16
+ num_part = 50
+
+    '''MODEL LOADING'''
+    channel_num = 3
+    MODEL = importlib.import_module(args.model)
+    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+    classifier = MODEL.get_model(part_num=num_part, num_channel=channel_num, args=args).to(device)
+ print('# generator parameters:', sum(param.numel() for param in classifier.parameters()))
+
+    if args.ckpts is not None:
+        from collections import OrderedDict
+        model_dict = OrderedDict()
+        state_dict = torch.load(args.ckpts)["model_state_dict"]
+        # strip the 'module.' prefix that DataParallel adds to parameter names, if present
+        pattern = re.compile(r'^module\.')
+        for k, v in state_dict.items():
+            model_dict[pattern.sub('', k)] = v
+        classifier.load_state_dict(model_dict)
+
+    classifier.eval()  # BatchNorm/Dropout must run in inference mode
+    metrics = defaultdict(list)
+ hist_acc = []
+ shape_ious = []
+ total_per_cat_iou = np.zeros((16)).astype(np.float32)
+ total_per_cat_seen = np.zeros((16)).astype(np.int32)
+
+    for batch_id, (points, label, target) in tqdm(enumerate(testDataLoader), total=len(testDataLoader), smoothing=0.9):
+        batch_size, num_point, _ = points.size()
+        points, label, target = points.float(), label.long(), target.long()
+        points = points.transpose(2, 1)
+        points, label, target = points.cuda(non_blocking=True), label.squeeze().cuda(non_blocking=True), target.cuda(non_blocking=True)
+
+        with torch.no_grad():
+            if args.model == 'pointnet_partseg':
+                seg_pred, _ = classifier(points, to_categorical(label, num_classes))
+            else:
+                seg_pred = classifier(points, to_categorical(label, num_classes))  # (batch_size, num_point, 50)
+
+        # instance-level IoU for this batch, without class averaging:
+        batch_shapeious = compute_overall_iou(seg_pred, target, num_part)  # list of length batch_size
+        shape_ious += batch_shapeious  # extend the running list with this batch's IoUs
+
+ # per category iou at each batch_size:
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx]
+ total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx]
+ total_per_cat_seen[cur_gt_label] += 1
+
+ # accuracy:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ pred_choice = seg_pred.data.max(1)[1]
+ correct = pred_choice.eq(target.data).cpu().sum()
+ metrics['accuracy'].append(correct.item() / (batch_size * num_point))
+
+ hist_acc += metrics['accuracy']
+ metrics['accuracy'] = np.mean(hist_acc)
+ metrics['shape_avg_iou'] = np.mean(shape_ious)
+ for cat_idx in range(16):
+ if total_per_cat_seen[cat_idx] > 0:
+ total_per_cat_iou[cat_idx] = total_per_cat_iou[cat_idx] / total_per_cat_seen[cat_idx]
+
+    # compute the per-class IoU, then the class-averaged IoU:
+ class_iou = 0
+ for cat_idx in range(16):
+ class_iou += total_per_cat_iou[cat_idx]
+ print(classes_str[cat_idx] + ' iou: ' + str(total_per_cat_iou[cat_idx])) # print the iou of each class
+ avg_class_iou = class_iou / 16
+ outstr = 'Test :: test acc: %f test class mIOU: %f, test instance mIOU: %f' % (metrics['accuracy'], avg_class_iou, metrics['shape_avg_iou'])
+ print(outstr)
+
+
+
+if __name__ == '__main__':
+ args = parse_args()
+ main(args)
\ No newline at end of file
diff --git a/zoo/OcCo/OcCo_Torch/test.sh b/zoo/OcCo/OcCo_Torch/test.sh
new file mode 100644
index 0000000..36f994e
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/test.sh
@@ -0,0 +1,4 @@
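+# Evaluation launcher; the --ckpts path below is a cluster-specific example,
+# point it to your own checkpoint before running.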
+python test_partseg.py \
+ --model pointnet_partseg \
+ --ckpts /mnt/lustre/ldkong/models/OcCo/OcCo_Torch/log/partseg/occo_pointnet/checkpoints/ep245_83.2.pth \
+ --gpu 2
\ No newline at end of file
diff --git a/zoo/OcCo/OcCo_Torch/test_partseg.py b/zoo/OcCo/OcCo_Torch/test_partseg.py
new file mode 100644
index 0000000..67657e6
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/test_partseg.py
@@ -0,0 +1,194 @@
+"""
+Author: Benny
+Date: Nov 2019
+"""
+import argparse
+import os
+from utils.ShapeNetDataLoader import ShapeNetC
+import torch
+import sys
+import importlib
+from tqdm import tqdm
+import numpy as np
+import re
+
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+ROOT_DIR = BASE_DIR
+sys.path.append(os.path.join(ROOT_DIR, 'models'))
+
+seg_classes = {
+ 'Earphone': [16, 17, 18], 'Motorbike': [30, 31, 32, 33, 34, 35], 'Rocket': [41, 42, 43],
+ 'Car': [8, 9, 10, 11], 'Laptop': [28, 29], 'Cap': [6, 7], 'Skateboard': [44, 45, 46], 'Mug': [36, 37],
+ 'Guitar': [19, 20, 21], 'Bag': [4, 5], 'Lamp': [24, 25, 26, 27], 'Table': [47, 48, 49],
+ 'Airplane': [0, 1, 2, 3], 'Pistol': [38, 39, 40], 'Chair': [12, 13, 14, 15], 'Knife': [22, 23]
+}
+
+seg_label_to_cat = {} # {0:Airplane, 1:Airplane, ...49:Table}
+for cat in seg_classes.keys():
+ for label in seg_classes[cat]:
+ seg_label_to_cat[label] = cat
+
+
+def to_categorical(y, num_classes):
+ """ 1-hot encodes a tensor """
+ new_y = torch.eye(num_classes)[y.cpu().data.numpy(),]
+ if (y.is_cuda):
+ return new_y.cuda()
+ return new_y
+
+
+def parse_args():
+ '''PARAMETERS'''
+ parser = argparse.ArgumentParser('PointNet')
+ parser.add_argument('--normal', action='store_true', default=False, help='use normals')
+ parser.add_argument('--num_votes', type=int, default=3, help='aggregate segmentation scores with voting')
+ parser.add_argument('--model', type=str, default='pt', help='model name')
+ parser.add_argument('--gpu', type=str, default='0', help='specify GPU devices')
+ parser.add_argument('--ckpts', type=str, help='ckpts')
+ parser.add_argument('--dropout', type=float, default=0.5, help='dropout rate in FCs [default: 0.5]')
+ parser.add_argument('--emb_dims', type=int, default=1024, help='embedding dimensions [default: 1024]')
+ parser.add_argument('--k', type=int, default=20, help='num of nearest neighbors to use [default: 20]')
+ return parser.parse_args()
+
+
+def main(args):
+    '''HYPER PARAMETER'''
+    os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
+
+    '''DATA LOADING'''
+    TEST_DATASET = ShapeNetC(partition='shapenet-c', sub='dropout_global_4', class_choice=None)
+    testDataLoader = torch.utils.data.DataLoader(TEST_DATASET, batch_size=16, shuffle=False, num_workers=10, pin_memory=True, drop_last=False)
+
+ print("The number of test data is: %d" % len(TEST_DATASET))
+ num_classes = 16
+ num_part = 50
+
+    '''MODEL LOADING'''
+    channel_num = 3
+    MODEL = importlib.import_module(args.model)
+    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+    classifier = MODEL.get_model(part_num=num_part, num_channel=channel_num, args=args).to(device)
+ print('# generator parameters:', sum(param.numel() for param in classifier.parameters()))
+
+    if args.ckpts is not None:
+        from collections import OrderedDict
+        model_dict = OrderedDict()
+        state_dict = torch.load(args.ckpts)["model_state_dict"]
+        # strip the 'module.' prefix that DataParallel adds to parameter names, if present
+        pattern = re.compile(r'^module\.')
+        for k, v in state_dict.items():
+            model_dict[pattern.sub('', k)] = v
+        classifier.load_state_dict(model_dict)
+
+ with torch.no_grad():
+ test_metrics = {}
+ total_correct = 0
+ total_seen = 0
+ total_seen_class = [0 for _ in range(num_part)]
+ total_correct_class = [0 for _ in range(num_part)]
+ shape_ious = {cat: [] for cat in seg_classes.keys()}
+
+ classifier = classifier.eval()
+ for batch_id, (points, label, target) in tqdm(enumerate(testDataLoader), total=len(testDataLoader), smoothing=0.9):
+            cur_batch_size, NUM_POINT, _ = points.size()
+ points, label, target = points.float().cuda(), label.long().cuda(), target.long().cuda()
+ points = points.transpose(2, 1)
+ vote_pool = torch.zeros(target.size()[0], target.size()[1], num_part).cuda()
+
+ for _ in range(args.num_votes):
+ if args.model == 'pointnet_partseg':
+ seg_pred, _ = classifier(points, to_categorical(label, num_classes))
+ else:
+ seg_pred = classifier(points, to_categorical(label, num_classes)) # b,n,50
+ vote_pool += seg_pred
+
+ seg_pred = vote_pool / args.num_votes
+            cur_pred_val_logits = seg_pred.cpu().data.numpy()
+            cur_pred_val = np.zeros((cur_batch_size, NUM_POINT)).astype(np.int32)
+ target = target.cpu().data.numpy()
+
+ for i in range(cur_batch_size):
+ cat = seg_label_to_cat[target[i, 0]]
+ logits = cur_pred_val_logits[i, :, :]
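+                # restrict the argmax to the part classes of this sample's ground-truth
+                # category, then map the local index back to the global 50-class label space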
+ cur_pred_val[i, :] = np.argmax(logits[:, seg_classes[cat]], 1) + seg_classes[cat][0]
+
+ correct = np.sum(cur_pred_val == target)
+ total_correct += correct
+ total_seen += (cur_batch_size * NUM_POINT)
+
+ for l in range(num_part):
+ total_seen_class[l] += np.sum(target == l)
+ total_correct_class[l] += (np.sum((cur_pred_val == l) & (target == l)))
+
+ for i in range(cur_batch_size):
+ segp = cur_pred_val[i, :]
+ segl = target[i, :]
+ cat = seg_label_to_cat[segl[0]]
+ part_ious = [0.0 for _ in range(len(seg_classes[cat]))]
+ for l in seg_classes[cat]:
+ if (np.sum(segl == l) == 0) and (
+ np.sum(segp == l) == 0): # part is not present, no prediction as well
+ part_ious[l - seg_classes[cat][0]] = 1.0
+ else:
+ part_ious[l - seg_classes[cat][0]] = np.sum((segl == l) & (segp == l)) / float(
+ np.sum((segl == l) | (segp == l)))
+ shape_ious[cat].append(np.mean(part_ious))
+
+ all_shape_ious = []
+ for cat in shape_ious.keys():
+ for iou in shape_ious[cat]:
+ all_shape_ious.append(iou)
+ shape_ious[cat] = np.mean(shape_ious[cat])
+ mean_shape_ious = np.mean(list(shape_ious.values()))
+ test_metrics['accuracy'] = total_correct / float(total_seen)
+        test_metrics['class_avg_accuracy'] = np.mean(
+            np.array(total_correct_class) / np.array(total_seen_class, dtype=np.float64))
+ for cat in sorted(shape_ious.keys()):
+ print('eval mIoU of %s %f' % (cat + ' ' * (14 - len(cat)), shape_ious[cat]))
+ test_metrics['class_avg_iou'] = mean_shape_ious
+        test_metrics['instance_avg_iou'] = np.mean(all_shape_ious)
+
+        print('Accuracy is: %.5f' % test_metrics['accuracy'])
+        print('Class avg accuracy is: %.5f' % test_metrics['class_avg_accuracy'])
+        print('Class avg mIOU is: %.5f' % test_metrics['class_avg_iou'])
+        print('Instance avg mIOU is: %.5f' % test_metrics['instance_avg_iou'])
+
+
+if __name__ == '__main__':
+ args = parse_args()
+ main(args)
diff --git a/zoo/OcCo/OcCo_Torch/train.py b/zoo/OcCo/OcCo_Torch/train.py
new file mode 100644
index 0000000..2548b6d
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/train.py
@@ -0,0 +1,303 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/master/train_partseg.py
+
+import os, sys, torch, shutil, importlib, argparse, numpy as np
+sys.path.append('utils')
+sys.path.append('models')
+from utils.PC_Augmentation import random_scale_point_cloud, random_shift_point_cloud
+from utils.Torch_Utility import copy_parameters, weights_init, bn_momentum_adjust
+from torch.optim.lr_scheduler import StepLR, CosineAnnealingLR
+from torch.utils.tensorboard import SummaryWriter
+from utils.ShapeNetDataLoader import ShapeNetPart
+from torch.utils.data import DataLoader
+from utils.TrainLogger import TrainLogger
+from tqdm import tqdm
+
+
+seg_classes = {
+ 'Earphone': [16, 17, 18],
+ 'Motorbike': [30, 31, 32, 33, 34, 35],
+ 'Rocket': [41, 42, 43],
+ 'Car': [8, 9, 10, 11],
+ 'Laptop': [28, 29],
+ 'Cap': [6, 7],
+ 'Skateboard': [44, 45, 46],
+ 'Mug': [36, 37],
+ 'Guitar': [19, 20, 21],
+ 'Bag': [4, 5],
+ 'Lamp': [24, 25, 26, 27],
+ 'Table': [47, 48, 49],
+ 'Airplane': [0, 1, 2, 3],
+ 'Pistol': [38, 39, 40],
+ 'Chair': [12, 13, 14, 15],
+ 'Knife': [22, 23]
+}
+seg_label_to_cat = {} # {0:Airplane, 1:Airplane, ..., 49:Table}
+for cat in seg_classes.keys():
+ for label in seg_classes[cat]:
+ seg_label_to_cat[label] = cat
+
+
+def parse_args():
+ parser = argparse.ArgumentParser('Model')
+ parser.add_argument('--log_dir', type=str, help='log folder [default: ]')
+ parser.add_argument('--gpu', type=str, default='0', help='GPU [default: 0]')
+ parser.add_argument('--mode', type=str, default='train', help='train or test')
+    parser.add_argument('--epoch', default=250, type=int, help='epochs [default: 250]')
+    parser.add_argument('--batch_size', type=int, default=32, help='batch size [default: 32]')
+ parser.add_argument('--lr', default=0.001, type=float, help='learning rate [default: 0.001]')
+ parser.add_argument('--momentum', type=float, default=0.9, help='SGD momentum [default: 0.9]')
+ parser.add_argument('--restore_path', type=str, help='path to pretrained weights [default: ]')
+ parser.add_argument('--lr_decay', type=float, default=0.5, help='lr decay rate [default: 0.5]')
+ parser.add_argument('--num_point', type=int, default=2048, help='point number [default: 2048]')
+ parser.add_argument('--restore', action='store_true', help='using pre-trained [default: False]')
+ parser.add_argument('--use_sgd', action='store_true', help='use SGD optimiser [default: False]')
+ parser.add_argument('--data_aug', action='store_true', help='data augmentation [default: False]')
+ parser.add_argument('--scheduler', default='step', help='learning rate scheduler [default: step]')
+ parser.add_argument('--model', default='pointnet_partseg', help='model [default: pointnet_partseg]')
+ parser.add_argument('--dropout', type=float, default=0.5, help='dropout rate in FCs [default: 0.5]')
+    parser.add_argument('--bn_decay', action='store_true', help='use BN momentum decay [default: False]')
+ parser.add_argument('--xavier_init', action='store_true', help='Xavier weight init [default: False]')
+ parser.add_argument('--emb_dims', type=int, default=1024, help='embedding dimensions [default: 1024]')
+ parser.add_argument('--k', type=int, default=20, help='num of nearest neighbors to use [default: 20]')
+ parser.add_argument('--normal', action='store_true', default=False, help='use normal [default: False]')
+ parser.add_argument('--step_size', type=int, default=20, help='lr decay step [default: every 20 epochs]')
+ parser.add_argument('--num_votes', type=int, default=3, help='aggregate test predictions via vote [default: 3]')
+
+ return parser.parse_args()
+
+
+def main(args):
+ os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
+ os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
+
+ def to_categorical(y, num_class):
+ """ 1-hot encodes a tensor """
+ new_y = torch.eye(num_class)[y.cpu().data.numpy(), ]
+ if y.is_cuda:
+ return new_y.cuda()
+ return new_y
+
+ ''' === Set up Loggers and Load Data === '''
+ MyLogger = TrainLogger(args, name=args.model.upper(), subfold='partseg', filename=args.mode + '_log', cls2name=seg_label_to_cat)
+ writer = SummaryWriter(os.path.join(MyLogger.experiment_dir, 'runs'))
+
+    TRAIN_DATASET = ShapeNetPart(partition='trainval', num_points=2048, class_choice=None)
+    trainDataLoader = torch.utils.data.DataLoader(TRAIN_DATASET, batch_size=args.batch_size, shuffle=True, num_workers=10, pin_memory=True, drop_last=True)
+
+    TEST_DATASET = ShapeNetPart(partition='test', num_points=2048, class_choice=None)
+    testDataLoader = torch.utils.data.DataLoader(TEST_DATASET, batch_size=16, shuffle=False, num_workers=10, pin_memory=True, drop_last=False)
+
+ num_classes, num_part = 16, 50
+
+ ''' === Load Model and Backup Scripts === '''
+ channel_num = 6 if args.normal else 3
+ MODEL = importlib.import_module(args.model)
+ shutil.copy(os.path.abspath(__file__), MyLogger.log_dir)
+ shutil.copy('./models/%s.py' % args.model, MyLogger.log_dir)
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ classifier = MODEL.get_model(part_num=num_part, num_channel=channel_num, args=args).cuda().to(device)
+ criterion = MODEL.get_loss().to(device)
+ classifier = torch.nn.DataParallel(classifier)
+
+ if args.restore:
+ checkpoint = torch.load(args.restore_path)
+ classifier = copy_parameters(classifier, checkpoint, verbose=True)
+ MyLogger.logger.info('Use pre-trained weights from %s' % args.restore_path)
+ else:
+ MyLogger.logger.info('No pre-trained weights, start training from scratch...')
+ if args.xavier_init:
+ classifier = classifier.apply(weights_init)
+ MyLogger.logger.info("Using Xavier weight initialisation")
+
+ if args.mode == 'test':
+ MyLogger.logger.info('\n\n')
+ MyLogger.logger.info('=' * 33)
+        MyLogger.logger.info('load parameters from %s' % args.restore_path)
+ with torch.no_grad():
+ test_metrics = {}
+ total_correct, total_seen = 0, 0
+ total_seen_class = [0 for _ in range(num_part)]
+ total_correct_class = [0 for _ in range(num_part)]
+ shape_ious = {cat: [] for cat in seg_classes.keys()} # {shape: []}
+
+ for points, label, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ classifier.eval()
+ cur_batch_size, num_point, _ = points.size()
+ vote_pool = torch.zeros(cur_batch_size, num_point, num_part).cuda() # (batch, num point, num part)
+ points, label, target = points.transpose(2, 1).float().cuda(), label.long().cuda(), target.numpy()
+
+ ''' === generate predictions from raw output (multiple via voting) === '''
+ for _ in range(args.num_votes):
+ if args.model == 'pointnet_partseg':
+ seg_pred, _ = classifier(points, to_categorical(label, num_classes))
+ else:
+ seg_pred = classifier(points, to_categorical(label, num_classes))
+ vote_pool += seg_pred # added on probability
+
+ seg_pred = vote_pool / args.num_votes
+ cur_pred_val_logits = seg_pred.cpu().data.numpy()
+ cur_pred_val = np.zeros((cur_batch_size, num_point)).astype(np.int32)
+
+ for i in range(cur_batch_size):
+ cat = seg_label_to_cat[target[i, 0]] # str, shape name
+ logits = cur_pred_val_logits[i, :, :] # array, (num point, num part)
+ cur_pred_val[i, :] = np.argmax(logits[:, seg_classes[cat]], 1) + seg_classes[cat][0]
+ # only consider parts from that shape
+
+ ''' === calculate accuracy === '''
+ total_correct += np.sum(cur_pred_val == target)
+ total_seen += (cur_batch_size * num_point)
+
+ for l in range(num_part):
+ total_seen_class[l] += np.sum(target == l)
+ total_correct_class[l] += (np.sum((cur_pred_val == l) & (target == l)))
+
+ ''' === calculate iou === '''
+ for i in range(cur_batch_size):
+ segl = target[i, :] # array, (num point, )
+ segp = cur_pred_val[i, :] # array, (num point, )
+ cat = seg_label_to_cat[segl[0]] # str, shape name
+ part_ious = [0. for _ in range(len(seg_classes[cat]))] # parts belong to that shape
+
+ for l in seg_classes[cat]:
+ if (np.sum(segl == l) == 0) and (np.sum(segp == l) == 0): # no prediction or gt
+ part_ious[l - seg_classes[cat][0]] = 1.0
+ else:
+ iou = np.sum((segl == l) & (segp == l)) / float(np.sum((segl == l) | (segp == l)))
+ part_ious[l - seg_classes[cat][0]] = iou
+ shape_ious[cat].append(np.mean(part_ious))
+
+ all_shape_ious = []
+ for cat in shape_ious.keys():
+ for iou in shape_ious[cat]:
+ all_shape_ious.append(iou)
+ shape_ious[cat] = np.mean(shape_ious[cat])
+
+ mean_shape_ious = np.mean(list(shape_ious.values()))
+ test_metrics['class_avg_iou'] = mean_shape_ious
+ test_metrics['instance_avg_iou'] = np.mean(all_shape_ious)
+ test_metrics['accuracy'] = total_correct / float(total_seen)
+            test_metrics['class_avg_accuracy'] = np.mean(
+                np.array(total_correct_class) / np.array(total_seen_class, dtype=np.float64))
+ for cat in sorted(shape_ious.keys()):
+ MyLogger.logger.info('test mIoU of %-14s %f' % (cat, shape_ious[cat]))
+
+ MyLogger.logger.info('Accuracy is: %.5f' % test_metrics['accuracy'])
+ MyLogger.logger.info('Class avg accuracy is: %.5f' % test_metrics['class_avg_accuracy'])
+ MyLogger.logger.info('Class avg mIoU is: %.5f' % test_metrics['class_avg_iou'])
+ MyLogger.logger.info('Instance avg mIoU is: %.5f' % test_metrics['instance_avg_iou'])
+ sys.exit("Test Finished")
+
+ if not args.use_sgd:
+ optimizer = torch.optim.Adam(
+ classifier.parameters(),
+ lr=args.lr,
+ betas=(0.9, 0.999),
+ eps=1e-08,
+ weight_decay=1e-4)
+ else:
+ optimizer = torch.optim.SGD(classifier.parameters(),
+ lr=args.lr * 100,
+ momentum=args.momentum,
+ weight_decay=1e-4)
+    if args.scheduler == 'step':
+ scheduler = StepLR(optimizer, step_size=args.step_size, gamma=args.lr_decay)
+ else:
+ scheduler = CosineAnnealingLR(optimizer, T_max=args.epoch, eta_min=1e-3)
+
+ LEARNING_RATE_CLIP = 1e-5
+ MOMENTUM_ORIGINAL = 0.1
+ MOMENTUM_DECAY = 0.5
+ MOMENTUM_DECAY_STEP = args.step_size
+
+ for epoch in range(MyLogger.epoch, args.epoch + 1):
+
+ MyLogger.epoch_init()
+
+ for points, label, target in tqdm(trainDataLoader, total=len(trainDataLoader), smoothing=0.9):
+
+ if args.data_aug:
+ points = points.data.numpy()
+ points[:, :, :3] = random_scale_point_cloud(points[:, :, 0:3])
+ points[:, :, :3] = random_shift_point_cloud(points[:, :, 0:3])
+ points = torch.Tensor(points)
+
+ points, label, target = points.transpose(2, 1).float().cuda(), label.long().cuda(), \
+ target.view(-1, 1)[:, 0].long().cuda()
+ classifier.train()
+ optimizer.zero_grad()
+ if args.model == 'pointnet_partseg':
+ seg_pred, trans_feat = classifier(points, to_categorical(label, num_classes))
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ loss = criterion(seg_pred, target, trans_feat)
+ else:
+ seg_pred = classifier(points, to_categorical(label, num_classes))
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ loss = criterion(seg_pred, target)
+
+ loss.backward()
+ optimizer.step()
+ MyLogger.step_update(seg_pred.data.max(1)[1].cpu().numpy(), target.long().cpu().numpy(), loss.cpu().detach().numpy())
+ MyLogger.epoch_summary(writer=writer, training=True, mode='partseg')
+
+ '''=== Evaluating ==='''
+ with torch.no_grad():
+
+ classifier.eval()
+ MyLogger.epoch_init(training=False)
+
+ for points, label, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ cur_batch_size, NUM_POINT, _ = points.size()
+ points, label, target = points.transpose(2, 1).float().cuda(), label.long().cuda(), \
+ target.view(-1, 1)[:, 0].long().cuda()
+ if args.model == 'pointnet_partseg':
+ seg_pred, trans_feat = classifier(points, to_categorical(label, num_classes))
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ loss = criterion(seg_pred, target, trans_feat)
+ else:
+ seg_pred = classifier(points, to_categorical(label, num_classes))
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ loss = criterion(seg_pred, target)
+
+ MyLogger.step_update(seg_pred.data.max(1)[1].cpu().numpy(), target.long().cpu().numpy(), loss.cpu().detach().numpy())
+
+ MyLogger.epoch_summary(writer=writer, training=False, mode='partseg')
+
+ if MyLogger.save_model:
+ state = {
+ 'step': MyLogger.step,
+ 'miou': MyLogger.best_miou,
+ 'epoch': MyLogger.best_miou_epoch,
+ 'model_state_dict': classifier.state_dict(),
+ 'optimizer_state_dict': optimizer.state_dict()}
+ torch.save(state, MyLogger.savepath)
+
+ if epoch % 5 == 0:
+ state = {
+ 'model_state_dict': classifier.state_dict(),
+ 'optimizer_state_dict': optimizer.state_dict()}
+ torch.save(state, MyLogger.savepath.replace('best_model', 'model_ep%d' % epoch))
+
+ scheduler.step()
+        if args.scheduler == 'step':
+            for param_group in optimizer.param_groups:
+                if param_group['lr'] < LEARNING_RATE_CLIP:
+                    param_group['lr'] = LEARNING_RATE_CLIP
+ if args.bn_decay:
+ momentum = MOMENTUM_ORIGINAL * (MOMENTUM_DECAY ** (epoch // MOMENTUM_DECAY_STEP))
+ if momentum < 0.01:
+ momentum = 0.01
+ print('BN momentum updated to: %f' % momentum)
+ classifier = classifier.apply(lambda x: bn_momentum_adjust(x, momentum))
+
+
+if __name__ == '__main__':
+ args = parse_args()
+ main(args)
diff --git a/zoo/OcCo/OcCo_Torch/train_dgcnn.sh b/zoo/OcCo/OcCo_Torch/train_dgcnn.sh
new file mode 100644
index 0000000..9b02afc
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/train_dgcnn.sh
@@ -0,0 +1,10 @@
+python train.py \
+ --gpu 1,5,6,7 \
+ --use_sgd \
+ --xavier_init \
+ --scheduler cos \
+ --model dgcnn_partseg \
+ --log_dir occo_dgcnn \
+ --batch_size 16 \
+ --restore \
+ --restore_path /mnt/lustre/ldkong/models/OcCo/OcCo_Torch/pretrain/dgcnn_occo_seg.pth
\ No newline at end of file
diff --git a/zoo/OcCo/OcCo_Torch/train_partseg.py b/zoo/OcCo/OcCo_Torch/train_partseg.py
new file mode 100644
index 0000000..71375bf
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/train_partseg.py
@@ -0,0 +1,300 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/master/train_partseg.py
+
+import os, sys, torch, shutil, importlib, argparse, numpy as np
+sys.path.append('utils')
+sys.path.append('models')
+from PC_Augmentation import random_scale_point_cloud, random_shift_point_cloud
+from Torch_Utility import copy_parameters, weights_init, bn_momentum_adjust
+from torch.optim.lr_scheduler import StepLR, CosineAnnealingLR
+from torch.utils.tensorboard import SummaryWriter
+from ShapeNetDataLoader import PartNormalDataset
+from torch.utils.data import DataLoader
+from TrainLogger import TrainLogger
+from tqdm import tqdm
+
+
+seg_classes = {'Earphone': [16, 17, 18],
+ 'Motorbike': [30, 31, 32, 33, 34, 35],
+ 'Rocket': [41, 42, 43],
+ 'Car': [8, 9, 10, 11],
+ 'Laptop': [28, 29],
+ 'Cap': [6, 7],
+ 'Skateboard': [44, 45, 46],
+ 'Mug': [36, 37],
+ 'Guitar': [19, 20, 21],
+ 'Bag': [4, 5],
+ 'Lamp': [24, 25, 26, 27],
+ 'Table': [47, 48, 49],
+ 'Airplane': [0, 1, 2, 3],
+ 'Pistol': [38, 39, 40],
+ 'Chair': [12, 13, 14, 15],
+ 'Knife': [22, 23]}
+seg_label_to_cat = {} # {0:Airplane, 1:Airplane, ..., 49:Table}
+for cat in seg_classes.keys():
+ for label in seg_classes[cat]:
+ seg_label_to_cat[label] = cat
+
+
+def parse_args():
+ parser = argparse.ArgumentParser('Model')
+ parser.add_argument('--log_dir', type=str, help='log folder [default: ]')
+ parser.add_argument('--gpu', type=str, default='0', help='GPU [default: 0]')
+ parser.add_argument('--mode', type=str, default='train', help='train or test')
+    parser.add_argument('--epoch', default=250, type=int, help='epochs [default: 250]')
+ parser.add_argument('--batch_size', type=int, default=16, help='batch size [default: 16]')
+ parser.add_argument('--lr', default=0.001, type=float, help='learning rate [default: 0.001]')
+ parser.add_argument('--momentum', type=float, default=0.9, help='SGD momentum [default: 0.9]')
+ parser.add_argument('--restore_path', type=str, help='path to pretrained weights [default: ]')
+ parser.add_argument('--lr_decay', type=float, default=0.5, help='lr decay rate [default: 0.5]')
+ parser.add_argument('--num_point', type=int, default=2048, help='point number [default: 2048]')
+ parser.add_argument('--restore', action='store_true', help='using pre-trained [default: False]')
+ parser.add_argument('--use_sgd', action='store_true', help='use SGD optimiser [default: False]')
+ parser.add_argument('--data_aug', action='store_true', help='data augmentation [default: False]')
+ parser.add_argument('--scheduler', default='step', help='learning rate scheduler [default: step]')
+ parser.add_argument('--model', default='pointnet_partseg', help='model [default: pointnet_partseg]')
+ parser.add_argument('--dropout', type=float, default=0.5, help='dropout rate in FCs [default: 0.5]')
+    parser.add_argument('--bn_decay', action='store_true', help='use BN momentum decay [default: False]')
+ parser.add_argument('--xavier_init', action='store_true', help='Xavier weight init [default: False]')
+ parser.add_argument('--emb_dims', type=int, default=1024, help='embedding dimensions [default: 1024]')
+ parser.add_argument('--k', type=int, default=20, help='num of nearest neighbors to use [default: 20]')
+ parser.add_argument('--normal', action='store_true', default=False, help='use normal [default: False]')
+ parser.add_argument('--step_size', type=int, default=20, help='lr decay step [default: every 20 epochs]')
+ parser.add_argument('--num_votes', type=int, default=3, help='aggregate test predictions via vote [default: 3]')
+
+ return parser.parse_args()
+
+
+def main(args):
+ os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
+ os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
+
+ def to_categorical(y, num_class):
+ """ 1-hot encodes a tensor """
+ new_y = torch.eye(num_class)[y.cpu().data.numpy(), ]
+ if y.is_cuda:
+ return new_y.cuda()
+ return new_y
+
+ ''' === Set up Loggers and Load Data === '''
+ MyLogger = TrainLogger(args, name=args.model.upper(), subfold='partseg',
+ filename=args.mode + '_log', cls2name=seg_label_to_cat)
+ writer = SummaryWriter(os.path.join(MyLogger.experiment_dir, 'runs'))
+ root = 'data/shapenetcore_partanno_segmentation_benchmark_v0_normal/'
+
+ TRAIN_DATASET = PartNormalDataset(root=root, num_point=args.num_point, split='trainval', use_normal=args.normal)
+ TEST_DATASET = PartNormalDataset(root=root, num_point=args.num_point, split='test', use_normal=args.normal)
+ trainDataLoader = DataLoader(TRAIN_DATASET, batch_size=args.batch_size, shuffle=True, num_workers=4)
+ testDataLoader = DataLoader(TEST_DATASET, batch_size=args.batch_size, shuffle=False, num_workers=4)
+
+ num_classes, num_part = 16, 50
+
+ ''' === Load Model and Backup Scripts === '''
+ channel_num = 6 if args.normal else 3
+ MODEL = importlib.import_module(args.model)
+ shutil.copy(os.path.abspath(__file__), MyLogger.log_dir)
+ shutil.copy('./models/%s.py' % args.model, MyLogger.log_dir)
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ classifier = MODEL.get_model(part_num=num_part, num_channel=channel_num, args=args).cuda().to(device)
+ criterion = MODEL.get_loss().to(device)
+ classifier = torch.nn.DataParallel(classifier)
+
+ if args.restore:
+ checkpoint = torch.load(args.restore_path)
+ classifier = copy_parameters(classifier, checkpoint, verbose=True)
+ MyLogger.logger.info('Use pre-trained weights from %s' % args.restore_path)
+ else:
+ MyLogger.logger.info('No pre-trained weights, start training from scratch...')
+ if args.xavier_init:
+ classifier = classifier.apply(weights_init)
+ MyLogger.logger.info("Using Xavier weight initialisation")
+
+ if args.mode == 'test':
+ MyLogger.logger.info('\n\n')
+ MyLogger.logger.info('=' * 33)
+        MyLogger.logger.info('load parameters from %s' % args.restore_path)
+ with torch.no_grad():
+ test_metrics = {}
+ total_correct, total_seen = 0, 0
+ total_seen_class = [0 for _ in range(num_part)]
+ total_correct_class = [0 for _ in range(num_part)]
+ shape_ious = {cat: [] for cat in seg_classes.keys()} # {shape: []}
+
+ for points, label, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ classifier.eval()
+ cur_batch_size, num_point, _ = points.size()
+ vote_pool = torch.zeros(cur_batch_size, num_point, num_part).cuda() # (batch, num point, num part)
+ points, label, target = points.transpose(2, 1).float().cuda(), label.long().cuda(), target.numpy()
+
+ ''' === generate predictions from raw output (multiple via voting) === '''
+ for _ in range(args.num_votes):
+ if args.model == 'pointnet_partseg':
+ seg_pred, _ = classifier(points, to_categorical(label, num_classes))
+ else:
+ seg_pred = classifier(points, to_categorical(label, num_classes))
+ vote_pool += seg_pred # added on probability
+
+ seg_pred = vote_pool / args.num_votes
+ cur_pred_val_logits = seg_pred.cpu().data.numpy()
+ cur_pred_val = np.zeros((cur_batch_size, num_point)).astype(np.int32)
+
+ for i in range(cur_batch_size):
+ cat = seg_label_to_cat[target[i, 0]] # str, shape name
+ logits = cur_pred_val_logits[i, :, :] # array, (num point, num part)
+ cur_pred_val[i, :] = np.argmax(logits[:, seg_classes[cat]], 1) + seg_classes[cat][0]
+ # only consider parts from that shape
+
+ ''' === calculate accuracy === '''
+ total_correct += np.sum(cur_pred_val == target)
+ total_seen += (cur_batch_size * num_point)
+
+ for l in range(num_part):
+ total_seen_class[l] += np.sum(target == l)
+ total_correct_class[l] += (np.sum((cur_pred_val == l) & (target == l)))
+
+ ''' === calculate iou === '''
+ for i in range(cur_batch_size):
+ segl = target[i, :] # array, (num point, )
+ segp = cur_pred_val[i, :] # array, (num point, )
+ cat = seg_label_to_cat[segl[0]] # str, shape name
+ part_ious = [0. for _ in range(len(seg_classes[cat]))] # parts belong to that shape
+
+ for l in seg_classes[cat]:
+ if (np.sum(segl == l) == 0) and (np.sum(segp == l) == 0): # no prediction or gt
+ part_ious[l - seg_classes[cat][0]] = 1.0
+ else:
+ iou = np.sum((segl == l) & (segp == l)) / float(np.sum((segl == l) | (segp == l)))
+ part_ious[l - seg_classes[cat][0]] = iou
+ shape_ious[cat].append(np.mean(part_ious))
+
+ all_shape_ious = []
+ for cat in shape_ious.keys():
+ for iou in shape_ious[cat]:
+ all_shape_ious.append(iou)
+ shape_ious[cat] = np.mean(shape_ious[cat])
+
+ mean_shape_ious = np.mean(list(shape_ious.values()))
+ test_metrics['class_avg_iou'] = mean_shape_ious
+ test_metrics['instance_avg_iou'] = np.mean(all_shape_ious)
+ test_metrics['accuracy'] = total_correct / float(total_seen)
+        test_metrics['class_avg_accuracy'] = np.mean(
+            np.array(total_correct_class) / np.array(total_seen_class, dtype=np.float64))
+ for cat in sorted(shape_ious.keys()):
+ MyLogger.logger.info('test mIoU of %-14s %f' % (cat, shape_ious[cat]))
+
+ MyLogger.logger.info('Accuracy is: %.5f' % test_metrics['accuracy'])
+ MyLogger.logger.info('Class avg accuracy is: %.5f' % test_metrics['class_avg_accuracy'])
+ MyLogger.logger.info('Class avg mIoU is: %.5f' % test_metrics['class_avg_iou'])
+ MyLogger.logger.info('Instance avg mIoU is: %.5f' % test_metrics['instance_avg_iou'])
+ sys.exit("Test Finished")
+
+ if not args.use_sgd:
+ optimizer = torch.optim.Adam(
+ classifier.parameters(),
+ lr=args.lr,
+ betas=(0.9, 0.999),
+ eps=1e-08,
+ weight_decay=1e-4)
+ else:
+ optimizer = torch.optim.SGD(classifier.parameters(),
+ lr=args.lr * 100,
+ momentum=args.momentum,
+ weight_decay=1e-4)
+    if args.scheduler == 'step':
+ scheduler = StepLR(optimizer, step_size=args.step_size, gamma=args.lr_decay)
+ else:
+ scheduler = CosineAnnealingLR(optimizer, T_max=args.epoch, eta_min=1e-3)
+
+ LEARNING_RATE_CLIP = 1e-5
+ MOMENTUM_ORIGINAL = 0.1
+ MOMENTUM_DECAY = 0.5
+ MOMENTUM_DECAY_STEP = args.step_size
+
+ for epoch in range(MyLogger.epoch, args.epoch + 1):
+
+ MyLogger.epoch_init()
+
+ for points, label, target in tqdm(trainDataLoader, total=len(trainDataLoader), smoothing=0.9):
+
+ if args.data_aug:
+ points = points.data.numpy()
+ points[:, :, :3] = random_scale_point_cloud(points[:, :, 0:3])
+ points[:, :, :3] = random_shift_point_cloud(points[:, :, 0:3])
+ points = torch.Tensor(points)
+
+ points, label, target = points.transpose(2, 1).float().cuda(), label.long().cuda(), \
+ target.view(-1, 1)[:, 0].long().cuda()
+ classifier.train()
+ optimizer.zero_grad()
+ if args.model == 'pointnet_partseg':
+ seg_pred, trans_feat = classifier(points, to_categorical(label, num_classes))
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ loss = criterion(seg_pred, target, trans_feat)
+ else:
+ seg_pred = classifier(points, to_categorical(label, num_classes))
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ loss = criterion(seg_pred, target)
+
+ loss.backward()
+ optimizer.step()
+ MyLogger.step_update(seg_pred.data.max(1)[1].cpu().numpy(),
+ target.long().cpu().numpy(),
+ loss.cpu().detach().numpy())
+ MyLogger.epoch_summary(writer=writer, training=True, mode='partseg')
+
+ '''=== Evaluating ==='''
+ with torch.no_grad():
+
+ classifier.eval()
+ MyLogger.epoch_init(training=False)
+
+ for points, label, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ cur_batch_size, NUM_POINT, _ = points.size()
+ points, label, target = points.transpose(2, 1).float().cuda(), label.long().cuda(), \
+ target.view(-1, 1)[:, 0].long().cuda()
+ if args.model == 'pointnet_partseg':
+ seg_pred, trans_feat = classifier(points, to_categorical(label, num_classes))
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ loss = criterion(seg_pred, target, trans_feat)
+ else:
+ seg_pred = classifier(points, to_categorical(label, num_classes))
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ loss = criterion(seg_pred, target)
+
+ MyLogger.step_update(seg_pred.data.max(1)[1].cpu().numpy(),
+ target.long().cpu().numpy(),
+ loss.cpu().detach().numpy())
+
+ MyLogger.epoch_summary(writer=writer, training=False, mode='partseg')
+
+ if MyLogger.save_model:
+ state = {
+ 'step': MyLogger.step,
+ 'miou': MyLogger.best_miou,
+ 'epoch': MyLogger.best_miou_epoch,
+ 'model_state_dict': classifier.state_dict(),
+ 'optimizer_state_dict': optimizer.state_dict()}
+ torch.save(state, MyLogger.savepath)
+
+ if epoch % 5 == 0:
+ state = {
+ 'model_state_dict': classifier.state_dict(),
+ 'optimizer_state_dict': optimizer.state_dict()}
+ torch.save(state, MyLogger.savepath.replace('best_model', 'model_ep%d' % epoch))
+
+ scheduler.step()
+        if args.scheduler == 'step':
+            for param_group in optimizer.param_groups:
+                if param_group['lr'] < LEARNING_RATE_CLIP:
+                    param_group['lr'] = LEARNING_RATE_CLIP
+ if args.bn_decay:
+ momentum = MOMENTUM_ORIGINAL * (MOMENTUM_DECAY ** (epoch // MOMENTUM_DECAY_STEP))
+ if momentum < 0.01:
+ momentum = 0.01
+ print('BN momentum updated to: %f' % momentum)
+ classifier = classifier.apply(lambda x: bn_momentum_adjust(x, momentum))
+
+
+if __name__ == '__main__':
+ args = parse_args()
+ main(args)
diff --git a/zoo/OcCo/OcCo_Torch/train_pcn.sh b/zoo/OcCo/OcCo_Torch/train_pcn.sh
new file mode 100644
index 0000000..94cc778
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/train_pcn.sh
@@ -0,0 +1,11 @@
+python train.py \
+ --gpu 0,4 \
+ --bn_decay \
+ --xavier_init \
+ --scheduler cos \
+ --model pcn_partseg \
+ --batch_size 16 \
+ --epoch 300 \
+ --log_dir occo_pcn \
+ --restore \
+ --restore_path /mnt/lustre/ldkong/models/OcCo/OcCo_Torch/pretrain/pcn_occo_seg.pth
\ No newline at end of file
diff --git a/zoo/OcCo/OcCo_Torch/train_pointnet.sh b/zoo/OcCo/OcCo_Torch/train_pointnet.sh
new file mode 100644
index 0000000..c125282
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/train_pointnet.sh
@@ -0,0 +1,11 @@
+python train.py \
+ --gpu 0,6 \
+ --bn_decay \
+ --xavier_init \
+ --scheduler cos \
+ --model pointnet_partseg \
+ --batch_size 32 \
+ --epoch 300 \
+ --log_dir occo_pointnet_run2 \
+ --restore \
+ --restore_path /mnt/lustre/ldkong/models/OcCo/OcCo_Torch/pretrain/pointnet_occo_seg.pth
\ No newline at end of file
diff --git a/zoo/OcCo/OcCo_Torch/utils/3DPC_Data_Gen.py b/zoo/OcCo/OcCo_Torch/utils/3DPC_Data_Gen.py
new file mode 100644
index 0000000..3b83836
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/3DPC_Data_Gen.py
@@ -0,0 +1,87 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+# Generating Training Data of 3D Point Cloud for 3D Jigsaw Puzzles
+
+import os, h5py, numpy as np
+
+'''
+The 3D object/block is split into voxels along axes,
+each point is assigned with a voxel label.
+'''
+
+
+def pc_ssl_3djigsaw_gen(pc_xyz, k=2, edge_len=1):
+ """
+ :param pc_xyz: point cloud, (n_point, 3 + additional feature)
+ :param k: number of voxels along each axis
+ :param edge_len: length of voxel (cube) edge
+ :return: permuted pc, labels
+ """
+ intervals = [edge_len*2 / k * x - edge_len for x in np.arange(k + 1)]
+ assert edge_len >= pc_xyz.__abs__().max()
+ indices = np.searchsorted(intervals, pc_xyz, side='left') - 1
+ label = indices[:, 0] * k ** 2 + indices[:, 1] * k + indices[:, 2]
+
+ shuffle_indices = np.arange(k ** 3)
+ np.random.shuffle(shuffle_indices)
+ shuffled_dict = dict()
+ for i, d in enumerate(shuffle_indices):
+ shuffled_dict[i] = d
+
+    def numberToBase(n, base=k):
+        """convert n to its base-`base` representation, returned as concatenated digits"""
+        if n == 0:
+            return 0
+        digits = []
+        while n:
+            digits.append(str(int(n % base)))
+            n //= base
+        return int("".join(digits[::-1]))
+
+    for voxel_id in range(k ** 3):
+        selected_points = (label == voxel_id)
+        loc = shuffled_dict[voxel_id]  # destination voxel for this source voxel
+        # decompose a voxel id as (x, y, z) = (id // k**2, (id // k) % k, id % k),
+        # then translate the points by the offset between destination and source voxel centres
+        center_diff = np.array([(loc // k ** 2) - (voxel_id // k ** 2),
+                                (loc // k) % k - (voxel_id // k) % k,
+                                loc % k - voxel_id % k]) * (2 * edge_len) / k
+        pc_xyz[selected_points] = pc_xyz[selected_points] + center_diff
+
+ return pc_xyz, label
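+
+# Illustrative usage (array shapes assumed): shuffle one normalised cloud of 1024
+# points lying inside the unit cube, obtaining the permuted cloud and the per-point
+# source-voxel labels in [0, k**3):
+#   permuted_pc, voxel_labels = pc_ssl_3djigsaw_gen(pc_xyz, k=2, edge_len=1)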
+
+
+if __name__ == "__main__":
+ root_dir = r'./data/modelnet40_ply_hdf5_2048'
+ dir_path = r'./data/modelnet40_ply_hdf5_2048/jigsaw/k2'
+    os.makedirs(dir_path, exist_ok=True)
+
+ TRAIN_FILES = [item.strip() for item in open(os.path.join(root_dir, 'train_files.txt')).readlines()]
+ VALID_FILES = [item.strip() for item in open(os.path.join(root_dir, 'test_files.txt')).readlines()]
+
+
+ def loadh5DataFile(PathtoFile):
+ f = h5py.File(PathtoFile, 'r')
+ return f['data'][:], f['label'][:]
+
+
+ def reduce2fix(pc, n_points=1024):
+ indices = np.arange(len(pc))
+ np.random.shuffle(indices)
+ return pc[indices[:n_points]]
+
+
+ for file_ in VALID_FILES:
+ filename = file_.split('/')[-1]
+ print(filename)
+ data, _ = loadh5DataFile(file_)
+        # subsample every point cloud to 1024 points
+ data = np.apply_along_axis(reduce2fix, axis=1, arr=data)
+ shuffled_data = np.zeros_like(data)
+ shuffled_label = np.zeros((data.shape[0], data.shape[1]))
+ for idx, pc_xyz in enumerate(data):
+ pc_xyz, label = pc_ssl_3djigsaw_gen(pc_xyz, k=2, edge_len=1)
+ shuffled_data[idx] = pc_xyz
+ shuffled_label[idx] = label
+ hf = h5py.File(os.path.join(dir_path, filename), 'w')
+
+ hf.create_dataset('label', data=shuffled_label)
+ hf.create_dataset('data', data=shuffled_data)
+ hf.close()
diff --git a/zoo/OcCo/OcCo_Torch/utils/Dataset_Loc.py b/zoo/OcCo/OcCo_Torch/utils/Dataset_Loc.py
new file mode 100644
index 0000000..8a54760
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/Dataset_Loc.py
@@ -0,0 +1,60 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
+# Modify the path w.r.t your own settings
+
+
+def Dataset_Loc(dataset, fname, partial=True, bn=False, few_shot=False):
+ def fetch_files(filelist):
+ return [item.strip() for item in open(filelist).readlines()]
+
+ dataset = dataset.lower()
+
+ if dataset == 'shapenet8':
+ NUM_CLASSES = 8
+ if partial:
+ TRAIN_FILES = fetch_files('./data/shapenet/hdf5_partial_1024/train_file.txt')
+ VALID_FILES = fetch_files('./data/shapenet/hdf5_partial_1024/valid_file.txt')
+ else:
+ raise ValueError("For ShapeNet we are only interested in the partial objects recognition")
+
+ elif dataset == 'shapenet10':
+ NUM_CLASSES = 10
+ TRAIN_FILES = fetch_files('./data/ShapeNet10/Cleaned/train_file.txt')
+ VALID_FILES = fetch_files('./data/ShapeNet10/Cleaned/test_file.txt')
+
+ # elif dataset == 'modelnet10':
+ # NUM_CLASSES = 10
+ # TRAIN_FILES = fetch_files('./data/ModelNet10/Cleaned/train_file.txt')
+ # VALID_FILES = fetch_files('./data/ModelNet10/Cleaned/test_file.txt')
+
+ elif dataset == 'modelnet40':
+        '''Note: we use the ModelNet40 HDF5 data released with PointNet++'''
+ NUM_CLASSES = 40
+ if partial:
+ TRAIN_FILES = fetch_files('./data/modelnet40_pcn/hdf5_partial_1024/train_file.txt')
+ VALID_FILES = fetch_files('./data/modelnet40_pcn/hdf5_partial_1024/test_file.txt')
+ else:
+ VALID_FILES = fetch_files('./data/modelnet40_ply_hdf5_2048/test_files.txt')
+ if few_shot:
+ TRAIN_FILES = fetch_files('./data/modelnet40_ply_hdf5_2048/few_labels/%s.h5' % fname)
+ else:
+ TRAIN_FILES = fetch_files('./data/modelnet40_ply_hdf5_2048/train_files.txt')
+
+ elif dataset == 'scannet10':
+ NUM_CLASSES = 10
+ TRAIN_FILES = fetch_files('./data/ScanNet10/ScanNet_Cleaned/train_file.txt')
+ VALID_FILES = fetch_files('./data/ScanNet10/ScanNet_Cleaned/test_file.txt')
+
+ elif dataset == 'scanobjectnn':
+ NUM_CLASSES = 15
+ if bn:
+ TRAIN_FILES = ['./data/ScanNetObjectNN/h5_files/main_split/training_objectdataset' + fname + '_1024.h5']
+ VALID_FILES = ['./data/ScanNetObjectNN/h5_files/main_split/test_objectdataset' + fname + '_1024.h5']
+
+ else:
+ TRAIN_FILES = ['./data/ScanNetObjectNN/h5_files/main_split_nobg/training_objectdataset' + fname + '_1024.h5']
+ VALID_FILES = ['./data/ScanNetObjectNN/h5_files/main_split_nobg/test_objectdataset' + fname + '_1024.h5']
+
+ else:
+        raise ValueError('dataset does not exist')
+
+ return NUM_CLASSES, TRAIN_FILES, VALID_FILES
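+
+# Illustrative call (assumes the ModelNet40 file lists above exist locally):
+#   NUM_CLASSES, TRAIN_FILES, VALID_FILES = Dataset_Loc('modelnet40', fname='', partial=False)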
diff --git a/zoo/OcCo/OcCo_Torch/utils/Inference_Timer.py b/zoo/OcCo/OcCo_Torch/utils/Inference_Timer.py
new file mode 100644
index 0000000..dcaed83
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/Inference_Timer.py
@@ -0,0 +1,41 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import os, torch, time, numpy as np
+
+class Inference_Timer:
+ def __init__(self, args):
+ self.args = args
+ self.est_total = []
+        self.use_cpu = (self.args.gpu == 'None')
+ self.device = 'CPU' if self.use_cpu else 'GPU'
+ if self.use_cpu:
+ os.environ['OMP_NUM_THREADS'] = "1"
+ os.environ['MKL_NUM_THREADS'] = "1"
+ print('Now we calculate the inference time on a single CPU')
+ else:
+ print('Now we calculate the inference time on a single GPU')
+ self.args.batch_size, self.args.epoch = 2, 1
+ # 1D BatchNorm requires more than 1 sample to compute std
+ # ref: https://github.com/pytorch/pytorch/issues/7716
+ # otherwise remove the 1D BatchNorm,
+ # since its contribution to the inference is negligible
+ # ref:
+
+ def update_args(self):
+ return self.args
+
+ def single_step(self, model, data):
+ if not self.use_cpu:
+ torch.cuda.synchronize()
+ start = time.time()
+ output = model(data)
+ if not self.use_cpu:
+ torch.cuda.synchronize()
+ end = time.time()
+ self.est_total.append(end - start)
+ return output
+
+ def update_single_epoch(self, logger):
+ logger.info("Model: {}".format(self.args.model))
+ logger.info("Average Inference Time Per Example on Single {}: {:.3f} milliseconds".format(
+ self.device, np.mean(self.est_total)*1000))
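+
+# Usage sketch (names illustrative): wrap each forward pass so only compute time is
+# measured, then report the per-example average:
+#   timer = Inference_Timer(args)
+#   args = timer.update_args()
+#   for data in loader:
+#       pred = timer.single_step(model, data)
+#   timer.update_single_epoch(logger)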
diff --git a/zoo/OcCo/OcCo_Torch/utils/LMDB_DataFlow.py b/zoo/OcCo/OcCo_Torch/utils/LMDB_DataFlow.py
new file mode 100644
index 0000000..edb5750
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/LMDB_DataFlow.py
@@ -0,0 +1,89 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/wentaoyuan/pcn/blob/master/data_util.py
+
+
+import numpy as np
+from tensorpack import dataflow
+
+
+def resample_pcd(pcd, n):
+ """drop or duplicate points so that input of each object has exactly n points"""
+ idx = np.random.permutation(pcd.shape[0])
+ if idx.shape[0] < n:
+ idx = np.concatenate([idx, np.random.randint(pcd.shape[0], size=n-pcd.shape[0])])
+ return pcd[idx[:n]]
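+
+# e.g. resample_pcd(pcd, 1024) crops a larger cloud to 1024 random points and pads a
+# smaller one by re-drawing points with replacement, so every sample has a fixed size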
+
+
+class PreprocessData(dataflow.ProxyDataFlow):
+
+ def __init__(self, ds, input_size, output_size):
+ super(PreprocessData, self).__init__(ds)
+ self.input_size = input_size
+ self.output_size = output_size
+
+    def get_data(self):
+        for sample_id, partial, gt in self.ds.get_data():
+            partial = resample_pcd(partial, self.input_size)
+            gt = resample_pcd(gt, self.output_size)
+            yield sample_id, partial, gt
+
+
+class BatchData(dataflow.ProxyDataFlow):
+ def __init__(self, ds, batch_size, input_size, gt_size, remainder=False, use_list=False):
+ super(BatchData, self).__init__(ds)
+ self.batch_size = batch_size
+ self.input_size = input_size
+ self.gt_size = gt_size
+ self.remainder = remainder
+ self.use_list = use_list
+
+ def __len__(self):
+ """get the number of batches"""
+ ds_size = len(self.ds)
+ div = ds_size // self.batch_size
+ rem = ds_size % self.batch_size
+ if rem == 0:
+ return div
+ return div + int(self.remainder) # int(False) == 0
+
+ def __iter__(self):
+ """generating data in batches"""
+ holder = []
+ for data in self.ds:
+ holder.append(data)
+ if len(holder) == self.batch_size:
+ yield self._aggregate_batch(holder, self.use_list)
+ del holder[:] # reset holder as empty list => holder = []
+ if self.remainder and len(holder) > 0:
+ yield self._aggregate_batch(holder, self.use_list)
+
+ def _aggregate_batch(self, data_holder, use_list=False):
+ """
+ Concatenate input points along the 0-th dimension
+ Stack all other data along the 0-th dimension
+ """
+ ids = np.stack([x[0] for x in data_holder])
+ inputs = [resample_pcd(x[1], self.input_size) if x[1].shape[0] > self.input_size else x[1]
+ for x in data_holder]
+ inputs = np.expand_dims(np.concatenate([x for x in inputs]), 0).astype(np.float32)
+ npts = np.stack([x[1].shape[0] if x[1].shape[0] < self.input_size else self.input_size
+ for x in data_holder]).astype(np.int32)
+ gts = np.stack([resample_pcd(x[2], self.gt_size) for x in data_holder]).astype(np.float32)
+ return ids, inputs, npts, gts
+
+
+def lmdb_dataflow(lmdb_path, batch_size, input_size, output_size, is_training, test_speed=False):
+ """ Load LMDB Files, then Generate Training Batches"""
+ df = dataflow.LMDBSerializer.load(lmdb_path, shuffle=False)
+ size = df.size()
+ if is_training:
+ df = dataflow.LocallyShuffleData(df, buffer_size=2000) # buffer_size
+ df = dataflow.PrefetchData(df, num_prefetch=500, num_proc=1) # multiprocess the data
+ df = BatchData(df, batch_size, input_size, output_size)
+ if is_training:
+ df = dataflow.PrefetchDataZMQ(df, num_proc=8)
+ df = dataflow.RepeatedData(df, -1)
+ if test_speed:
+ dataflow.TestDataSpeed(df, size=1000).start()
+ df.reset_state()
+ return df, size
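+
+# Illustrative call (the LMDB path is a placeholder): build an endless shuffled
+# training stream of (ids, inputs, npts, gts) batches:
+#   df_train, num_train = lmdb_dataflow('data/train.lmdb', batch_size=32,
+#                                       input_size=1024, output_size=16384, is_training=True)
+#   data_gen = df_train.get_data()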
diff --git a/zoo/OcCo/OcCo_Torch/utils/LMDB_Writer.py b/zoo/OcCo/OcCo_Torch/utils/LMDB_Writer.py
new file mode 100644
index 0000000..2c2d64c
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/LMDB_Writer.py
@@ -0,0 +1,52 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hw501@cam.ac.uk
+
+import os, argparse, numpy as np
+from tensorpack import DataFlow, dataflow
+# note: the nested `open3d.open3d` namespace matches older open3d releases;
+# newer versions expose `from open3d.io import read_triangle_mesh, read_point_cloud`
+from open3d.open3d.io import read_triangle_mesh, read_point_cloud
+
+
+def sample_from_mesh(filename, num_samples=16384):
+ pcd = read_triangle_mesh(filename).sample_points_uniformly(number_of_points=num_samples)
+ return np.array(pcd.points)
+
+
+class pcd_df(DataFlow):
+ def __init__(self, model_list, num_scans, partial_dir, complete_dir, num_partial_points=1024):
+ self.model_list = [_file for _file in model_list if 'train' in _file]
+ self.num_scans = num_scans
+ self.partial_dir = partial_dir
+ self.complete_dir = complete_dir
+ self.num_ppoints = num_partial_points
+
+ def size(self):
+ return len(self.model_list) * self.num_scans
+
+ @staticmethod
+ def read_pcd(filename):
+ pcd = read_point_cloud(filename)
+ return np.array(pcd.points)
+
+ def get_data(self):
+ for model_id in self.model_list:
+ complete = sample_from_mesh(os.path.join(self.complete_dir, '%s.obj' % model_id))
+ for i in range(self.num_scans):
+ partial = self.read_pcd(os.path.join(self.partial_dir, model_id + '_%d.pcd' % i))
+ partial = partial[np.random.choice(len(partial), self.num_ppoints)]
+ yield model_id.replace('/', '_'), partial, complete
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--list_path', default=r'../render/ModelNet_flist_normalised.txt')
+ parser.add_argument('--num_scans', type=int, default=10)
+ parser.add_argument('--partial_dir', default=r'../render/dump_modelnet_normalised_supercoarse/pcd')
+ parser.add_argument('--complete_dir', default=r'../data/ModelNet40')
+ parser.add_argument('--output_file', default=r'../data/ModelNet40_train_1024_supercoarse.lmdb')
+ args = parser.parse_args()
+
+ with open(args.list_path) as file:
+ model_list = file.read().splitlines()
+ df = pcd_df(model_list, args.num_scans, args.partial_dir, args.complete_dir)
+ if os.path.exists(args.output_file):
+ os.system('rm %s' % args.output_file)
+ dataflow.LMDBSerializer.save(df, args.output_file)
diff --git a/zoo/OcCo/OcCo_Torch/utils/ModelNetDataLoader.py b/zoo/OcCo/OcCo_Torch/utils/ModelNetDataLoader.py
new file mode 100644
index 0000000..a17850b
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/ModelNetDataLoader.py
@@ -0,0 +1,175 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+import os, torch, h5py, warnings, numpy as np
+from torch.utils.data import Dataset
+
+warnings.filterwarnings('ignore')
+
+
+def pc_normalize(pc):
+ centroid = np.mean(pc, axis=0)
+ pc -= centroid
+ m = np.max(np.sqrt(np.sum(pc ** 2, axis=1)))
+ pc = pc / m
+ return pc
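+
+# after pc_normalize, the cloud is zero-centred and scaled to fit inside the unit
+# sphere (the farthest point from the centroid lands at radius 1)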
+
+
+def farthest_point_sample(point, npoint):
+    """
+    Input:
+        point: point cloud data, [N, D]
+        npoint: number of samples
+    Return:
+        point: the npoint sampled points, [npoint, D]
+    """
+ N, D = point.shape
+ xyz = point[:, :3]
+ centroids = np.zeros((npoint,))
+ distance = np.ones((N,)) * 1e10
+ farthest = np.random.randint(0, N)
+ for i in range(npoint):
+ centroids[i] = farthest
+ centroid = xyz[farthest, :]
+ dist = np.sum((xyz - centroid) ** 2, -1)
+ mask = dist < distance
+ distance[mask] = dist[mask]
+ farthest = np.argmax(distance, -1)
+ point = point[centroids.astype(np.int32)]
+ return point
+
+
+class ModelNetDataLoader(Dataset):
+ def __init__(self, root, npoint=1024, split='train', uniform=False, normal_channel=True, cache_size=15000):
+ self.root = root
+ self.npoints = npoint
+ self.uniform = uniform
+ self.catfile = os.path.join(self.root, 'modelnet40_shape_names.txt')
+
+ self.cat = [line.rstrip() for line in open(self.catfile)]
+ self.classes = dict(zip(self.cat, range(len(self.cat))))
+ self.normal_channel = normal_channel
+
+ shape_ids = {'train': [line.rstrip() for line in open(os.path.join(self.root, 'modelnet40_train.txt'))],
+ 'test': [line.rstrip() for line in open(os.path.join(self.root, 'modelnet40_test.txt'))]}
+
+ assert (split == 'train' or split == 'test')
+ shape_names = ['_'.join(x.split('_')[0:-1]) for x in shape_ids[split]]
+ self.datapath = [(shape_names[i], os.path.join(self.root, shape_names[i], shape_ids[split][i]) + '.txt') for i
+ in range(len(shape_ids[split]))]
+ print('The size of %s data is %d' % (split, len(self.datapath)))
+
+ self.cache_size = cache_size # how many data points to cache in memory
+ self.cache = {} # from index to (point_set, cls) tuple
+
+ def __len__(self):
+ return len(self.datapath)
+
+ def _get_item(self, index):
+ if index in self.cache:
+ point_set, cls = self.cache[index]
+ else:
+ fn = self.datapath[index]
+ cls = self.classes[self.datapath[index][0]]
+ cls = np.array([cls]).astype(np.int32)
+ point_set = np.loadtxt(fn[1], delimiter=',').astype(np.float32)
+ if self.uniform:
+ point_set = farthest_point_sample(point_set, self.npoints)
+ else:
+ point_set = point_set[0:self.npoints, :]
+
+ point_set[:, 0:3] = pc_normalize(point_set[:, 0:3])
+
+ if not self.normal_channel:
+ point_set = point_set[:, 0:3]
+
+ if len(self.cache) < self.cache_size:
+ self.cache[index] = (point_set, cls)
+
+ return point_set, cls
+
+ def __getitem__(self, index):
+ return self._get_item(index)
+
+
+class General_CLSDataLoader_HDF5(Dataset):
+ def __init__(self, file_list, num_point=1024):
+ self.num_point = num_point
+ self.file_list = file_list
+ self.points_list = np.zeros((1, num_point, 3))
+ self.labels_list = np.zeros((1,))
+
+ for file in self.file_list:
+ data, label = self.loadh5DataFile(file)
+ self.points_list = np.concatenate(
+ [self.points_list, data[:, :self.num_point, :]], axis=0)
+ self.labels_list = np.concatenate([self.labels_list, label.ravel()], axis=0)
+
+ self.points_list = self.points_list[1:]
+ self.labels_list = self.labels_list[1:]
+ assert len(self.points_list) == len(self.labels_list)
+ print('Number of Objects: ', len(self.labels_list))
+
+ @staticmethod
+ def loadh5DataFile(PathtoFile):
+ f = h5py.File(PathtoFile, 'r')
+ return f['data'][:], f['label'][:]
+
+ def __len__(self):
+ return len(self.points_list)
+
+ def __getitem__(self, index):
+ point_xyz = self.points_list[index][:, 0:3]
+ point_label = self.labels_list[index].astype(np.int32)
+ return point_xyz, point_label
+
+
+class ModelNetJigsawDataLoader(Dataset):
+ def __init__(self, root=r'./data/modelnet40_ply_hdf5_2048/jigsaw',
+ n_points=1024, split='train', k=3):
+ self.npoints = n_points
+ self.root = root
+ self.split = split
+ assert split in ['train', 'test']
+ if self.split == 'train':
+ self.file_list = [d for d in os.listdir(root) if d.find('train') != -1]
+ else:
+ self.file_list = [d for d in os.listdir(root) if d.find('test') != -1]
+ self.points_list = np.zeros((1, n_points, 3))
+ self.labels_list = np.zeros((1, n_points))
+
+ for file in self.file_list:
+ file = os.path.join(root, file)
+ data, label = self.loadh5DataFile(file)
+ self.points_list = np.concatenate([self.points_list, data], axis=0) # .append(data)
+ self.labels_list = np.concatenate([self.labels_list, label], axis=0)
+ # self.labels_list.append(label)
+
+ self.points_list = self.points_list[1:]
+ self.labels_list = self.labels_list[1:]
+ assert len(self.points_list) == len(self.labels_list)
+ print('Number of %s Objects: '%self.split, len(self.labels_list))
+
+ # just use the simple weights
+ self.labelweights = np.ones(k ** 3)
+
+
+ @staticmethod
+ def loadh5DataFile(PathtoFile):
+ f = h5py.File(PathtoFile, 'r')
+ return f['data'][:], f['label'][:]
+
+ def __getitem__(self, index):
+ point_set = self.points_list[index][:, 0:3]
+ semantic_seg = self.labels_list[index].astype(np.int32)
+ return point_set, semantic_seg
+
+ def __len__(self):
+ return len(self.points_list)
+
+
+if __name__ == '__main__':
+
+ data = ModelNetDataLoader('/data/modelnet40_normal_resampled/', split='train', uniform=False, normal_channel=True, )
+ DataLoader = torch.utils.data.DataLoader(data, batch_size=12, shuffle=True)
+ for point, label in DataLoader:
+ print(point.shape)
+ print(label.shape)
diff --git a/zoo/OcCo/OcCo_Torch/utils/PC_Augmentation.py b/zoo/OcCo/OcCo_Torch/utils/PC_Augmentation.py
new file mode 100644
index 0000000..f56c698
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/PC_Augmentation.py
@@ -0,0 +1,79 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import numpy as np
+
+"""
+ ================================================
+ === Library for Point Cloud Utility Function ===
+ ================================================
+"""
+
+
+def pc_normalize(pc):
+ """ Normalise the Input Point Cloud into a Unit Sphere """
+ centroid = np.mean(pc, axis=0)
+ pc = pc - centroid
+ m = np.max(np.sqrt(np.sum(pc ** 2, axis=1)))
+ pc = pc / m
+ return pc
+
+
+def farthest_point_sample(point, npoint):
+ """ A Simple Yet Inefficient Farthest Point Sampling on Point Cloud """
+ N, D = point.shape
+ xyz = point[:, :3]
+ centroids = np.zeros((npoint,))
+ distance = np.ones((N,)) * 1e10
+ farthest = np.random.randint(0, N)
+ for i in range(npoint):
+ centroids[i] = farthest
+ centroid = xyz[farthest, :]
+ dist = np.sum((xyz - centroid) ** 2, -1)
+ mask = dist < distance
+ distance[mask] = dist[mask]
+ farthest = np.argmax(distance, -1)
+ point = point[centroids.astype(np.int32)]
+ return point
+
+
+def random_shift_point_cloud(batch_data, shift_range=0.1):
+ """ Shift the Point Cloud along the XYZ axis, magnitude is randomly sampled from [-0.1, 0.1] """
+ B, N, C = batch_data.shape
+ shifts = np.random.uniform(-shift_range, shift_range, (B, 3))
+ for batch_index in range(B):
+ batch_data[batch_index, :, :] += shifts[batch_index, :]
+ return batch_data
+
+
+def random_scale_point_cloud(batch_data, scale_low=0.8, scale_high=1.25):
+ """ Scale the Point Cloud Objects into a Random Magnitude between [0.8, 1.25] """
+ B, N, C = batch_data.shape
+ scales = np.random.uniform(scale_low, scale_high, B)
+ for batch_index in range(B):
+ batch_data[batch_index, :, :] *= scales[batch_index]
+ return batch_data
+
+
+def random_point_dropout(batch_pc, max_dropout_ratio=0.875):
+ """ Randomly Dropout out a Portion of Points, Ratio is Randomly Selected between [0, 0.875] """
+ for b in range(batch_pc.shape[0]):
+ dropout_ratio = np.random.random() * max_dropout_ratio
+ drop_idx = np.where(np.random.random((batch_pc.shape[1])) <= dropout_ratio)[0]
+ if len(drop_idx) > 0:
+ batch_pc[b, drop_idx, :] = batch_pc[b, 0, :] # replace dropped points with the first point (keeps the array shape fixed)
+ return batch_pc
+
+
+def translate_pointcloud_dgcnn(pointcloud):
+ """ Random Scaling + Translation, Deprecated """
+ xyz1 = np.random.uniform(low=2. / 3., high=3. / 2., size=[3])
+ xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])
+ translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')
+ return translated_pointcloud
+
+
+def jitter_pointcloud_dgcnn(pointcloud, sigma=0.01, clip=0.02):
+ """ Random Jittering, Deprecated """
+ N, C = pointcloud.shape
+ pointcloud += np.clip(sigma * np.random.randn(N, C), -1 * clip, clip)
+ return pointcloud
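+
+
+# minimal usage sketch (a random batch stands in for real data):
+# batch = np.random.rand(8, 1024, 3).astype(np.float32)
+# batch = random_shift_point_cloud(random_scale_point_cloud(batch))
+# batch = random_point_dropout(batch)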
diff --git a/zoo/OcCo/OcCo_Torch/utils/S3DISDataLoader.py b/zoo/OcCo/OcCo_Torch/utils/S3DISDataLoader.py
new file mode 100644
index 0000000..d859a1c
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/S3DISDataLoader.py
@@ -0,0 +1,411 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import os, sys, h5py, numpy as np
+from torch.utils.data import Dataset
+
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+ROOT_DIR = os.path.dirname(BASE_DIR)
+sys.path.append(BASE_DIR)
+sys.path.append(ROOT_DIR)
+root = '../data/stanford_indoor3d/'
+
+# 13 classes, as noted in the meta/s3dis/class_names.txt
+num_per_class = np.array([3370714, 2856755, 4919229, 318158, 375640, 478001, 974733,
+ 650464, 791496, 88727, 1284130, 229758, 2272837], dtype=np.int32)
+num_per_class_dict = {}
+for cls, num_cls in enumerate(num_per_class):
+ num_per_class_dict[cls] = num_cls
+
+
+class S3DISDataset_HDF5(Dataset):
+ """Chopped Scene"""
+
+ def __init__(self, root='data/indoor3d_sem_seg_hdf5_data', split='train', test_area=5):
+ self.root = root
+ self.all_files = self.getDataFiles(os.path.join(self.root, 'all_files.txt'))
+ self.room_filelist = self.getDataFiles(os.path.join(self.root, 'room_filelist.txt'))
+ self.scene_points_list = []
+ self.semantic_labels_list = []
+
+ for h5_filename in self.all_files:
+ data_batch, label_batch = self.loadh5DataFile(h5_filename)
+ self.scene_points_list.append(data_batch)
+ self.semantic_labels_list.append(label_batch)
+
+ self.data_batches = np.concatenate(self.scene_points_list, 0)
+ self.label_batches = np.concatenate(self.semantic_labels_list, 0)
+
+ test_area = 'Area_' + str(test_area)
+ train_idxs, test_idxs = [], []
+
+ for i, room_name in enumerate(self.room_filelist):
+ if test_area in room_name:
+ test_idxs.append(i)
+ else:
+ train_idxs.append(i)
+
+ assert split in ['train', 'test']
+ if split == 'train':
+ self.data_batches = self.data_batches[train_idxs, ...]
+ self.label_batches = self.label_batches[train_idxs]
+ else:
+ self.data_batches = self.data_batches[test_idxs, ...]
+ self.label_batches = self.label_batches[test_idxs]
+
+ @staticmethod
+ def getDataFiles(list_filename):
+ return [line.rstrip() for line in open(list_filename)]
+
+ @staticmethod
+ def loadh5DataFile(PathtoFile):
+ f = h5py.File(PathtoFile, 'r')
+ return f['data'][:], f['label'][:]
+
+ def __getitem__(self, index):
+ points = self.data_batches[index, :]
+ labels = self.label_batches[index].astype(np.int32)
+
+ return points, labels
+
+ def __len__(self):
+ return len(self.data_batches)
+
+
+class S3DISDataset(Dataset):
+ """Chopped Scene"""
+ def __init__(self, root, block_points=4096, split='train', test_area=5, with_rgb=True, use_weight=True,
+ block_size=1.5, padding=0.001):
+ self.npoints = block_points
+ self.block_size = block_size
+ self.padding = padding
+ self.root = root
+ self.with_rgb = with_rgb
+ self.split = split
+ assert split in ['train', 'test']
+ if self.split == 'train':
+ self.file_list = [d for d in os.listdir(root) if d.find('Area_%d' % test_area) == -1]
+ else:
+ self.file_list = [d for d in os.listdir(root) if d.find('Area_%d' % test_area) != -1]
+ self.scene_points_list = []
+ self.semantic_labels_list = []
+
+ for file in self.file_list:
+ data = np.load(root + file)
+ self.scene_points_list.append(data[:, :6]) # (num_points, 6), xyz + rgb
+ self.semantic_labels_list.append(data[:, 6]) # (num_points, )
+
+ assert len(self.scene_points_list) == len(self.semantic_labels_list)
+ print('Number of scenes: ', len(self.scene_points_list))
+
+ if split == 'train' and use_weight:
+ labelweights = np.zeros(13)
+ for seg in self.semantic_labels_list:
+ tmp, _ = np.histogram(seg, range(14))
+ labelweights += tmp
+ labelweights = labelweights.astype(np.float32)
+ labelweights = labelweights / np.sum(labelweights)
+ self.labelweights = np.power(np.amax(labelweights) / labelweights, 1 / 3.0)
+
+ # reciprocal of the # of class
+ ce_label_weight = 1 / (labelweights + 0.02)
+ self.labelweights = ce_label_weight
+
+ else:
+ self.labelweights = np.ones(13)
+
+ # note: uniform weights are used in the end, overriding the weights computed above
+ self.labelweights = np.ones(13)
+ print(self.labelweights)
+
+ def __getitem__(self, index):
+ if self.with_rgb:
+ point_set = self.scene_points_list[index]
+ point_set[:, 3:] = 2 * point_set[:, 3:] / 255.0 - 1 # normalised rgb into [-1, 1]
+ else:
+ point_set = self.scene_points_list[index][:, 0:3]
+ semantic_seg = self.semantic_labels_list[index].astype(np.int32)
+ coordmax = np.max(point_set[:, 0:3], axis=0)
+ coordmin = np.min(point_set[:, 0:3], axis=0)
+
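+ # rejection sampling: try up to 10 randomly-centred blocks (block_size x
+ # block_size in x-y, full height in z) and keep the first whose points
+ # cover at least 2% of a 31x31x62 voxel grid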
+ isvalid = False
+ for _ in range(10):
+ curcenter = point_set[np.random.choice(len(semantic_seg), 1)[0], 0:3]
+ curmin = curcenter - [self.block_size / 2, self.block_size / 2, 1.5]
+ curmax = curcenter + [self.block_size / 2, self.block_size / 2, 1.5]
+ curmin[2], curmax[2] = coordmin[2], coordmax[2]
+ curchoice = np.sum((point_set[:, 0:3] >= (curmin - 0.2)) * (point_set[:, 0:3] <= (curmax + 0.2)),
+ axis=1) == 3
+ cur_point_set = point_set[curchoice, 0:3]
+ cur_point_full = point_set[curchoice, :]
+ cur_semantic_seg = semantic_seg[curchoice]
+ if len(cur_semantic_seg) == 0:
+ continue
+ mask = np.sum((cur_point_set >= (curmin - self.padding)) * (cur_point_set <= (curmax + self.padding)),
+ axis=1) == 3
+ vidx = np.ceil((cur_point_set[mask, :] - curmin) / (curmax - curmin) * [31.0, 31.0, 62.0])
+ vidx = np.unique(vidx[:, 0] * 31.0 * 62.0 + vidx[:, 1] * 62.0 + vidx[:, 2])
+ isvalid = len(vidx) / 31.0 / 31.0 / 62.0 >= 0.02
+ if isvalid:
+ break
+ choice = np.random.choice(len(cur_semantic_seg), self.npoints, replace=True)
+ point_set = cur_point_full[choice, :]
+ semantic_seg = cur_semantic_seg[choice]
+ mask = mask[choice]
+ sample_weight = self.labelweights[semantic_seg]
+ sample_weight *= mask
+ return point_set, semantic_seg, sample_weight
+
+ def __len__(self):
+ return len(self.scene_points_list)
+
+
+class S3DISDatasetWholeScene:
+ def __init__(self, root, block_points=8192, split='val', test_area=5, with_rgb=True, use_weight=True,
+ block_size=1.5, stride=1.5, padding=0.001):
+ self.npoints = block_points
+ self.block_size = block_size
+ self.padding = padding
+ self.stride = stride
+ self.root = root
+ self.with_rgb = with_rgb
+ self.split = split
+ assert split in ['train', 'test']
+ if self.split == 'train':
+ self.file_list = [d for d in os.listdir(root) if d.find('Area_%d' % test_area) == -1]
+ else:
+ self.file_list = [d for d in os.listdir(root) if d.find('Area_%d' % test_area) != -1]
+ self.scene_points_list = []
+ self.semantic_labels_list = []
+ for file in self.file_list:
+ data = np.load(root + file)
+ self.scene_points_list.append(data[:, :6])
+ self.semantic_labels_list.append(data[:, 6])
+ assert len(self.scene_points_list) == len(self.semantic_labels_list)
+ print('Number of scenes: ', len(self.scene_points_list))
+ if split == 'train' and use_weight:
+ labelweights = np.zeros(13)
+ for seg in self.semantic_labels_list:
+ tmp, _ = np.histogram(seg, range(14))
+ labelweights += tmp
+ labelweights = labelweights.astype(np.float32)
+ labelweights = labelweights / np.sum(labelweights)
+ self.labelweights = np.power(np.amax(labelweights) / labelweights, 1 / 3.0)
+ else:
+ self.labelweights = np.ones(13)
+
+ print(self.labelweights)
+
+ def __getitem__(self, index):
+ if self.with_rgb:
+ point_set_ini = self.scene_points_list[index]
+ point_set_ini[:, 3:] = 2 * point_set_ini[:, 3:] / 255.0 - 1
+ else:
+ point_set_ini = self.scene_points_list[index][:, 0:3]
+ semantic_seg_ini = self.semantic_labels_list[index].astype(np.int32)
+ coordmax = np.max(point_set_ini[:, 0:3], axis=0)
+ coordmin = np.min(point_set_ini[:, 0:3], axis=0)
+ nsubvolume_x = np.ceil((coordmax[0] - coordmin[0]) / self.block_size).astype(np.int32)
+ nsubvolume_y = np.ceil((coordmax[1] - coordmin[1]) / self.block_size).astype(np.int32)
+ point_sets = list()
+ semantic_segs = list()
+ sample_weights = list()
+ for i in range(nsubvolume_x):
+ for j in range(nsubvolume_y):
+ curmin = coordmin + [i * self.block_size, j * self.block_size, 0]
+ curmax = coordmin + [(i + 1) * self.block_size, (j + 1) * self.block_size, coordmax[2] - coordmin[2]]
+ curchoice = np.sum(
+ (point_set_ini[:, 0:3] >= (curmin - 0.2)) * (point_set_ini[:, 0:3] <= (curmax + 0.2)), axis=1) == 3
+ cur_point_set = point_set_ini[curchoice, 0:3]
+ cur_point_full = point_set_ini[curchoice, :]
+ cur_semantic_seg = semantic_seg_ini[curchoice]
+ if len(cur_semantic_seg) == 0:
+ continue
+ mask = np.sum((cur_point_set >= (curmin - self.padding)) * (cur_point_set <= (curmax + self.padding)),
+ axis=1) == 3
+ choice = np.random.choice(len(cur_semantic_seg), self.npoints, replace=True)
+ point_set = cur_point_full[choice, :] # Nx3/6
+ semantic_seg = cur_semantic_seg[choice] # N
+ mask = mask[choice]
+
+ sample_weight = self.labelweights[semantic_seg]
+ sample_weight *= mask # N
+ point_sets.append(np.expand_dims(point_set, 0)) # 1xNx3
+ semantic_segs.append(np.expand_dims(semantic_seg, 0)) # 1xN
+ sample_weights.append(np.expand_dims(sample_weight, 0)) # 1xN
+ point_sets = np.concatenate(tuple(point_sets), axis=0)
+ semantic_segs = np.concatenate(tuple(semantic_segs), axis=0)
+ sample_weights = np.concatenate(tuple(sample_weights), axis=0)
+ return point_sets, semantic_segs, sample_weights
+
+ def __len__(self):
+ return len(self.scene_points_list)
+
+
+class ScannetDatasetWholeScene_evaluation:
+ # prepare to give prediction on each points
+ def __init__(self, root=root, block_points=8192, split='test', test_area=5, with_rgb=True, use_weight=True,
+ stride=0.5, block_size=1.5, padding=0.001):
+ self.block_points = block_points
+ self.block_size = block_size
+ self.padding = padding
+ self.root = root
+ self.with_rgb = with_rgb
+ self.split = split
+ self.stride = stride
+ self.scene_points_num = []
+ assert split in ['train', 'test']
+ if self.split == 'train':
+ self.file_list = [d for d in os.listdir(root) if d.find('Area_%d' % test_area) == -1]
+ else:
+ self.file_list = [d for d in os.listdir(root) if d.find('Area_%d' % test_area) != -1]
+ self.scene_points_list = []
+ self.semantic_labels_list = []
+ for file in self.file_list:
+ data = np.load(root + file)
+ self.scene_points_list.append(data[:, :6])
+ self.semantic_labels_list.append(data[:, 6])
+ assert len(self.scene_points_list) == len(self.semantic_labels_list)
+ print('Number of scenes: ', len(self.scene_points_list))
+ if split == 'train' and use_weight:
+ labelweights = np.zeros(13)
+ for seg in self.semantic_labels_list:
+ tmp, _ = np.histogram(seg, range(14))
+ self.scene_points_num.append(seg.shape[0])
+ labelweights += tmp
+ labelweights = labelweights.astype(np.float32)
+ labelweights = labelweights / np.sum(labelweights)
+ self.labelweights = np.power(np.amax(labelweights) / labelweights, 1 / 3.0)
+ else:
+ self.labelweights = np.ones(13)
+ for seg in self.semantic_labels_list:
+ self.scene_points_num.append(seg.shape[0])
+
+ print(self.labelweights)
+
+ @staticmethod
+ def chunks(l, n):
+ """Yield successive n-sized chunks from l."""
+ for i in range(0, len(l), n):
+ yield l[i:i + n]
+
+ @staticmethod
+ def split_data(data, idx):
+ new_data = []
+ for i in range(len(idx)):
+ new_data += [np.expand_dims(data[idx[i]], axis=0)]
+ return new_data
+
+ @staticmethod
+ def nearest_dist(block_center, block_center_list):
+ num_blocks = len(block_center_list)
+ dist = np.zeros(num_blocks)
+ for i in range(num_blocks):
+ dist[i] = np.linalg.norm(block_center_list[i] - block_center, ord=2) # i->j
+ return np.argsort(dist)[0]
+
+ def __getitem__(self, index):
+ delta = self.stride
+ if self.with_rgb:
+ point_set_ini = self.scene_points_list[index]
+ point_set_ini[:, 3:] = 2 * point_set_ini[:, 3:] / 255.0 - 1
+ else:
+ point_set_ini = self.scene_points_list[index][:, 0:3]
+ semantic_seg_ini = self.semantic_labels_list[index].astype(np.int32)
+ coordmax = np.max(point_set_ini[:, 0:3], axis=0)
+ coordmin = np.min(point_set_ini[:, 0:3], axis=0)
+ nsubvolume_x = np.ceil((coordmax[0] - coordmin[0]) / delta).astype(np.int32)
+ nsubvolume_y = np.ceil((coordmax[1] - coordmin[1]) / delta).astype(np.int32)
+
+ point_sets, semantic_segs, sample_weights, point_idxs, block_center = [], [], [], [], []
+ for i in range(nsubvolume_x):
+ for j in range(nsubvolume_y):
+ curmin = coordmin + [i * delta, j * delta, 0]
+ curmax = curmin + [self.block_size, self.block_size, coordmax[2] - coordmin[2]]
+ curchoice = np.sum(
+ (point_set_ini[:, 0:3] >= (curmin - 0.2)) * (point_set_ini[:, 0:3] <= (curmax + 0.2)), axis=1) == 3
+ curchoice_idx = np.where(curchoice)[0]
+ cur_point_set = point_set_ini[curchoice, :]
+ cur_semantic_seg = semantic_seg_ini[curchoice]
+ if len(cur_semantic_seg) == 0:
+ continue
+ mask = np.sum((cur_point_set[:, 0:3] >= (curmin - self.padding)) * (
+ cur_point_set[:, 0:3] <= (curmax + self.padding)), axis=1) == 3
+ sample_weight = self.labelweights[cur_semantic_seg]
+ sample_weight *= mask # N
+ point_sets.append(cur_point_set) # 1xNx3/6
+ semantic_segs.append(cur_semantic_seg) # 1xN
+ sample_weights.append(sample_weight) # 1xN
+ point_idxs.append(curchoice_idx) # 1xN
+ block_center.append((curmin[0:2] + curmax[0:2]) / 2.0)
+
+ # merge small blocks
+ num_blocks = len(point_sets)
+ block_idx = 0
+ while block_idx < num_blocks:
+ if point_sets[block_idx].shape[0] > self.block_points / 2:
+ block_idx += 1
+ continue
+
+ small_block_data = point_sets[block_idx].copy()
+ small_block_seg = semantic_segs[block_idx].copy()
+ small_block_smpw = sample_weights[block_idx].copy()
+ small_block_idxs = point_idxs[block_idx].copy()
+ small_block_center = block_center[block_idx].copy()
+ point_sets.pop(block_idx)
+ semantic_segs.pop(block_idx)
+ sample_weights.pop(block_idx)
+ point_idxs.pop(block_idx)
+ block_center.pop(block_idx)
+
+ nearest_block_idx = self.nearest_dist(small_block_center, block_center)
+ point_sets[nearest_block_idx] = np.concatenate(
+ (point_sets[nearest_block_idx], small_block_data), axis=0)
+ semantic_segs[nearest_block_idx] = np.concatenate(
+ (semantic_segs[nearest_block_idx], small_block_seg), axis=0)
+ sample_weights[nearest_block_idx] = np.concatenate(
+ (sample_weights[nearest_block_idx], small_block_smpw), axis=0)
+ point_idxs[nearest_block_idx] = np.concatenate((point_idxs[nearest_block_idx], small_block_idxs), axis=0)
+ num_blocks = len(point_sets)
+
+ # divide large blocks
+ num_blocks = len(point_sets)
+ div_blocks = []
+ div_blocks_seg = []
+ div_blocks_smpw = []
+ div_blocks_idxs = []
+ div_blocks_center = []
+ for block_idx in range(num_blocks):
+ cur_num_pts = point_sets[block_idx].shape[0]
+
+ point_idx_block = np.array([x for x in range(cur_num_pts)])
+ if point_idx_block.shape[0] % self.block_points != 0:
+ makeup_num = self.block_points - point_idx_block.shape[0] % self.block_points
+ np.random.shuffle(point_idx_block)
+ point_idx_block = np.concatenate((point_idx_block, point_idx_block[0:makeup_num].copy()))
+
+ np.random.shuffle(point_idx_block)
+
+ sub_blocks = list(self.chunks(point_idx_block, self.block_points))
+
+ div_blocks += self.split_data(point_sets[block_idx], sub_blocks)
+ div_blocks_seg += self.split_data(semantic_segs[block_idx], sub_blocks)
+ div_blocks_smpw += self.split_data(sample_weights[block_idx], sub_blocks)
+ div_blocks_idxs += self.split_data(point_idxs[block_idx], sub_blocks)
+ div_blocks_center += [block_center[block_idx].copy() for _ in range(len(sub_blocks))]
+ div_blocks = np.concatenate(tuple(div_blocks), axis=0)
+ div_blocks_seg = np.concatenate(tuple(div_blocks_seg), axis=0)
+ div_blocks_smpw = np.concatenate(tuple(div_blocks_smpw), axis=0)
+ div_blocks_idxs = np.concatenate(tuple(div_blocks_idxs), axis=0)
+ return div_blocks, div_blocks_seg, div_blocks_smpw, div_blocks_idxs
+
+ def __len__(self):
+ return len(self.scene_points_list)
+
+
+if __name__ == '__main__':
+ data = S3DISDataset_HDF5()
+ for i in range(10):
+ points, labels = data[i]
+ print(points.shape)
+ print(labels.shape)
+
diff --git a/zoo/OcCo/OcCo_Torch/utils/ShapeNetDataLoader.py b/zoo/OcCo/OcCo_Torch/utils/ShapeNetDataLoader.py
new file mode 100644
index 0000000..0539614
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/ShapeNetDataLoader.py
@@ -0,0 +1,258 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/master/data_utils/ShapeNetDataLoader.py
+import os, json, torch, warnings, numpy as np
+# from PC_Augmentation import pc_normalize
+from torch.utils.data import Dataset
+import glob
+import h5py
+warnings.filterwarnings('ignore')
+
+
+# class PartNormalDataset(Dataset):
+# """
+# Data Source: https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0_normal.zip
+# """
+# def __init__(self, root, num_point=2048, split='train', use_normal=False):
+# self.catfile = os.path.join(root, 'synsetoffset2category.txt')
+# self.use_normal = use_normal
+# self.num_point = num_point
+# self.cache_size = 20000
+# self.datapath = []
+# self.root = root
+# self.cache = {}
+# self.meta = {}
+# self.cat = {}
+
+# with open(self.catfile, 'r') as f:
+# for line in f:
+# ls = line.strip().split()
+# self.cat[ls[0]] = ls[1]
+# # self.cat -> {'class name': syn_id, ...}
+# # self.meta -> {'class name': file list, ...}
+# # self.classes -> {'class name': class id, ...}
+# # self.datapath -> [('class name', single file) , ...]
+# self.classes = dict(zip(self.cat, range(len(self.cat))))
+
+# train_ids = self.read_fns(os.path.join(self.root, 'train_test_split', 'shuffled_train_file_list.json'))
+# test_ids = self.read_fns(os.path.join(self.root, 'train_test_split', 'shuffled_test_file_list.json'))
+# val_ids = self.read_fns(os.path.join(self.root, 'train_test_split', 'shuffled_val_file_list.json'))
+
+# for item in self.cat:
+# dir_point = os.path.join(self.root, self.cat[item])
+# fns = sorted(os.listdir(dir_point))
+# self.meta[item] = []
+
+# if split == 'trainval':
+# fns = [fn for fn in fns if ((fn[0:-4] in train_ids) or (fn[0:-4] in val_ids))]
+# elif split == 'test':
+# fns = [fn for fn in fns if fn[0:-4] in test_ids]
+# else:
+# print('Unknown split: %s [Options: trainval, test]. Exiting...' % split)
+# exit(-1)
+
+# for fn in fns:
+# token = (os.path.splitext(os.path.basename(fn))[0])
+# self.meta[item].append(os.path.join(dir_point, token + '.txt'))
+
+# for item in self.cat:
+# for fn in self.meta[item]:
+# self.datapath.append((item, fn))
+
+# self.seg_classes = {'Earphone': [16, 17, 18], 'Motorbike': [30, 31, 32, 33, 34, 35],
+# 'Rocket': [41, 42, 43], 'Car': [8, 9, 10, 11], 'Laptop': [28, 29],
+# 'Cap': [6, 7], 'Skateboard': [44, 45, 46], 'Lamp': [24, 25, 26, 27],
+# 'Mug': [36, 37], 'Guitar': [19, 20, 21], 'Bag': [4, 5], 'Knife': [22, 23],
+# 'Table': [47, 48, 49], 'Airplane': [0, 1, 2, 3], 'Pistol': [38, 39, 40],
+# 'Chair': [12, 13, 14, 15]}
+
+# @staticmethod
+# def read_fns(path):
+# with open(path, 'r') as file:
+# ids = set([str(d.split('/')[2]) for d in json.load(file)])
+# return ids
+
+# def __getitem__(self, index):
+# if index in self.cache:
+# pts, cls, seg = self.cache[index]
+# else:
+# fn = self.datapath[index]
+# cat, pt = fn[0], np.loadtxt(fn[1]).astype(np.float32)
+# cls = np.array([self.classes[cat]]).astype(np.int32)
+# pts = pt[:, :6] if self.use_normal else pt[:, :3]
+# seg = pt[:, -1].astype(np.int32)
+# if len(self.cache) < self.cache_size:
+# self.cache[index] = (pts, cls, seg)
+
+# choice = np.random.choice(len(seg), self.num_point, replace=True)
+# pts[:, 0:3] = pc_normalize(pts[:, 0:3])
+# pts, seg = pts[choice, :], seg[choice]
+
+# return pts, cls, seg
+
+# def __len__(self):
+# return len(self.datapath)
+
+
+
+class ShapeNetPart(Dataset):
+ def __init__(self, num_points=2048, partition='train', class_choice=None, sub=None):
+ self.data, self.label, self.seg = load_data_partseg(partition, sub)
+ self.cat2id = {
+ 'airplane': 0, 'bag': 1, 'cap': 2, 'car': 3, 'chair': 4,
+ 'earphone': 5, 'guitar': 6, 'knife': 7, 'lamp': 8, 'laptop': 9,
+ 'motor': 10, 'mug': 11, 'pistol': 12, 'rocket': 13, 'skateboard': 14, 'table': 15
+ }
+ self.seg_num = [4, 2, 2, 4, 4, 3, 3, 2, 4, 2, 6, 2, 3, 3, 3, 3]
+ self.index_start = [0, 4, 6, 8, 12, 16, 19, 22, 24, 28, 30, 36, 38, 41, 44, 47]
+ self.num_points = num_points
+ self.partition = partition
+ self.class_choice = class_choice
+
+ if self.class_choice is not None:
+ id_choice = self.cat2id[self.class_choice]
+ indices = (self.label == id_choice).squeeze()
+ self.data = self.data[indices]
+ self.label = self.label[indices]
+ self.seg = self.seg[indices]
+ self.seg_num_all = self.seg_num[id_choice]
+ self.seg_start_index = self.index_start[id_choice]
+ else:
+ self.seg_num_all = 50
+ self.seg_start_index = 0
+
+ def __getitem__(self, item):
+ pointcloud = self.data[item][:self.num_points]
+ label = self.label[item]
+ seg = self.seg[item][:self.num_points] # part seg label
+ if self.partition == 'trainval':
+ pointcloud = translate_pointcloud(pointcloud)
+ indices = list(range(pointcloud.shape[0]))
+ np.random.shuffle(indices)
+ pointcloud = pointcloud[indices]
+ seg = seg[indices]
+ return pointcloud, label, seg
+
+ def __len__(self):
+ return self.data.shape[0]
+
+
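+# Same interface as ShapeNetPart, but corrupted samples keep their full point
+# sets (corruptions such as dropout change the point count, so no fixed
+# num_points crop is applied).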
+class ShapeNetC(Dataset):
+ def __init__(self, partition='train', class_choice=None, sub=None):
+ self.data, self.label, self.seg = load_data_partseg(partition, sub)
+ self.cat2id = {
+ 'airplane': 0, 'bag': 1, 'cap': 2, 'car': 3, 'chair': 4,
+ 'earphone': 5, 'guitar': 6, 'knife': 7, 'lamp': 8, 'laptop': 9,
+ 'motor': 10, 'mug': 11, 'pistol': 12, 'rocket': 13, 'skateboard': 14, 'table': 15
+ }
+ self.seg_num = [4, 2, 2, 4, 4, 3, 3, 2, 4, 2, 6, 2, 3, 3, 3, 3] # number of parts for each category
+ self.index_start = [0, 4, 6, 8, 12, 16, 19, 22, 24, 28, 30, 36, 38, 41, 44, 47]
+ self.partition = partition
+ self.class_choice = class_choice
+
+ if self.class_choice is not None:
+ id_choice = self.cat2id[self.class_choice]
+ indices = (self.label == id_choice).squeeze()
+ self.data = self.data[indices]
+ self.label = self.label[indices]
+ self.seg = self.seg[indices]
+ self.seg_num_all = self.seg_num[id_choice]
+ self.seg_start_index = self.index_start[id_choice]
+ else:
+ self.seg_num_all = 50
+ self.seg_start_index = 0
+
+ def __getitem__(self, item):
+ pointcloud = self.data[item]
+ label = self.label[item]
+ seg = self.seg[item] # part seg label
+ if self.partition == 'trainval':
+ pointcloud = translate_pointcloud(pointcloud)
+ indices = list(range(pointcloud.shape[0]))
+ np.random.shuffle(indices)
+ pointcloud = pointcloud[indices]
+ seg = seg[indices]
+ return pointcloud, label, seg
+
+ def __len__(self):
+ return self.data.shape[0]
+
+
+
+DATA_DIR = '/mnt/lustre/share/ldkong/data/sets/ShapeNetPart'
+SHAPENET_C_DIR = '/mnt/lustre/share/jwren/to_kld/shapenet_c'
+
+def load_data_partseg(partition, sub=None):
+ all_data = []
+ all_label = []
+ all_seg = []
+ if partition == 'trainval':
+ file = glob.glob(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data', 'hdf5_data', '*train*.h5')) \
+ + glob.glob(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data', 'hdf5_data', '*val*.h5'))
+ elif partition == 'shapenet-c':
+ file = os.path.join(SHAPENET_C_DIR, '%s.h5'%sub)
+ else:
+ file = glob.glob(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data', 'hdf5_data', '*%s*.h5'%partition))
+
+
+ if partition == 'shapenet-c':
+ f = h5py.File(file, 'r')
+ data = f['data'][:].astype('float32')
+ label = f['label'][:].astype('int64')
+ seg = f['pid'][:].astype('int64') # part seg label
+ f.close()
+ all_data.append(data)
+ all_label.append(label)
+ all_seg.append(seg)
+
+ else:
+ for h5_name in file:
+ f = h5py.File(h5_name, 'r+')
+ data = f['data'][:].astype('float32')
+ label = f['label'][:].astype('int64')
+ seg = f['pid'][:].astype('int64')
+ f.close()
+ all_data.append(data)
+ all_label.append(label)
+ all_seg.append(seg)
+
+
+ all_data = np.concatenate(all_data, axis=0)
+ all_label = np.concatenate(all_label, axis=0)
+ all_seg = np.concatenate(all_seg, axis=0)
+ return all_data, all_label, all_seg
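+
+# usage sketch -- the corruption file name here is an assumption; check the
+# .h5 files actually shipped in SHAPENET_C_DIR:
+# data, label, seg = load_data_partseg('shapenet-c', sub='add_global_1')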
+
+
+def translate_pointcloud(pointcloud):
+ xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])
+ xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])
+
+ translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')
+ return translated_pointcloud
+
+
+def jitter_pointcloud(pointcloud, sigma=0.01, clip=0.02):
+ N, C = pointcloud.shape
+ pointcloud += np.clip(sigma * np.random.randn(N, C), -1*clip, clip)
+ return pointcloud
+
+
+def rotate_pointcloud(pointcloud):
+ theta = np.pi*2 * np.random.uniform()
+ rotation_matrix = np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]])
+ pointcloud[:,[0,2]] = pointcloud[:,[0,2]].dot(rotation_matrix) # random rotation (x,z)
+ return pointcloud
+
+
+
+
+# if __name__ == "__main__":
+
+# root = '../data/shapenetcore_partanno_segmentation_benchmark_v0_normal/'
+# TRAIN_DATASET = PartNormalDataset(root=root, num_point=2048, split='trainval', use_normal=False)
+# trainDataLoader = torch.utils.data.DataLoader(TRAIN_DATASET, batch_size=24, shuffle=True, num_workers=4)
+
+# for i, data in enumerate(trainDataLoader):
+# points, label, target = data
+
diff --git a/zoo/OcCo/OcCo_Torch/utils/TSNE_Visu.py b/zoo/OcCo/OcCo_Torch/utils/TSNE_Visu.py
new file mode 100644
index 0000000..e3008b0
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/TSNE_Visu.py
@@ -0,0 +1,82 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html
+
+import os, sys, torch, argparse, importlib, numpy as np, matplotlib.pyplot as plt
+sys.path.append('../')
+sys.path.append('../models')
+from ModelNetDataLoader import General_CLSDataLoader_HDF5
+from Torch_Utility import copy_parameters
+from torch.utils.data import DataLoader
+from Dataset_Loc import Dataset_Loc
+from sklearn.manifold import TSNE
+from tqdm import tqdm
+
+
+def parse_args():
+ parser = argparse.ArgumentParser('SVM on Point Cloud Classification')
+
+ ''' === Network Model === '''
+ parser.add_argument('--gpu', type=str, default='0', help='GPU [default: 0]')
+ parser.add_argument('--model', default='pcn_util', help='model [default: pcn_util]')
+ parser.add_argument('--batch_size', type=int, default=24, help='batch size [default: 24]')
+ parser.add_argument('--restore_path', type=str, help="path to pretrained weights [default: None]")
+
+ ''' === Dataset === '''
+ parser.add_argument('--partial', action='store_true', help='partial objects [default: False]')
+ parser.add_argument('--bn', action='store_true', help='with background noise [default: False]')
+ parser.add_argument('--dataset', type=str, default='modelnet40', help='dataset [default: modelnet40]')
+ parser.add_argument('--fname', type=str, help='filename, used for ScanObjectNN or reduced-size data [default: None]')
+
+ return parser.parse_args()
+
+
+if __name__ == "__main__":
+ args = parse_args()
+
+ os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
+ os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
+
+ NUM_CLASSES, TRAIN_FILES, TEST_FILES = Dataset_Loc(dataset=args.dataset, fname=args.fname,
+ partial=args.partial, bn=args.bn)
+ TRAIN_DATASET = General_CLSDataLoader_HDF5(file_list=TRAIN_FILES)
+ # TEST_DATASET = General_CLSDataLoader_HDF5(file_list=TEST_FILES)
+ trainDataLoader = DataLoader(TRAIN_DATASET, batch_size=args.batch_size, shuffle=True, num_workers=4)
+ # testDataLoader = DataLoader(TEST_DATASET, batch_size=args.batch_size, shuffle=True, num_workers=4)
+
+ MODEL = importlib.import_module(args.model)
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ encoder = MODEL.encoder(args=args, num_channel=3).to(device)
+ encoder = torch.nn.DataParallel(encoder)
+
+ checkpoint = torch.load(args.restore_path)
+ encoder = copy_parameters(encoder, checkpoint, verbose=True)
+
+ X_train, y_train, X_test, y_test = [], [], [], []
+ with torch.no_grad():
+ encoder.eval()
+
+ for points, target in tqdm(trainDataLoader, total=len(trainDataLoader), smoothing=0.9):
+ points, target = points.float().transpose(2, 1).to(device), target.long().to(device)
+ feats = encoder(points)
+ X_train.append(feats.cpu().numpy())
+ y_train.append(target.cpu().numpy())
+
+ # for points, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ # points, target = points.float().transpose(2, 1).cuda(), target.long().cuda()
+ # feats = encoder(points)
+ # X_test.append(feats.cpu().numpy())
+ # y_test.append(target.cpu().numpy())
+
+ X_train, y_train = np.concatenate(X_train), np.concatenate(y_train)
+ # X_test, y_test = np.concatenate(X_test), np.concatenate(y_test)
+
+ # In general, larger datasets or more classes call for a larger perplexity
+ X_embedded = TSNE(n_components=2, perplexity=100).fit_transform(X_train)
+
+ plt.figure(figsize=(16, 16))
+ plt.scatter(X_embedded[:, 0], X_embedded[:, 1], c=y_train, cmap=plt.cm.get_cmap("jet", NUM_CLASSES))
+ plt.colorbar(ticks=range(1, NUM_CLASSES + 1))
+ plt.clim(0.5, NUM_CLASSES + 0.5)
+ # plt.savefig('log/tsne/tsne_shapenet10_pcn.pdf')
+ plt.show()
+
diff --git a/zoo/OcCo/OcCo_Torch/utils/Torch_Utility.py b/zoo/OcCo/OcCo_Torch/utils/Torch_Utility.py
new file mode 100644
index 0000000..a7be474
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/Torch_Utility.py
@@ -0,0 +1,50 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/pytorch/pytorch/issues/7068#issuecomment-487907668
+import torch, os, random, numpy as np
+
+
+def seed_torch(seed=1029):
+ random.seed(seed)
+ os.environ['PYTHONHASHSEED'] = str(seed)
+ np.random.seed(seed)
+ torch.manual_seed(seed)
+ torch.cuda.manual_seed(seed)
+ torch.cuda.manual_seed_all(seed) # for multi-GPU Usage
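+ # the cudnn settings below trade speed for exact run-to-run reproducibility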
+ torch.backends.cudnn.benchmark = False
+ torch.backends.cudnn.deterministic = True
+
+
+def copy_parameters(model, pretrained, verbose=True):
+ # ref: https://discuss.pytorch.org/t/how-to-load-part-of-pre-trained-model/1113/3
+
+ model_dict = model.state_dict()
+ pretrained_dict = pretrained['model_state_dict']
+ pretrained_dict = {k: v for k, v in pretrained_dict.items() if
+ k in model_dict and pretrained_dict[k].size() == model_dict[k].size()}
+
+ if verbose:
+ print('=' * 27)
+ print('Restored Params and Shapes:')
+ for k, v in pretrained_dict.items():
+ print(k, ': ', v.size())
+ print('=' * 68)
+ model_dict.update(pretrained_dict)
+ model.load_state_dict(model_dict)
+ return model
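+
+# usage sketch: restore matching weights from a checkpoint dict carrying a
+# 'model_state_dict' entry (the key this function expects); the path below is
+# a placeholder:
+# model = copy_parameters(model, torch.load('path/to/best_model.pth'))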
+
+
+def weights_init(m):
+ """
+ Xavier normal initialisation for weights and zero bias,
+ found especially useful for completion and segmentation tasks
+ """
+ classname = m.__class__.__name__
+ if (classname.find('Conv1d') != -1) or (classname.find('Conv2d') != -1) or (classname.find('Linear') != -1):
+ torch.nn.init.xavier_normal_(m.weight.data)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias.data, 0.0)
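+
+# apply recursively to every submodule, e.g.: model.apply(weights_init)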
+
+
+def bn_momentum_adjust(m, momentum):
+ if isinstance(m, torch.nn.BatchNorm2d) or isinstance(m, torch.nn.BatchNorm1d):
+ m.momentum = momentum
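+
+
+# usage sketch (the momentum schedule is up to the caller):
+# model.apply(lambda m: bn_momentum_adjust(m, momentum))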
diff --git a/zoo/OcCo/OcCo_Torch/utils/TrainLogger.py b/zoo/OcCo/OcCo_Torch/utils/TrainLogger.py
new file mode 100644
index 0000000..4b06e2b
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/TrainLogger.py
@@ -0,0 +1,159 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import os, logging, datetime, numpy as np, sklearn.metrics as metrics
+from pathlib import Path
+
+
+class TrainLogger:
+
+ def __init__(self, args, name='model', subfold='cls', filename='train_log', cls2name=None):
+ self.step = 1
+ self.epoch = 1
+ self.args = args
+ self.name = name
+ self.sf = subfold
+ self.mkdir()
+ self.setup(filename=filename)
+ self.epoch_init()
+ self.save_model = False
+ self.cls2name = cls2name
+ self.best_instance_acc, self.best_class_acc, self.best_miou = 0., 0., 0.
+ self.best_instance_epoch, self.best_class_epoch, self.best_miou_epoch = 0, 0, 0
+ self.savepath = str(self.checkpoints_dir) + '/best_model.pth'
+
+ def setup(self, filename='train_log'):
+ self.logger = logging.getLogger(self.name)
+ self.logger.setLevel(logging.INFO)
+ formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+ file_handler = logging.FileHandler(os.path.join(self.log_dir, filename + '.txt'))
+ file_handler.setLevel(logging.INFO)
+ file_handler.setFormatter(formatter)
+ # ref: https://stackoverflow.com/a/53496263/12525201
+ # define a Handler which writes INFO messages or higher to the sys.stderr
+ console = logging.StreamHandler()
+ console.setLevel(logging.INFO)
+ # logging.getLogger('').addHandler(console) # this is root logger
+ self.logger.addHandler(console)
+ self.logger.addHandler(file_handler)
+ self.logger.info('PARAMETER ...')
+ self.logger.info(self.args)
+ self.logger.removeHandler(console)
+
+ def mkdir(self):
+ timestr = str(datetime.datetime.now().strftime('%Y-%m-%d_%H-%M'))
+ experiment_dir = Path('./log/')
+ experiment_dir.mkdir(exist_ok=True)
+ experiment_dir = experiment_dir.joinpath(self.sf)
+ experiment_dir.mkdir(exist_ok=True)
+
+ if self.args.log_dir is None:
+ self.experiment_dir = experiment_dir.joinpath(timestr)
+ else:
+ self.experiment_dir = experiment_dir.joinpath(self.args.log_dir)
+
+ self.experiment_dir.mkdir(exist_ok=True)
+ self.checkpoints_dir = self.experiment_dir.joinpath('checkpoints/')
+ self.checkpoints_dir.mkdir(exist_ok=True)
+ self.log_dir = self.experiment_dir.joinpath('logs/')
+ self.log_dir.mkdir(exist_ok=True)
+
+ def epoch_init(self, training=True):
+ self.loss, self.count, self.pred, self.gt = 0., 0., [], []
+ if training:
+ self.logger.info('Epoch %d/%d:' % (self.epoch, self.args.epoch))
+
+ def step_update(self, pred, gt, loss, training=True):
+ if training:
+ self.step += 1 # Use TensorFlow way to count training steps
+ self.gt.append(gt)
+ self.pred.append(pred)
+ batch_size = len(pred)
+ self.count += batch_size
+ self.loss += loss * batch_size
+
+ def epoch_update(self, training=True, mode='cls'):
+ self.save_model = False
+ self.gt = np.concatenate(self.gt)
+ self.pred = np.concatenate(self.pred)
+
+ instance_acc = metrics.accuracy_score(self.gt, self.pred)
+ if instance_acc > self.best_instance_acc and not training:
+ self.save_model = True if mode == 'cls' else False
+ self.best_instance_acc = instance_acc
+ self.best_instance_epoch = self.epoch
+
+ if mode == 'cls':
+ class_acc = metrics.balanced_accuracy_score(self.gt, self.pred)
+ if class_acc > self.best_class_acc and not training:
+ self.best_class_epoch = self.epoch
+ self.best_class_acc = class_acc
+ return instance_acc, class_acc
+ elif mode == 'partseg':
+ miou = self.calculate_IoU().mean()
+ if miou > self.best_miou and not training:
+ self.best_miou_epoch = self.epoch
+ self.save_model = True
+ self.best_miou = miou
+ return instance_acc, miou
+ else:
+ raise ValueError('Mode is not Supported by TrainLogger')
+
+ def epoch_summary(self, writer=None, training=True, mode='cls'):
+ criteria = 'Class Accuracy' if mode == 'cls' else 'mIoU'
+ instance_acc, class_acc = self.epoch_update(training=training, mode=mode)
+ if training:
+ if writer is not None:
+ writer.add_scalar('Train Instance Accuracy', instance_acc, self.step)
+ writer.add_scalar('Train %s' % criteria, class_acc, self.step)
+ self.logger.info('Train Instance Accuracy: %.3f' % instance_acc)
+ self.logger.info('Train %s: %.3f' % (criteria, class_acc))
+ else:
+ if writer is not None:
+ writer.add_scalar('Test Instance Accuracy', instance_acc, self.step)
+ writer.add_scalar('Test %s' % criteria, class_acc, self.step)
+ self.logger.info('Test Instance Accuracy: %.3f' % instance_acc)
+ self.logger.info('Test %s: %.3f' % (criteria, class_acc))
+ self.logger.info('Best Instance Accuracy: %.3f at Epoch %d ' % (
+ self.best_instance_acc, self.best_instance_epoch))
+ if self.best_class_acc > .1:
+ self.logger.info('Best Class Accuracy: %.3f at Epoch %d' % (
+ self.best_class_acc, self.best_class_epoch))
+ if self.best_miou > .1:
+ self.logger.info('Best mIoU: %.3f at Epoch %d' % (
+ self.best_miou, self.best_miou_epoch))
+
+ self.epoch += 1 if not training else 0
+ if self.save_model:
+ self.logger.info('Saving the Model Params to %s' % self.savepath)
+
+ def calculate_IoU(self):
+ num_class = len(self.cls2name)
+ Intersection = np.zeros(num_class)
+ Union = Intersection.copy()
+ # self.pred -> numpy.ndarray (total predictions, )
+
+ for sem_idx in range(num_class):
+ Intersection[sem_idx] = np.sum(np.logical_and(self.pred == sem_idx, self.gt == sem_idx))
+ Union[sem_idx] = np.sum(np.logical_or(self.pred == sem_idx, self.gt == sem_idx))
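+ # note: a class absent from both pred and gt gives Union == 0, so its
+ # IoU comes out as NaN; np.nanmean is a safer reduction in that case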
+ return Intersection / Union
+
+ def train_summary(self, mode='cls'):
+ self.logger.info('\n\nEnd of Training...')
+ self.logger.info('Best Instance Accuracy: %.3f at Epoch %d ' % (
+ self.best_instance_acc, self.best_instance_epoch))
+ if mode == 'cls':
+ self.logger.info('Best Class Accuracy: %.3f at Epoch %d' % (
+ self.best_class_acc, self.best_class_epoch))
+ elif mode == 'partseg':
+ self.logger.info('Best mIoU: %.3f at Epoch %d' % (
+ self.best_miou, self.best_miou_epoch))
+
+ def update_from_checkpoints(self, checkpoint):
+ self.logger.info('Use Pre-Trained Weights')
+ self.step = checkpoint['step']
+ self.epoch = checkpoint['epoch']
+ self.best_instance_epoch, self.best_instance_acc = checkpoint['epoch'], checkpoint['instance_acc']
+ self.best_class_epoch, self.best_class_acc = checkpoint['best_class_epoch'], checkpoint['best_class_acc']
+ self.logger.info('Best Class Acc {:.3f} at Epoch {}'.format(self.best_class_acc, self.best_class_epoch))
+ self.logger.info('Best Instance Acc {:.3f} at Epoch {}'.format(self.best_instance_acc, self.best_instance_epoch))
diff --git a/zoo/OcCo/OcCo_Torch/utils/Visu_Utility.py b/zoo/OcCo/OcCo_Torch/utils/Visu_Utility.py
new file mode 100644
index 0000000..dae23f7
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/Visu_Utility.py
@@ -0,0 +1,46 @@
+# Copyright (c) 2020. Author: Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/wentaoyuan/pcn/blob/master/visu_util.py
+
+# uncomment following commands if you have saving issues
+# ref: https://stackoverflow.com/questions/13336823/matplotlib-python-error
+# import matplotlib
+# matplotlib.use('Agg')
+from matplotlib import pyplot as plt
+from mpl_toolkits.mplot3d import Axes3D
+
+
+def plot_pcd_three_views(filename, pcds, titles, suptitle='', sizes=None, cmap='viridis', zdir='y',
+ xlim=(-0.3, 0.3), ylim=(-0.3, 0.3), zlim=(-0.3, 0.3)):
+ if sizes is None:
+ sizes = [0.5 for _ in range(len(pcds))]
+ fig = plt.figure(figsize=(len(pcds) * 3, 9))
+ for i in range(3):
+ elev = 30
+ azim = -45 + 90 * i
+ for j, (pcd, size) in enumerate(zip(pcds, sizes)):
+ color = pcd[:, 0]
+ ax = fig.add_subplot(3, len(pcds), i * len(pcds) + j + 1, projection='3d')
+ ax.view_init(elev, azim)
+ ax.scatter(pcd[:, 0], pcd[:, 1], pcd[:, 2], zdir=zdir, c=color, s=size, cmap=cmap, vmin=-1, vmax=0.5)
+ ax.set_title(titles[j])
+ ax.set_axis_off()
+ ax.set_xlim(xlim)
+ ax.set_ylim(ylim)
+ ax.set_zlim(zlim)
+ plt.subplots_adjust(left=0.05, right=0.95, bottom=0.05, top=0.9, wspace=0.1, hspace=0.1)
+ plt.suptitle(suptitle)
+ fig.savefig(filename)
+ plt.close(fig)
+
+
+if __name__ == "__main__":
+ pass
+ # filenames = ['airplane.pcd', 'car.pcd', 'chair.pcd', 'lamp.pcd'] # '../demo_data'
+ # for file in filenames:
+ # filename = file.replace('.pcd', '')
+ # pcds = [np.asarray(read_point_cloud('../demo_data/' + file).points)]
+ # # pdb.set_trace()
+ # titles = ['viewpoint 1', 'viewpoint 2', 'viewpoint 3']
+ # plot_pcd_three_views(
+ # filename, pcds, titles, suptitle=filename, sizes=None, cmap='viridis', zdir='y',
+ # xlim=(-0.3, 0.3), ylim=(-0.3, 0.3), zlim=(-0.3, 0.3))
diff --git a/zoo/OcCo/OcCo_Torch/utils/__init__.py b/zoo/OcCo/OcCo_Torch/utils/__init__.py
new file mode 100644
index 0000000..741971b
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/__init__.py
@@ -0,0 +1,2 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
diff --git a/zoo/OcCo/OcCo_Torch/utils/collect_indoor3d_data.py b/zoo/OcCo/OcCo_Torch/utils/collect_indoor3d_data.py
new file mode 100644
index 0000000..4de17f4
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/collect_indoor3d_data.py
@@ -0,0 +1,26 @@
+# Ref: https://github.com/charlesq34/pointnet/blob/master/sem_seg/collect_indoor3d_data.py
+
+import os, sys, indoor3d_util
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+ROOT_DIR = os.path.dirname(BASE_DIR)
+sys.path.append(BASE_DIR)
+
+anno_paths = [line.rstrip() for line in open(os.path.join(BASE_DIR, 'meta/anno_paths.txt'))]
+anno_paths = [os.path.join(indoor3d_util.DATA_PATH, p) for p in anno_paths]
+
+output_folder = os.path.join(ROOT_DIR, 'data/stanford_indoor3d')
+# output_folder = os.path.join('../data/stanford_indoor3d')
+if not os.path.exists(output_folder):
+ os.mkdir(output_folder)
+
+# Note: there is an extra character in the v1.2 data in Area_5/hallway_6. It's fixed manually.
+# Ref: https://github.com/charlesq34/pointnet/issues/45
+for anno_path in anno_paths:
+ print(anno_path)
+ try:
+ elements = anno_path.split('/')
+ out_filename = elements[-3]+'_'+elements[-2]+'.npy' # e.g., Area_1_hallway_1.npy
+ indoor3d_util.collect_point_label(
+ anno_path, os.path.join(output_folder, out_filename), 'numpy')
+ except Exception:
+ print(anno_path, 'ERROR!!')
diff --git a/zoo/OcCo/OcCo_Torch/utils/gen_indoor3d_h5.py b/zoo/OcCo/OcCo_Torch/utils/gen_indoor3d_h5.py
new file mode 100644
index 0000000..e51c446
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/gen_indoor3d_h5.py
@@ -0,0 +1,98 @@
+# Ref: https://github.com/charlesq34/pointnet/blob/master/sem_seg/gen_indoor3d_h5.py
+
+import os, sys, h5py, indoor3d_util, numpy as np
+
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+ROOT_DIR = os.path.dirname(BASE_DIR)
+sys.path.append(BASE_DIR)
+sys.path.append(os.path.join(ROOT_DIR, 'utils'))
+data_dir = os.path.join(ROOT_DIR, 'data')
+indoor3d_data_dir = os.path.join(data_dir, 'stanford_indoor3d')
+NUM_POINT = 4096
+H5_BATCH_SIZE = 1000
+data_dim = [NUM_POINT, 9]
+label_dim = [NUM_POINT]
+data_dtype = 'float32'
+label_dtype = 'uint8'
+
+# Set paths
+filelist = os.path.join(BASE_DIR, 'meta/all_data_label.txt')
+data_label_files = [os.path.join(indoor3d_data_dir, line.rstrip()) for line in open(filelist)]
+output_dir = os.path.join(data_dir, 'indoor3d_sem_seg_hdf5_data')
+if not os.path.exists(output_dir):
+ os.mkdir(output_dir)
+output_filename_prefix = os.path.join(output_dir, 'ply_data_all')
+output_room_filelist = os.path.join(output_dir, 'room_filelist.txt')
+fout_room = open(output_room_filelist, 'w')
+
+# --------------------------------------
+# ----- BATCH WRITE TO HDF5 -----
+# --------------------------------------
+batch_data_dim = [H5_BATCH_SIZE] + data_dim
+batch_label_dim = [H5_BATCH_SIZE] + label_dim
+h5_batch_data = np.zeros(batch_data_dim, dtype=np.float32)
+h5_batch_label = np.zeros(batch_label_dim, dtype=np.uint8)
+buffer_size = 0 # state: record how many samples are currently in buffer
+h5_index = 0 # state: the next h5 file to save
+
+
+def insert_batch(data, label, last_batch=False):
+ global h5_batch_data, h5_batch_label
+ global buffer_size, h5_index
+
+ def save_h5(h5_filename, data, label, data_dtype='uint8', label_dtype='uint8'):
+ h5_fout = h5py.File(h5_filename, 'w')
+ h5_fout.create_dataset(
+ name='data', data=data,
+ compression='gzip', compression_opts=4,
+ dtype=data_dtype)
+ h5_fout.create_dataset(
+ name='label', data=label,
+ compression='gzip', compression_opts=1,
+ dtype=label_dtype)
+ h5_fout.close()
+
+ data_size = data.shape[0]
+ # If there is enough space, just insert
+ if buffer_size + data_size <= h5_batch_data.shape[0]:
+ h5_batch_data[buffer_size:buffer_size + data_size, ...] = data
+ h5_batch_label[buffer_size:buffer_size + data_size] = label
+ buffer_size += data_size
+ else: # not enough space
+ capacity = h5_batch_data.shape[0] - buffer_size
+ assert (capacity >= 0)
+ if capacity > 0:
+ h5_batch_data[buffer_size:buffer_size + capacity, ...] = data[0:capacity, ...]
+ h5_batch_label[buffer_size:buffer_size + capacity, ...] = label[0:capacity, ...]
+ # Save batch data and label to h5 file, reset buffer_size
+ h5_filename = output_filename_prefix + '_' + str(h5_index) + '.h5'
+ save_h5(h5_filename, h5_batch_data, h5_batch_label, data_dtype, label_dtype)
+ print('Stored {0} with size {1}'.format(h5_filename, h5_batch_data.shape[0]))
+ h5_index += 1
+ buffer_size = 0
+ # recursive call
+ insert_batch(data[capacity:, ...], label[capacity:, ...], last_batch)
+ if last_batch and buffer_size > 0:
+ h5_filename = output_filename_prefix + '_' + str(h5_index) + '.h5'
+ save_h5(h5_filename, h5_batch_data[0:buffer_size, ...],
+ h5_batch_label[0:buffer_size, ...], data_dtype, label_dtype)
+ print('Stored {0} with size {1}'.format(h5_filename, buffer_size))
+ h5_index += 1
+ buffer_size = 0
+ return
+
+
+sample_cnt = 0
+for i, data_label_filename in enumerate(data_label_files):
+ print(data_label_filename)
+ data, label = indoor3d_util.room2blocks_wrapper_normalized(
+ data_label_filename, NUM_POINT, block_size=1.0, stride=0.5, random_sample=False, sample_num=None)
+ print('{0}, {1}'.format(data.shape, label.shape))
+ for _ in range(data.shape[0]):
+ fout_room.write(os.path.basename(data_label_filename)[0:-4] + '\n')
+
+ sample_cnt += data.shape[0]
+ insert_batch(data, label, i == len(data_label_files) - 1)
+
+fout_room.close()
+print("Total samples: {0}".format(sample_cnt))
diff --git a/zoo/OcCo/OcCo_Torch/utils/indoor3d_util.py b/zoo/OcCo/OcCo_Torch/utils/indoor3d_util.py
new file mode 100644
index 0000000..b6acbf5
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/indoor3d_util.py
@@ -0,0 +1,606 @@
+# Ref: https://github.com/charlesq34/pointnet/blob/master/sem_seg/indoor3d_util.py
+import os, sys, glob, numpy as np
+
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+ROOT_DIR = os.path.dirname(BASE_DIR)
+sys.path.append(BASE_DIR)
+
+# -----------------------------------------------------------------------------
+# CONSTANTS
+# -----------------------------------------------------------------------------
+
+DATA_PATH = os.path.join(ROOT_DIR, 'data', 'Stanford3dDataset_v1.2_Aligned_Version')
+g_classes = [x.rstrip() for x in open(os.path.join(BASE_DIR, 'meta/s3dis/class_names.txt'))]
+g_class2label = {cls: i for i, cls in enumerate(g_classes)}
+g_class2color = {'ceiling': [0, 255, 0],
+ 'floor': [0, 0, 255],
+ 'wall': [0, 255, 255],
+ 'beam': [255, 255, 0],
+ 'column': [255, 0, 255],
+ 'window': [100, 100, 255],
+ 'door': [200, 200, 100],
+ 'table': [170, 120, 200],
+ 'chair': [255, 0, 0],
+ 'sofa': [200, 100, 100],
+ 'bookcase': [10, 200, 100],
+ 'board': [200, 200, 200],
+ 'clutter': [50, 50, 50]}
+g_easy_view_labels = [7, 8, 9, 10, 11, 1]
+g_label2color = {g_classes.index(cls): g_class2color[cls] for cls in g_classes}
+
+
+# -----------------------------------------------------------------------------
+# CONVERT ORIGINAL DATA TO OUR DATA_LABEL FILES
+# -----------------------------------------------------------------------------
+
+def collect_point_label(anno_path, out_filename, file_format='txt'):
+ """ Convert original dataset files to data_label file (each line is XYZRGBL).
+ We aggregated all the points from each instance in the room.
+
+ Args:
+ anno_path: path to annotations. e.g. Area_1/office_2/Annotations/
+ out_filename: path to save collected points and labels (each line is XYZRGBL)
+ file_format: txt or numpy, determines what file format to save.
+ Returns:
+ None
+ Note:
+ the points are shifted before save, the most negative point is now at origin.
+ """
+ points_list = []
+ for f in glob.glob(os.path.join(anno_path, '*.txt')):
+ cls = os.path.basename(f).split('_')[0]
+ # print(f)
+ if cls not in g_classes: # note: some rooms contain a 'stairs' class not in the 13-class list
+ cls = 'clutter'
+
+ points = np.loadtxt(f)
+ labels = np.ones((points.shape[0], 1)) * g_class2label[cls]
+ points_list.append(np.concatenate([points, labels], 1)) # Nx7
+
+ data_label = np.concatenate(points_list, 0)
+ xyz_min = np.amin(data_label, axis=0)[0:3]
+ data_label[:, 0:3] -= xyz_min
+
+ if file_format == 'txt':
+ fout = open(out_filename, 'w')
+ for i in range(data_label.shape[0]):
+ fout.write('%f %f %f %d %d %d %d\n' %
+ (data_label[i, 0], data_label[i, 1], data_label[i, 2],
+ data_label[i, 3], data_label[i, 4], data_label[i, 5],
+ data_label[i, 6]))
+ fout.close()
+ elif file_format == 'numpy':
+ np.save(out_filename, data_label)
+ else:
+ print('ERROR!! Unknown file format: %s, please use txt or numpy.' % file_format)
+ exit()
+
+
+def data_to_obj(data, name='example.obj', no_wall=True):
+ fout = open(name, 'w')
+ label = data[:, -1].astype(int)
+ for i in range(data.shape[0]):
+ if no_wall and ((label[i] == 2) or (label[i] == 0)):
+ continue
+ fout.write('v %f %f %f %d %d %d\n' % \
+ (data[i, 0], data[i, 1], data[i, 2], data[i, 3], data[i, 4], data[i, 5]))
+ fout.close()
+
+
+def point_label_to_obj(input_filename, out_filename, label_color=True, easy_view=False, no_wall=False):
+ """ For visualization of a room from data_label file,
+ input_filename: each line is X Y Z R G B L
+ out_filename: OBJ filename,
+ visualize input file by coloring point with label color
+ easy_view: only visualize furnitures and floor
+ """
+ data_label = np.loadtxt(input_filename)
+ data = data_label[:, 0:6]
+ label = data_label[:, -1].astype(int)
+ fout = open(out_filename, 'w')
+ for i in range(data.shape[0]):
+ color = g_label2color[label[i]]
+ if easy_view and (label[i] not in g_easy_view_labels):
+ continue
+ if no_wall and ((label[i] == 2) or (label[i] == 0)):
+ continue
+ if label_color:
+ fout.write('v %f %f %f %d %d %d\n' % \
+ (data[i, 0], data[i, 1], data[i, 2], color[0], color[1], color[2]))
+ else:
+ fout.write('v %f %f %f %d %d %d\n' % \
+ (data[i, 0], data[i, 1], data[i, 2], data[i, 3], data[i, 4], data[i, 5]))
+ fout.close()
+
+
+# -----------------------------------------------------------------------------
+# PREPARE BLOCK DATA FOR NETWORK TRAINING/TESTING
+# -----------------------------------------------------------------------------
+
+def sample_data(data, num_sample):
+ """ data is in N x ...
+ we want to keep (num_sample, C) of them.
+ if N > num_sample, we will randomly keep num_sample of them.
+ if N < num_sample, we will randomly duplicate samples.
+ """
+ N = data.shape[0]
+ if N == num_sample:
+ return data, range(N)
+ elif N > num_sample:
+ sample = np.random.choice(N, num_sample)
+ return data[sample, ...], sample
+ else:
+ sample = np.random.choice(N, num_sample - N)
+ dup_data = data[sample, ...]
+ return np.concatenate([data, dup_data], 0), list(range(N)) + list(sample)
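+
+# Shape sketch (illustrative values):
+#   sub, idx = sample_data(np.zeros((500, 6)), 128)    # sub: (128, 6)
+#   up, idx = sample_data(np.zeros((500, 6)), 1024)    # up: (1024, 6); the first
+#   # 500 rows are kept and 524 randomly chosen rows are duplicated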
+
+
+def sample_data_label(data, label, num_sample):
+ # randomly sub select or duplicate for up-sampling
+ new_data, sample_indices = sample_data(data, num_sample)
+ new_label = label[sample_indices]
+ return new_data, new_label
+
+
+def room2blocks(data, label, num_point, block_size=1.0, stride=1.0,
+ random_sample=False, sample_num=None, sample_aug=1):
+ """ Prepare block training data.
+ Args:
+ data: N x 6 numpy array, 012 are XYZ in meters, 345 are RGB in [0,1]
+ assumes the data is shifted (min point is origin) and aligned
+ (aligned with XYZ axis)
+ label: N size uint8 numpy array from 0-12
+ num_point: int, how many points to sample in each block
+ block_size: float, physical size of the block in meters
+ stride: float, stride for block sweeping
+ random_sample: bool, if True, we will randomly sample blocks in the room
+ sample_num: int, if random sample, how many blocks to sample
+ [default: room area]
+ sample_aug: if random sample, how much aug
+ Returns:
+ block_datas: K x num_point x 6 np array of XYZRGB, RGB is in [0,1]
+        block_labels: K x num_point np array of uint8 labels
+
+    TODO: in this version, blocking follows a fixed, non-overlapping pattern.
+ """
+ assert (stride <= block_size)
+
+ limit = np.amax(data, 0)[0:3]
+
+ # Get the corner location for our sampling blocks
+ xbeg_list = []
+ ybeg_list = []
+ if not random_sample:
+ num_block_x = int(np.ceil((limit[0] - block_size) / stride)) + 1
+            num_block_y = int(np.ceil((limit[1] - block_size) / stride)) + 1
+ for i in range(num_block_x):
+ for j in range(num_block_y):
+ xbeg_list.append(i * stride)
+ ybeg_list.append(j * stride)
+ else: # random sample blocks from the room, not used in gen_indoor3d_h5.py
+ num_block_x = int(np.ceil(limit[0] / block_size))
+ num_block_y = int(np.ceil(limit[1] / block_size))
+ if sample_num is None:
+ sample_num = num_block_x * num_block_y * sample_aug
+ for _ in range(sample_num):
+ xbeg = np.random.uniform(-block_size, limit[0])
+ ybeg = np.random.uniform(-block_size, limit[1])
+ xbeg_list.append(xbeg)
+ ybeg_list.append(ybeg)
+
+ # Collect blocks
+ block_data_list = []
+ block_label_list = []
+ for idx in range(len(xbeg_list)):
+ xbeg = xbeg_list[idx]
+ ybeg = ybeg_list[idx]
+ # xcond -> bool array with a shape of (Num_Total_Points, )
+ xcond = (data[:, 0] <= xbeg + block_size) & (data[:, 0] >= xbeg)
+ ycond = (data[:, 1] <= ybeg + block_size) & (data[:, 1] >= ybeg)
+ cond = xcond & ycond
+ if np.sum(cond) < 100: # discard block if there are less than 100 pts.
+ continue
+
+ block_data = data[cond, :]
+ block_label = label[cond]
+
+ # randomly subsample data
+ block_data_sampled, block_label_sampled = \
+ sample_data_label(block_data, block_label, num_point)
+ block_data_list.append(np.expand_dims(block_data_sampled, 0))
+ block_label_list.append(np.expand_dims(block_label_sampled, 0))
+
+ return np.concatenate(block_data_list, 0), np.concatenate(block_label_list, 0)
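+
+# Usage sketch (hypothetical room): for a shifted, axis-aligned room with
+# data of shape (N, 6) and labels of shape (N,),
+#   blocks, labels = room2blocks(data, label, num_point=4096,
+#                                block_size=1.0, stride=0.5)
+# returns blocks of shape (K, 4096, 6) and labels of shape (K, 4096).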
+
+
+def room2blocks_plus(data_label, num_point, block_size, stride,
+ random_sample, sample_num, sample_aug):
+ """ room2block with input filename and RGB pre-processing.
+ """
+ data = data_label[:, 0:6]
+ data[:, 3:6] /= 255.0
+ label = data_label[:, -1].astype(np.uint8)
+
+ return room2blocks(data, label, num_point, block_size, stride,
+ random_sample, sample_num, sample_aug)
+
+
+def room2blocks_wrapper(data_label_filename, num_point, block_size=1.0, stride=1.0,
+ random_sample=False, sample_num=None, sample_aug=1):
+ if data_label_filename[-3:] == 'txt':
+ data_label = np.loadtxt(data_label_filename)
+ elif data_label_filename[-3:] == 'npy':
+ data_label = np.load(data_label_filename)
+ else:
+ print('Unknown file type! exiting.')
+ exit()
+ return room2blocks_plus(data_label, num_point, block_size, stride,
+ random_sample, sample_num, sample_aug)
+
+
+def room2blocks_plus_normalized(data_label, num_point, block_size, stride,
+ random_sample, sample_num, sample_aug):
+ """ room2block, with input filename and RGB preprocessing.
+ for each block centralize XYZ, add normalized XYZ as 678 channels
+ """
+ data = data_label[:, 0:6]
+ data[:, 3:6] /= 255.0
+ label = data_label[:, -1].astype(np.uint8)
+ max_room_x = max(data[:, 0])
+ max_room_y = max(data[:, 1])
+ max_room_z = max(data[:, 2])
+
+ data_batch, label_batch = room2blocks(data, label, num_point, block_size, stride,
+ random_sample, sample_num, sample_aug)
+ new_data_batch = np.zeros((data_batch.shape[0], num_point, 9))
+ for b in range(data_batch.shape[0]):
+ new_data_batch[b, :, 6] = data_batch[b, :, 0] / max_room_x
+ new_data_batch[b, :, 7] = data_batch[b, :, 1] / max_room_y
+ new_data_batch[b, :, 8] = data_batch[b, :, 2] / max_room_z
+ minx = min(data_batch[b, :, 0])
+ miny = min(data_batch[b, :, 1])
+ data_batch[b, :, 0] -= (minx + block_size / 2)
+ data_batch[b, :, 1] -= (miny + block_size / 2)
+ new_data_batch[:, :, 0:6] = data_batch
+ return new_data_batch, label_batch
+
+
+def room2blocks_wrapper_normalized(data_label_filename, num_point, block_size=1.0, stride=1.0,
+ random_sample=False, sample_num=None, sample_aug=1):
+ if data_label_filename[-3:] == 'txt':
+ data_label = np.loadtxt(data_label_filename)
+ elif data_label_filename[-3:] == 'npy':
+ data_label = np.load(data_label_filename)
+ else:
+ print('Unknown file type! exiting.')
+ exit()
+ return room2blocks_plus_normalized(data_label, num_point, block_size, stride,
+ random_sample, sample_num, sample_aug)
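+
+# Usage sketch (hypothetical file): the nine output channels are
+# [x, y, z, r, g, b, x/max_x, y/max_y, z/max_z], with XYZ centred per block:
+#   data_batch, label_batch = room2blocks_wrapper_normalized(
+#       'Area_5_office_1.npy', num_point=4096, block_size=1.0, stride=0.5)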
+
+
+def room2samples(data, label, sample_num_point):
+ """ Prepare whole room samples.
+
+ Args:
+ data: N x 6 numpy array, 012 are XYZ in meters, 345 are RGB in [0,1]
+ assumes the data is shifted (min point is origin) and
+ aligned (aligned with XYZ axis)
+ label: N size uint8 numpy array from 0-12
+ sample_num_point: int, how many points to sample in each sample
+ Returns:
+        sample_datas: K x sample_num_point x 6
+                     numpy array of XYZRGB, RGB is in [0,1]
+ sample_labels: K x sample_num_point x 1 np array of uint8 labels
+ """
+ N = data.shape[0]
+ order = np.arange(N)
+ np.random.shuffle(order)
+ data = data[order, :]
+ label = label[order]
+
+ batch_num = int(np.ceil(N / float(sample_num_point)))
+ sample_datas = np.zeros((batch_num, sample_num_point, 6))
+ sample_labels = np.zeros((batch_num, sample_num_point, 1))
+
+ for i in range(batch_num):
+ beg_idx = i * sample_num_point
+ end_idx = min((i + 1) * sample_num_point, N)
+ num = end_idx - beg_idx
+ sample_datas[i, 0:num, :] = data[beg_idx:end_idx, :]
+ sample_labels[i, 0:num, 0] = label[beg_idx:end_idx]
+ if num < sample_num_point:
+ makeup_indices = np.random.choice(N, sample_num_point - num)
+ sample_datas[i, num:, :] = data[makeup_indices, :]
+ sample_labels[i, num:, 0] = label[makeup_indices]
+ return sample_datas, sample_labels
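+
+# Shape sketch: a room with N points yields K = ceil(N / sample_num_point)
+# samples; the last sample is padded with randomly re-drawn points:
+#   sample_datas, sample_labels = room2samples(data, label, 4096)
+#   # sample_datas: (K, 4096, 6), sample_labels: (K, 4096, 1)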
+
+
+def room2samples_plus_normalized(data_label, num_point):
+ """ room2sample, with input filename and RGB preprocessing.
+ for each block centralize XYZ, add normalized XYZ as 678 channels
+ """
+ data = data_label[:, 0:6]
+ data[:, 3:6] /= 255.0
+ label = data_label[:, -1].astype(np.uint8)
+ max_room_x = max(data[:, 0])
+ max_room_y = max(data[:, 1])
+ max_room_z = max(data[:, 2])
+
+ data_batch, label_batch = room2samples(data, label, num_point)
+ new_data_batch = np.zeros((data_batch.shape[0], num_point, 9))
+ for b in range(data_batch.shape[0]):
+ new_data_batch[b, :, 6] = data_batch[b, :, 0] / max_room_x
+ new_data_batch[b, :, 7] = data_batch[b, :, 1] / max_room_y
+ new_data_batch[b, :, 8] = data_batch[b, :, 2] / max_room_z
+ # minx = min(data_batch[b, :, 0])
+ # miny = min(data_batch[b, :, 1])
+ # data_batch[b, :, 0] -= (minx+block_size/2)
+ # data_batch[b, :, 1] -= (miny+block_size/2)
+ new_data_batch[:, :, 0:6] = data_batch
+ return new_data_batch, label_batch
+
+
+def room2samples_wrapper_normalized(data_label_filename, num_point):
+ if data_label_filename[-3:] == 'txt':
+ data_label = np.loadtxt(data_label_filename)
+ elif data_label_filename[-3:] == 'npy':
+ data_label = np.load(data_label_filename)
+ else:
+ print('Unknown file type! exiting.')
+ exit()
+ return room2samples_plus_normalized(data_label, num_point)
+
+
+# -----------------------------------------------------------------------------
+# EXTRACT INSTANCE BBOX FROM ORIGINAL DATA (for detection evaluation)
+# -----------------------------------------------------------------------------
+
+def collect_bounding_box(anno_path, out_filename):
+ """ Compute bounding boxes from each instance in original dataset files on
+ one room. **We assume the bbox is aligned with XYZ coordinate.**
+
+ Args:
+ anno_path: path to annotations. e.g. Area_1/office_2/Annotations/
+ out_filename: path to save instance bounding boxes for that room.
+ each line is x1 y1 z1 x2 y2 z2 label,
+ where (x1,y1,z1) is the point on the diagonal closer to origin
+ Returns:
+ None
+ Note:
+ room points are shifted, the most negative point is now at origin.
+ """
+ bbox_label_list = []
+
+ for f in glob.glob(os.path.join(anno_path, '*.txt')):
+ cls = os.path.basename(f).split('_')[0]
+        if cls not in g_classes:  # note: some rooms contain an extra 'stairs' class; map it to 'clutter'
+ cls = 'clutter'
+ points = np.loadtxt(f)
+ label = g_class2label[cls]
+ # Compute tightest axis aligned bounding box
+ xyz_min = np.amin(points[:, 0:3], axis=0)
+ xyz_max = np.amax(points[:, 0:3], axis=0)
+ ins_bbox_label = np.expand_dims(
+ np.concatenate([xyz_min, xyz_max, np.array([label])], 0), 0)
+ bbox_label_list.append(ins_bbox_label)
+
+ bbox_label = np.concatenate(bbox_label_list, 0)
+ room_xyz_min = np.amin(bbox_label[:, 0:3], axis=0)
+ bbox_label[:, 0:3] -= room_xyz_min
+ bbox_label[:, 3:6] -= room_xyz_min
+
+ fout = open(out_filename, 'w')
+ for i in range(bbox_label.shape[0]):
+ fout.write('%f %f %f %f %f %f %d\n' % \
+ (bbox_label[i, 0], bbox_label[i, 1], bbox_label[i, 2],
+ bbox_label[i, 3], bbox_label[i, 4], bbox_label[i, 5],
+ bbox_label[i, 6]))
+ fout.close()
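+
+# A minimal usage sketch (hypothetical paths):
+#   collect_bounding_box('Area_1/office_2/Annotations',
+#                        'Area_1_office_2_bbox.txt')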
+
+
+def bbox_label_to_obj(input_filename, out_filename_prefix, easy_view=False):
+ """ Visualization of bounding boxes.
+
+ Args:
+ input_filename: each line is x1 y1 z1 x2 y2 z2 label
+ out_filename_prefix: OBJ filename prefix,
+ visualize object by g_label2color
+ easy_view: if True, only visualize furniture and floor
+ Returns:
+ output a list of OBJ file and MTL files with the same prefix
+ """
+ bbox_label = np.loadtxt(input_filename)
+ bbox = bbox_label[:, 0:6]
+ label = bbox_label[:, -1].astype(int)
+ v_cnt = 0 # count vertex
+ ins_cnt = 0 # count instance
+ for i in range(bbox.shape[0]):
+ if easy_view and (label[i] not in g_easy_view_labels):
+ continue
+ obj_filename = out_filename_prefix + '_' + g_classes[label[i]] + '_' + str(ins_cnt) + '.obj'
+ mtl_filename = out_filename_prefix + '_' + g_classes[label[i]] + '_' + str(ins_cnt) + '.mtl'
+ fout_obj = open(obj_filename, 'w')
+ fout_mtl = open(mtl_filename, 'w')
+ fout_obj.write('mtllib %s\n' % (os.path.basename(mtl_filename)))
+
+ length = bbox[i, 3:6] - bbox[i, 0:3]
+ a = length[0]
+ b = length[1]
+ c = length[2]
+ x = bbox[i, 0]
+ y = bbox[i, 1]
+ z = bbox[i, 2]
+ color = np.array(g_label2color[label[i]], dtype=float) / 255.0
+
+ material = 'material%d' % ins_cnt
+ fout_obj.write('usemtl %s\n' % material)
+ fout_obj.write('v %f %f %f\n' % (x, y, z + c))
+ fout_obj.write('v %f %f %f\n' % (x, y + b, z + c))
+ fout_obj.write('v %f %f %f\n' % (x + a, y + b, z + c))
+ fout_obj.write('v %f %f %f\n' % (x + a, y, z + c))
+ fout_obj.write('v %f %f %f\n' % (x, y, z))
+ fout_obj.write('v %f %f %f\n' % (x, y + b, z))
+ fout_obj.write('v %f %f %f\n' % (x + a, y + b, z))
+ fout_obj.write('v %f %f %f\n' % (x + a, y, z))
+ fout_obj.write('g default\n')
+        v_cnt = 0  # vertex indices restart in each per-box OBJ file
+ fout_obj.write('f %d %d %d %d\n' % (4 + v_cnt, 3 + v_cnt, 2 + v_cnt, 1 + v_cnt))
+ fout_obj.write('f %d %d %d %d\n' % (1 + v_cnt, 2 + v_cnt, 6 + v_cnt, 5 + v_cnt))
+ fout_obj.write('f %d %d %d %d\n' % (7 + v_cnt, 6 + v_cnt, 2 + v_cnt, 3 + v_cnt))
+ fout_obj.write('f %d %d %d %d\n' % (4 + v_cnt, 8 + v_cnt, 7 + v_cnt, 3 + v_cnt))
+ fout_obj.write('f %d %d %d %d\n' % (5 + v_cnt, 8 + v_cnt, 4 + v_cnt, 1 + v_cnt))
+ fout_obj.write('f %d %d %d %d\n' % (5 + v_cnt, 6 + v_cnt, 7 + v_cnt, 8 + v_cnt))
+ fout_obj.write('\n')
+
+ fout_mtl.write('newmtl %s\n' % material)
+ fout_mtl.write('Kd %f %f %f\n' % (color[0], color[1], color[2]))
+ fout_mtl.write('\n')
+ fout_obj.close()
+ fout_mtl.close()
+
+ v_cnt += 8
+ ins_cnt += 1
+
+
+def bbox_label_to_obj_room(input_filename, out_filename_prefix, easy_view=False,
+ permute=None, center=False, exclude_table=False):
+ """ Visualization of bounding boxes.
+
+ Args:
+ input_filename: each line is x1 y1 z1 x2 y2 z2 label
+ out_filename_prefix: OBJ filename prefix,
+ visualize object by g_label2color
+ easy_view: if True, only visualize furniture and floor
+ permute: if not None, permute XYZ for rendering, e.g. [0 2 1]
+ center: if True, move obj to have zero origin
+ Returns:
+ output a list of OBJ file and MTL files with the same prefix
+ """
+ bbox_label = np.loadtxt(input_filename)
+ bbox = bbox_label[:, 0:6]
+ if permute is not None:
+ assert (len(permute) == 3)
+ permute = np.array(permute)
+ bbox[:, 0:3] = bbox[:, permute]
+ bbox[:, 3:6] = bbox[:, permute + 3]
+ if center:
+ xyz_max = np.amax(bbox[:, 3:6], 0)
+ bbox[:, 0:3] -= (xyz_max / 2.0)
+ bbox[:, 3:6] -= (xyz_max / 2.0)
+ bbox /= np.max(xyz_max / 2.0)
+ label = bbox_label[:, -1].astype(int)
+ obj_filename = out_filename_prefix + '.obj'
+ mtl_filename = out_filename_prefix + '.mtl'
+
+ fout_obj = open(obj_filename, 'w')
+ fout_mtl = open(mtl_filename, 'w')
+ fout_obj.write('mtllib %s\n' % (os.path.basename(mtl_filename)))
+ v_cnt = 0 # count vertex
+ ins_cnt = 0 # count instance
+ for i in range(bbox.shape[0]):
+ if easy_view and (label[i] not in g_easy_view_labels):
+ continue
+ if exclude_table and label[i] == g_classes.index('table'):
+ continue
+
+ length = bbox[i, 3:6] - bbox[i, 0:3]
+ a = length[0]
+ b = length[1]
+ c = length[2]
+ x = bbox[i, 0]
+ y = bbox[i, 1]
+ z = bbox[i, 2]
+ color = np.array(g_label2color[label[i]], dtype=float) / 255.0
+
+ material = 'material%d' % ins_cnt
+ fout_obj.write('usemtl %s\n' % material)
+ fout_obj.write('v %f %f %f\n' % (x, y, z + c))
+ fout_obj.write('v %f %f %f\n' % (x, y + b, z + c))
+ fout_obj.write('v %f %f %f\n' % (x + a, y + b, z + c))
+ fout_obj.write('v %f %f %f\n' % (x + a, y, z + c))
+ fout_obj.write('v %f %f %f\n' % (x, y, z))
+ fout_obj.write('v %f %f %f\n' % (x, y + b, z))
+ fout_obj.write('v %f %f %f\n' % (x + a, y + b, z))
+ fout_obj.write('v %f %f %f\n' % (x + a, y, z))
+ fout_obj.write('g default\n')
+ fout_obj.write('f %d %d %d %d\n' % (4 + v_cnt, 3 + v_cnt, 2 + v_cnt, 1 + v_cnt))
+ fout_obj.write('f %d %d %d %d\n' % (1 + v_cnt, 2 + v_cnt, 6 + v_cnt, 5 + v_cnt))
+ fout_obj.write('f %d %d %d %d\n' % (7 + v_cnt, 6 + v_cnt, 2 + v_cnt, 3 + v_cnt))
+ fout_obj.write('f %d %d %d %d\n' % (4 + v_cnt, 8 + v_cnt, 7 + v_cnt, 3 + v_cnt))
+ fout_obj.write('f %d %d %d %d\n' % (5 + v_cnt, 8 + v_cnt, 4 + v_cnt, 1 + v_cnt))
+ fout_obj.write('f %d %d %d %d\n' % (5 + v_cnt, 6 + v_cnt, 7 + v_cnt, 8 + v_cnt))
+ fout_obj.write('\n')
+
+ fout_mtl.write('newmtl %s\n' % material)
+ fout_mtl.write('Kd %f %f %f\n' % (color[0], color[1], color[2]))
+ fout_mtl.write('\n')
+
+ v_cnt += 8
+ ins_cnt += 1
+
+ fout_obj.close()
+ fout_mtl.close()
+
+
+def collect_point_bounding_box(anno_path, out_filename, file_format):
+ """ Compute bounding boxes from each instance in original dataset files on
+ one room. **We assume the bbox is aligned with XYZ coordinate.**
+ Save both the point XYZRGB and the bounding box for the point's
+ parent element.
+
+ Args:
+ anno_path: path to annotations. e.g. Area_1/office_2/Annotations/
+ out_filename: path to save instance bounding boxes for each point,
+ plus the point's XYZRGBL
+ each line is XYZRGBL offsetX offsetY offsetZ a b c,
+            where cx = X+offsetX, cy = Y+offsetY, cz = Z+offsetZ
+ where (cx,cy,cz) is center of the box, a,b,c are distances from center
+ to the surfaces of the box, i.e. x1 = cx-a, x2 = cx+a, y1=cy-b etc.
+ file_format: output file format, txt or numpy
+ Returns:
+ None
+
+ Note:
+ room points are shifted, the most negative point is now at origin.
+ """
+ point_bbox_list = []
+
+ for f in glob.glob(os.path.join(anno_path, '*.txt')):
+ cls = os.path.basename(f).split('_')[0]
+        if cls not in g_classes:  # note: some rooms contain an extra 'stairs' class; map it to 'clutter'
+ cls = 'clutter'
+ points = np.loadtxt(f) # Nx6
+        label = g_class2label[cls]  # scalar class index
+ # Compute tightest axis aligned bounding box
+ xyz_min = np.amin(points[:, 0:3], axis=0) # 3,
+ xyz_max = np.amax(points[:, 0:3], axis=0) # 3,
+ xyz_center = (xyz_min + xyz_max) / 2
+ dimension = (xyz_max - xyz_min) / 2
+
+ xyz_offsets = xyz_center - points[:, 0:3] # Nx3
+ dimensions = np.ones((points.shape[0], 3)) * dimension # Nx3
+ labels = np.ones((points.shape[0], 1)) * label # N
+ point_bbox_list.append(np.concatenate([points, labels,
+ xyz_offsets, dimensions], 1)) # Nx13
+
+    point_bbox = np.concatenate(point_bbox_list, 0)  # (sum of N_i) x 13
+ room_xyz_min = np.amin(point_bbox[:, 0:3], axis=0)
+ point_bbox[:, 0:3] -= room_xyz_min
+
+ if file_format == 'txt':
+ fout = open(out_filename, 'w')
+ for i in range(point_bbox.shape[0]):
+ fout.write('%f %f %f %d %d %d %d %f %f %f %f %f %f\n' %
+ (point_bbox[i, 0], point_bbox[i, 1], point_bbox[i, 2],
+ point_bbox[i, 3], point_bbox[i, 4], point_bbox[i, 5],
+ point_bbox[i, 6],
+ point_bbox[i, 7], point_bbox[i, 8], point_bbox[i, 9],
+ point_bbox[i, 10], point_bbox[i, 11], point_bbox[i, 12]))
+
+ fout.close()
+ elif file_format == 'numpy':
+ np.save(out_filename, point_bbox)
+ else:
+ print('ERROR!! Unknown file format: %s, please use txt or numpy.' % file_format)
+ exit()
diff --git a/zoo/OcCo/OcCo_Torch/utils/lmdb2hdf5.py b/zoo/OcCo/OcCo_Torch/utils/lmdb2hdf5.py
new file mode 100644
index 0000000..967834a
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/lmdb2hdf5.py
@@ -0,0 +1,117 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import os, h5py, json, argparse, numpy as np
+from LMDB_DataFlow import lmdb_dataflow
+from tqdm import tqdm
+
+
+def fix2len(point_cloud, fix_length):
+ if len(point_cloud) >= fix_length:
+ point_cloud = point_cloud[np.random.choice(len(point_cloud), fix_length)]
+ else:
+ point_cloud = np.concatenate(
+ [point_cloud, point_cloud[np.random.choice(len(point_cloud), fix_length - len(point_cloud))]], axis=0)
+ return point_cloud
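+
+# Behaviour sketch: both directions resample with replacement, e.g.
+#   fix2len(np.zeros((700, 3)), 1024).shape    # -> (1024, 3)
+#   fix2len(np.zeros((4096, 3)), 1024).shape   # -> (1024, 3)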
+
+
+if __name__ == "__main__":
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--fname", type=str, default='train')
+ parser.add_argument("--lmdb_path", type=str, default=r'../data/modelnet40_pcn/')
+ parser.add_argument("--hdf5_path", type=str, default=r'../data/modelnet40_pcn/hdf5_partial_1024')
+ parser.add_argument("--partial", action='store_true', help='store partial scan or not')
+ parser.add_argument('--num_per_obj', type=int, default=1024)
+ parser.add_argument('--num_scan', type=int, default=10)
+
+ args = parser.parse_args()
+
+    lmdb_file = os.path.join(args.lmdb_path, args.fname + '.lmdb')
+ os.system('mkdir -p %s' % args.hdf5_path)
+ df_train, num_train = lmdb_dataflow(
+ lmdb_path=lmdb_file, batch_size=1, input_size=args.num_per_obj,
+ output_size=args.num_per_obj, is_training=False)
+
+ if args.partial:
+        print('Now we generate point clouds from partially observed objects.')
+
+    file_per_h5 = 2048 * 4  # number of objects within each hdf5 file
+ data_gen = df_train.get_data()
+
+ idx = 0
+ data_np = np.zeros((file_per_h5, args.num_per_obj, 3))
+ label_np = np.zeros((file_per_h5,), dtype=np.int32)
+ ids_np = np.chararray((file_per_h5,), itemsize=32)
+
+ # convert label string to integers
+ hash_label = json.load(open('../data/shapenet_names.json'))
+    f_open = open(os.path.join(args.hdf5_path, '%s_file.txt' % args.fname), 'a+')
+
+ for i in tqdm(range(num_train)):
+        '''each object appears under args.num_scan different partial views'''
+
+ ids, inputs, npts, gt = next(data_gen)
+ object_pc = inputs[0] if args.partial else gt[0]
+
+ if len(object_pc) != args.num_per_obj:
+ object_pc = fix2len(object_pc, args.num_per_obj)
+ if args.partial:
+ data_np[i % file_per_h5, :, :] = object_pc
+ label_np[i % file_per_h5] = int(hash_label[(ids[0].split('_')[0])])
+ ids_np[i % file_per_h5] = ids[0] # .split('_')[1]
+
+ else:
+ if i % args.num_scan != 0:
+ continue
+ data_np[(i // args.num_scan) % file_per_h5, :, :] = object_pc
+ label_np[(i // args.num_scan) % file_per_h5] = int(hash_label[(ids[0].split('_')[0])])
+ ids_np[(i // args.num_scan) % file_per_h5] = ids[0].split('_')[1]
+
+ num_obj_ = i if args.partial else i // args.num_scan
+
+ if num_obj_ - idx * file_per_h5 >= file_per_h5:
+            h5_file = os.path.join(args.hdf5_path, '%s%d.h5' % (args.fname, idx))
+            print("the last two objects' coordinates, labels and ids:")
+ print(data_np[-2:])
+ print(label_np[-2:])
+ print(ids_np[-2:])
+ print('\n')
+
+ hf = h5py.File(h5_file, 'w')
+ hf.create_dataset('data', data=data_np)
+ hf.create_dataset('label', data=label_np)
+ hf.create_dataset('id', data=ids_np)
+ hf.close()
+
+ f_open.writelines(h5_file.replace('../', './') + '\n')
+            print('%s%d.h5 has been saved' % (args.fname, idx))
+ print('====================\n\n')
+ idx += 1
+
+    '''deal with the remaining objects at the end'''
+    h5_file = os.path.join(args.hdf5_path, '%s%d.h5' % (args.fname, idx))
+ hf = h5py.File(h5_file, 'w')
+
+ if args.partial:
+ label_res = label_np[:num_train % file_per_h5]
+ data_res = data_np[:num_train % file_per_h5]
+ id_res = ids_np[:num_train % file_per_h5]
+
+ else:
+ label_res = label_np[:(num_train // args.num_scan) % file_per_h5]
+ data_res = data_np[:(num_train // args.num_scan) % file_per_h5]
+ id_res = ids_np[:(num_train // args.num_scan) % file_per_h5]
+
+    print("the remaining objects' coordinates, labels and ids:")
+ print(data_res[-2:], '\n', label_res[-2:], '\n', id_res[-2:], '\n\n')
+
+ hf.create_dataset('label', data=label_res)
+ hf.create_dataset('data', data=data_res)
+ hf.create_dataset('id', data=id_res)
+ hf.close()
+    print('the last part has been saved into %s%d.h5' % (args.fname, idx))
+
+ f_open.writelines(h5_file.replace('../', './'))
+ f_open.close()
+
+    print('conversion from lmdb to hdf5 has finished')
diff --git a/zoo/OcCo/OcCo_Torch/utils/train_cls.py b/zoo/OcCo/OcCo_Torch/utils/train_cls.py
new file mode 100644
index 0000000..38dd1b1
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/train_cls.py
@@ -0,0 +1,210 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/WangYueFt/dgcnn/blob/master/pytorch/main.py
+# Ref: https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/master/train_cls.py
+
+import os, sys, torch, shutil, importlib, argparse
+sys.path.append('utils')
+sys.path.append('models')
+from PC_Augmentation import random_point_dropout, random_scale_point_cloud, random_shift_point_cloud
+from torch.optim.lr_scheduler import CosineAnnealingLR, StepLR
+from ModelNetDataLoader import General_CLSDataLoader_HDF5
+from Torch_Utility import copy_parameters, seed_torch
+from torch.utils.tensorboard import SummaryWriter
+# from Inference_Timer import Inference_Timer
+from torch.utils.data import DataLoader
+from Dataset_Loc import Dataset_Loc
+from TrainLogger import TrainLogger
+from tqdm import tqdm
+
+
+def parse_args():
+ parser = argparse.ArgumentParser('Point Cloud Classification')
+
+ ''' === Training and Model === '''
+ parser.add_argument('--log_dir', type=str, help='log folder [default: ]')
+ parser.add_argument('--gpu', type=str, default='0', help='GPU [default: 0]')
+ parser.add_argument('--mode', type=str, default='train', help='train or test')
+ parser.add_argument('--epoch', type=int, default=200, help='epochs [default: 200]')
+ # parser.add_argument('--seed', type=int, default=1, help='random seed (default: 1)')
+ parser.add_argument('--batch_size', type=int, default=24, help='batch size [default: 24]')
+ parser.add_argument('--model', default='pointnet_cls', help='model [default: pointnet_cls]')
+ parser.add_argument('--dropout', type=float, default=0.5, help='dropout rate [default: 0.5]')
+ parser.add_argument('--momentum', type=float, default=0.9, help='SGD momentum [default: 0.9]')
+ parser.add_argument('--lr_decay', type=float, default=0.5, help='lr decay rate [default: 0.5]')
+ parser.add_argument('--step_size', type=int, default=20, help='lr decay step [default: 20 eps]')
+ parser.add_argument('--num_point', type=int, default=1024, help='points number [default: 1024]')
+ parser.add_argument('--restore', action='store_true', help='using pre-trained [default: False]')
+ parser.add_argument('--restore_path', type=str, help="path to pretrained weights [default: None]")
+ parser.add_argument('--emb_dims', type=int, default=1024, help='dimension of embeddings [default: 1024]')
+ parser.add_argument('--k', type=int, default=20, help='number of nearest neighbors to use [default: 20]')
+ parser.add_argument('--use_sgd', action='store_true', default=False, help='use SGD optimiser [default: False]')
+ parser.add_argument('--lr', type=float, default=0.001, help='learning rate [default: 0.001, 0.1 if using sgd]')
+ parser.add_argument('--scheduler', type=str, default='step', help='lr decay scheduler [default: step, or cos]')
+
+ ''' === Dataset === '''
+ parser.add_argument('--partial', action='store_true', help='partial objects [default: False]')
+ parser.add_argument('--bn', action='store_true', help='with background noise [default: False]')
+ parser.add_argument('--data_aug', action='store_true', help='data Augmentation [default: False]')
+ parser.add_argument('--dataset', type=str, default='modelnet40', help='dataset [default: modelnet40]')
+ parser.add_argument('--fname', type=str, help='filename, used in ScanObjectNN or fewer data [default:]')
+
+ return parser.parse_args()
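+
+# Invocation sketch (the checkpoint path is illustrative):
+#   python train_cls.py --gpu 0 --model pointnet_cls --dataset modelnet40 \
+#       --data_aug --restore --restore_path log/cls/pretrained.pth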
+
+
+def main(args):
+
+ os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
+ os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
+ # seed_torch(args.seed)
+
+ ''' === Set up Loggers and Load Data === '''
+ MyLogger = TrainLogger(args, name=args.model.upper(), subfold='cls', filename=args.mode + '_log')
+ writer = SummaryWriter(os.path.join(MyLogger.experiment_dir, 'runs'))
+
+ MyLogger.logger.info('Load dataset %s' % args.dataset)
+ NUM_CLASSES, TRAIN_FILES, TEST_FILES = Dataset_Loc(dataset=args.dataset, fname=args.fname,
+ partial=args.partial, bn=args.bn)
+    TRAIN_DATASET = General_CLSDataLoader_HDF5(file_list=TRAIN_FILES, num_point=args.num_point)
+    TEST_DATASET = General_CLSDataLoader_HDF5(file_list=TEST_FILES, num_point=args.num_point)
+ trainDataLoader = DataLoader(TRAIN_DATASET, batch_size=args.batch_size, shuffle=True, num_workers=4, drop_last=True)
+ testDataLoader = DataLoader(TEST_DATASET, batch_size=args.batch_size, shuffle=False, num_workers=4)
+
+ ''' === Load Model and Backup Scripts === '''
+ MODEL = importlib.import_module(args.model)
+ shutil.copy(os.path.abspath(__file__), MyLogger.log_dir)
+ shutil.copy('./models/%s.py' % args.model, MyLogger.log_dir)
+
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ classifier = MODEL.get_model(args=args, num_channel=3, num_class=NUM_CLASSES).to(device)
+ criterion = MODEL.get_loss().to(device)
+ classifier = torch.nn.DataParallel(classifier)
+ # nn.DataParallel has its own issues (slow, memory expensive),
+ # here are some advanced solutions: https://zhuanlan.zhihu.com/p/145427849
+ print('=' * 27)
+ print('Using %d GPU,' % torch.cuda.device_count(), 'Indices: %s' % args.gpu)
+ print('=' * 27)
+
+ ''' === Restore Model from Pre-Trained Checkpoints: OcCo/Jigsaw etc === '''
+ if args.restore:
+ checkpoint = torch.load(args.restore_path)
+ classifier = copy_parameters(classifier, checkpoint, verbose=True)
+ MyLogger.logger.info('Use pre-trained weights from %s' % args.restore_path)
+ else:
+ MyLogger.logger.info('No pre-trained weights, start training from scratch...')
+
+ if not args.use_sgd:
+ optimizer = torch.optim.Adam(
+ classifier.parameters(),
+ lr=args.lr,
+ betas=(0.9, 0.999),
+ eps=1e-08,
+ weight_decay=1e-4
+ )
+ else:
+ optimizer = torch.optim.SGD(classifier.parameters(),
+ lr=args.lr * 100,
+ momentum=args.momentum,
+ weight_decay=1e-4)
+
+ if args.scheduler == 'cos':
+ scheduler = CosineAnnealingLR(optimizer, T_max=args.epoch, eta_min=1e-3)
+ else:
+ scheduler = StepLR(optimizer, step_size=args.step_size, gamma=args.lr_decay)
+ LEARNING_RATE_CLIP = 0.01 * args.lr
+
+ if args.mode == 'test':
+ with torch.no_grad():
+ classifier.eval()
+ MyLogger.epoch_init(training=False)
+
+ for points, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ points, target = points.float().transpose(2, 1).cuda(), target.long().cuda()
+ if args.model == 'pointnet_cls':
+ pred, trans_feat = classifier(points)
+ loss = criterion(pred, target, trans_feat)
+ else:
+ pred = classifier(points)
+ loss = criterion(pred, target)
+ MyLogger.step_update(pred.data.max(1)[1].cpu().numpy(),
+ target.long().cpu().numpy(),
+ loss.cpu().detach().numpy())
+
+ MyLogger.epoch_summary(writer=writer, training=False)
+ sys.exit("Test Finished")
+
+ for epoch in range(MyLogger.epoch, args.epoch + 1):
+
+ ''' === Training === '''
+ MyLogger.epoch_init()
+
+ for points, target in tqdm(trainDataLoader, total=len(trainDataLoader), smoothing=0.9):
+ writer.add_scalar('Learning Rate', scheduler.get_lr()[-1], MyLogger.step)
+
+ # Augmentation, might bring performance gains
+ if args.data_aug:
+ points = random_point_dropout(points.data.numpy())
+ points[:, :, :3] = random_scale_point_cloud(points[:, :, :3])
+ points[:, :, :3] = random_shift_point_cloud(points[:, :, :3])
+ points = torch.Tensor(points)
+
+ points, target = points.transpose(2, 1).float().cuda(), target.long().cuda()
+
+ # FP and BP
+ classifier.train()
+ optimizer.zero_grad()
+ if args.model == 'pointnet_cls':
+ pred, trans_feat = classifier(points)
+ loss = criterion(pred, target, trans_feat)
+ else:
+ pred = classifier(points)
+ loss = criterion(pred, target)
+ loss.backward()
+ optimizer.step()
+ MyLogger.step_update(pred.data.max(1)[1].cpu().numpy(),
+ target.long().cpu().numpy(),
+ loss.cpu().detach().numpy())
+ MyLogger.epoch_summary(writer=writer, training=True)
+
+ ''' === Validating === '''
+ with torch.no_grad():
+ classifier.eval()
+ MyLogger.epoch_init(training=False)
+
+ for points, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ points, target = points.float().transpose(2, 1).cuda(), target.long().cuda()
+ if args.model == 'pointnet_cls':
+ pred, trans_feat = classifier(points)
+ loss = criterion(pred, target, trans_feat)
+ else:
+ pred = classifier(points)
+ loss = criterion(pred, target)
+ MyLogger.step_update(pred.data.max(1)[1].cpu().numpy(),
+ target.long().cpu().numpy(),
+ loss.cpu().detach().numpy())
+
+ MyLogger.epoch_summary(writer=writer, training=False)
+ if MyLogger.save_model:
+ state = {
+ 'step': MyLogger.step,
+ 'epoch': MyLogger.best_instance_epoch,
+ 'instance_acc': MyLogger.best_instance_acc,
+ 'best_class_acc': MyLogger.best_class_acc,
+ 'best_class_epoch': MyLogger.best_class_epoch,
+ 'model_state_dict': classifier.state_dict(),
+ 'optimizer_state_dict': optimizer.state_dict(),
+ }
+ torch.save(state, MyLogger.savepath)
+
+ scheduler.step()
+ if args.scheduler == 'step':
+ for param_group in optimizer.param_groups:
+                if param_group['lr'] < LEARNING_RATE_CLIP:
+ param_group['lr'] = LEARNING_RATE_CLIP
+
+ MyLogger.train_summary()
+
+
+if __name__ == '__main__':
+
+ args = parse_args()
+ main(args)
diff --git a/zoo/OcCo/OcCo_Torch/utils/train_completion.py b/zoo/OcCo/OcCo_Torch/utils/train_completion.py
new file mode 100644
index 0000000..9bd83d1
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/train_completion.py
@@ -0,0 +1,253 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/wentaoyuan/pcn/blob/master/train.py
+# Ref: https://github.com/WangYueFt/dgcnn/blob/master/tensorflow/train.py
+# For the DGCNN encoder, we also use Adam + StepLR for unity and simplicity
+
+
+import os, sys, time, torch, shutil, argparse, datetime, importlib, numpy as np
+sys.path.append('utils')
+sys.path.append('models')
+from TrainLogger import TrainLogger
+from LMDB_DataFlow import lmdb_dataflow
+from Torch_Utility import copy_parameters
+# from torch.optim.lr_scheduler import StepLR
+from Visu_Utility import plot_pcd_three_views
+from torch.utils.tensorboard import SummaryWriter
+
+
+def parse_args():
+ parser = argparse.ArgumentParser('Point Cloud Completion')
+
+ ''' === Training Setting === '''
+ parser.add_argument('--log_dir', type=str, help='log folder [default: ]')
+ parser.add_argument('--gpu', type=str, default='0', help='GPU [default: 0]')
+ parser.add_argument('--batch_size', type=int, default=32, help='batch size [default: 32]')
+ parser.add_argument('--epoch', type=int, default=50, help='number of epoch [default: 50]')
+ parser.add_argument('--lr', type=float, default=0.0001, help='learning rate [default: 1e-4]')
+ parser.add_argument('--lr_decay', type=float, default=0.7, help='lr decay rate [default: 0.7]')
+ parser.add_argument('--step_size', type=int, default=20, help='lr decay step [default: 20 epoch]')
+ parser.add_argument('--dataset', type=str, default='modelnet', help='dataset [default: modelnet]')
+    parser.add_argument('--restore', action='store_true', help='restore from a saved checkpoint [default: False]')
+ parser.add_argument('--restore_path', type=str, help='path to saved pre-trained model [default: ]')
+ parser.add_argument('--steps_print', type=int, default=100, help='# steps to print [default: 100]')
+ parser.add_argument('--steps_visu', type=int, default=3456, help='# steps to visual [default: 3456]')
+ parser.add_argument('--steps_eval', type=int, default=1000, help='# steps to evaluate [default: 1e3]')
+ parser.add_argument('--epochs_save', type=int, default=5, help='# epochs to save [default: 5 epochs]')
+
+ ''' === Model Setting === '''
+ parser.add_argument('--model', type=str, default='pcn_occo', help='model [pcn_occo]')
+ parser.add_argument('--k', type=int, default=20, help='# nearest neighbors in DGCNN [20]')
+ parser.add_argument('--grid_size', type=int, default=4, help='edge length of the 2D grid [4]')
+ parser.add_argument('--grid_scale', type=float, default=0.5, help='scale of the 2D grid [0.5]')
+ parser.add_argument('--num_coarse', type=int, default=1024, help='# points in coarse gt [1024]')
+ parser.add_argument('--emb_dims', type=int, default=1024, help='# dimension of DGCNN encoder [1024]')
+ parser.add_argument('--input_pts', type=int, default=1024, help='# points of occluded inputs [1024]')
+ parser.add_argument('--gt_pts', type=int, default=16384, help='# points of ground truth inputs [16384]')
+
+ return parser.parse_args()
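+
+# Invocation sketch (hypothetical setup); note that gt_pts must equal
+# grid_size ** 2 * num_coarse, which is asserted in main():
+#   python train_completion.py --gpu 0,1 --dataset modelnet --model pcn_occo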
+
+
+def main(args):
+ os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
+ os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
+
+ ''' === Set up Loggers and Load Data === '''
+ MyLogger = TrainLogger(args, name=args.model.upper(), subfold='completion')
+ os.makedirs(os.path.join(MyLogger.experiment_dir, 'plots'), exist_ok=True)
+ writer = SummaryWriter(os.path.join(MyLogger.experiment_dir, 'runs'))
+
+ MyLogger.logger.info('Load dataset %s' % args.dataset)
+ if args.dataset == 'modelnet':
+ lmdb_train = './data/modelnet/train.lmdb'
+ lmdb_valid = './data/modelnet/test.lmdb'
+ elif args.dataset == 'shapenet':
+ lmdb_train = 'data/shapenet/train.lmdb'
+ lmdb_valid = 'data/shapenet/valid.lmdb'
+ else:
+ raise ValueError("Dataset is not available, it should be either ModelNet or ShapeNet")
+
+ assert (args.gt_pts == args.grid_size ** 2 * args.num_coarse)
+ df_train, num_train = lmdb_dataflow(
+ lmdb_train, args.batch_size, args.input_pts, args.gt_pts, is_training=True)
+ df_valid, num_valid = lmdb_dataflow(
+ lmdb_valid, args.batch_size, args.input_pts, args.gt_pts, is_training=False)
+ train_gen, valid_gen = df_train.get_data(), df_valid.get_data()
+ total_steps = num_train // args.batch_size * args.epoch
+
+ ''' === Load Model and Backup Scripts === '''
+ MODEL = importlib.import_module(args.model)
+ shutil.copy(os.path.abspath(__file__), MyLogger.log_dir)
+ shutil.copy('./models/%s.py' % args.model, MyLogger.log_dir)
+
+ # multiple GPUs usage
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ completer = MODEL.get_model(args=args, grid_size=args.grid_size,
+ grid_scale=args.grid_scale, num_coarse=args.num_coarse).to(device)
+ criterion = MODEL.get_loss().to(device)
+ completer = torch.nn.DataParallel(completer)
+ # nn.DataParallel has its own issues (slow, memory expensive), bearable
+ # some optional advanced solutions: https://zhuanlan.zhihu.com/p/145427849
+ print('=' * 33)
+ print('Using %d GPU,' % torch.cuda.device_count(), 'Indices are: %s' % args.gpu)
+ print('=' * 33)
+
+ ''' === Restore Model from Checkpoints, If there is any === '''
+ if args.restore:
+ checkpoint = torch.load(args.restore_path)
+ completer = copy_parameters(completer, checkpoint, verbose=True)
+ MyLogger.logger.info('Use pre-trained model from %s' % args.restore_path)
+ MyLogger.step, MyLogger.epoch = checkpoint['step'], checkpoint['epoch']
+
+ else:
+ MyLogger.logger.info('No pre-trained model, start training from scratch...')
+
+ ''' IMPORTANT: for completion, no weight decay in Adam, no batch norm in decoder!'''
+ optimizer = torch.optim.Adam(
+ completer.parameters(),
+ lr=args.lr,
+ betas=(0.9, 0.999),
+ eps=1e-08,
+ weight_decay=0)
+ # weight_decay=1e-4)
+
+    # For the sake of simplicity, we skip the batch norm momentum decay
+    # scheduler = StepLR(optimizer, step_size=20, gamma=0.7) -> instead the lr decay below is applied manually
+ LEARNING_RATE_CLIP = 0.01 * args.lr
+
+ def vary2fix(inputs, npts, batch_size=args.batch_size, num_point=args.input_pts):
+ """upsample/downsample varied input points into fixed length
+ :param inputs: input points cloud
+ :param npts: describe how many points of each input object
+ :param batch_size: training batch size
+ :param num_point: number of points of per occluded object
+ :return: fixed length of points of each object
+ """
+
+ inputs_ls = np.split(inputs[0], npts.cumsum())
+ ret_inputs = np.zeros((1, batch_size * num_point, 3))
+ ret_npts = npts.copy()
+
+ for idx, obj in enumerate(inputs_ls[:-1]):
+
+ if len(obj) <= num_point:
+ select_idx = np.concatenate([
+ np.arange(len(obj)), np.random.choice(len(obj), num_point - len(obj))])
+ else:
+ select_idx = np.arange(len(obj))
+ np.random.shuffle(select_idx)
+
+            # keep only the first num_point (shuffled) indices when downsampling
+            ret_inputs[0][idx * num_point:(idx + 1) * num_point] = obj[select_idx[:num_point]].copy()
+ ret_npts[idx] = num_point
+
+ return ret_inputs, ret_npts
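+
+    # e.g. with batch_size=2 and num_point=4, two objects of 3 and 6 points
+    # are padded/subsampled to 4 points each, so ret_inputs has shape (1, 8, 3)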
+
+ def piecewise_constant(global_step, boundaries, values):
+ """substitute for tf.train.piecewise_constant:
+ https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/piecewise_constant
+ global_step can be either training epoch or training step
+ """
+ if len(boundaries) != len(values) - 1:
+ raise ValueError(
+ "The length of boundaries should be 1 less than the length of values")
+
+ if global_step <= boundaries[0]:
+ return values[0]
+ elif global_step > boundaries[-1]:
+ return values[-1]
+ else:
+ for low, high, v in zip(boundaries[:-1], boundaries[1:], values[1:-1]):
+ if (global_step > low) & (global_step <= high):
+ return v
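+
+    # Worked example with the alpha schedule used below:
+    #   piecewise_constant(5000,  [10000, 20000, 50000], [0.01, 0.1, 0.5, 1.0])  # -> 0.01
+    #   piecewise_constant(15000, [10000, 20000, 50000], [0.01, 0.1, 0.5, 1.0])  # -> 0.1
+    #   piecewise_constant(60000, [10000, 20000, 50000], [0.01, 0.1, 0.5, 1.0])  # -> 1.0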
+
+ total_time, train_start = 0, time.time()
+ for step in range(MyLogger.step + 1, total_steps + 1):
+
+ ''' === Training === '''
+ start = time.time()
+ epoch = step * args.batch_size // num_train + 1
+ lr = max(args.lr * (args.lr_decay ** (epoch // args.step_size)), LEARNING_RATE_CLIP)
+ for param_group in optimizer.param_groups:
+ param_group['lr'] = lr
+ # follow the original alpha setting for ShapeNet Dataset in PCN paper:
+ alpha = piecewise_constant(step, [10000, 20000, 50000], [0.01, 0.1, 0.5, 1.0])
+ writer.add_scalar('Learning Rate', lr, step)
+ writer.add_scalar('Alpha', alpha, step)
+
+ ids, inputs, npts, gt = next(train_gen)
+ if args.dataset == 'shapenet':
+ inputs, _ = vary2fix(inputs, npts)
+
+ completer.train()
+ optimizer.zero_grad()
+ inputs = inputs.reshape(args.batch_size, args.input_pts, 3)
+ inputs, gt = torch.Tensor(inputs).transpose(2, 1).cuda(), torch.Tensor(gt).cuda()
+ pred_coarse, pred_fine = completer(inputs)
+ loss = criterion(pred_coarse, pred_fine, gt, alpha)
+ loss.backward()
+ optimizer.step()
+ total_time += time.time() - start
+ writer.add_scalar('Loss', loss, step)
+
+ if step % args.steps_print == 0:
+ MyLogger.logger.info('epoch %d step %d alpha %.2f loss %.8f time per step %.2f s' %
+ (epoch, step, alpha, loss, total_time / args.steps_print))
+ total_time = 0
+
+ ''' === Validating === '''
+ if step % args.steps_eval == 0:
+
+ with torch.no_grad():
+ completer.eval()
+ MyLogger.logger.info('Testing...')
+ num_eval_steps, eval_loss, eval_time = num_valid // args.batch_size, 0, 0
+
+ for eval_step in range(num_eval_steps):
+ start = time.time()
+ _, inputs, npts, gt = next(valid_gen)
+ if args.dataset == 'shapenet':
+ inputs, _ = vary2fix(inputs, npts)
+
+ inputs = inputs.reshape(args.batch_size, args.input_pts, 3)
+ inputs, gt = torch.Tensor(inputs).transpose(2, 1).cuda(), torch.Tensor(gt).cuda()
+
+ pred_coarse, pred_fine = completer(inputs)
+ loss = criterion(pred_coarse, pred_fine, gt, alpha)
+ eval_loss += loss
+ eval_time += time.time() - start
+
+ MyLogger.logger.info('epoch %d step %d validation loss %.8f time per step %.2f s' %
+ (epoch, step, eval_loss / num_eval_steps, eval_time / num_eval_steps))
+
+ ''' === Visualisation === '''
+ if step % args.steps_visu == 0:
+ all_pcds = [item.detach().cpu().numpy() for item in [
+ inputs.transpose(2, 1), pred_coarse, pred_fine, gt]]
+ for i in range(args.batch_size):
+ plot_path = os.path.join(MyLogger.experiment_dir, 'plots',
+ 'epoch_%d_step_%d_%s.png' % (epoch, step, ids[i]))
+ pcds = [x[i] for x in all_pcds]
+ plot_pcd_three_views(plot_path, pcds,
+ ['input', 'coarse output', 'fine output', 'ground truth'])
+
+ trained_epoch = epoch - 1
+ if (trained_epoch % args.epochs_save == 0) and (trained_epoch != 0) and \
+ not os.path.exists(os.path.join(MyLogger.checkpoints_dir,
+ 'model_epoch_%d.pth' % trained_epoch)):
+ state = {
+ 'step': step,
+ 'epoch': epoch,
+ 'model_state_dict': completer.state_dict(),
+ 'optimizer_state_dict': optimizer.state_dict(),
+ }
+ torch.save(state, os.path.join(MyLogger.checkpoints_dir,
+ "model_epoch_%d.pth" % trained_epoch))
+ MyLogger.logger.info('Model saved at %s/model_epoch_%d.pth\n'
+ % (MyLogger.checkpoints_dir, trained_epoch))
+
+ MyLogger.logger.info('Training Finished, Total Time: ' +
+ str(datetime.timedelta(seconds=time.time() - train_start)))
+
+
+if __name__ == '__main__':
+ args = parse_args()
+ main(args)
diff --git a/zoo/OcCo/OcCo_Torch/utils/train_jigsaw.py b/zoo/OcCo/OcCo_Torch/utils/train_jigsaw.py
new file mode 100644
index 0000000..dbf9504
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/train_jigsaw.py
@@ -0,0 +1,176 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# ref: https://github.com/AnTao97/dgcnn.pytorch/blob/master/main_semseg.py
+# ref: https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/master/train_semseg.py
+
+import os, sys, torch, argparse, importlib, shutil
+sys.path.append('models')
+sys.path.append('utils')
+from torch.optim.lr_scheduler import CosineAnnealingLR, StepLR
+from Torch_Utility import weights_init, bn_momentum_adjust
+from ModelNetDataLoader import ModelNetJigsawDataLoader
+from torch.utils.tensorboard import SummaryWriter
+from torch.utils.data import DataLoader
+from TrainLogger import TrainLogger
+from tqdm import tqdm
+
+
+def parse_args():
+ parser = argparse.ArgumentParser('3D Point Cloud Jigsaw Puzzles')
+
+ ''' === Training === '''
+ parser.add_argument('--log_dir', type=str, help='log folder [default: ]')
+ parser.add_argument('--gpu', type=str, default='0', help='GPU [default: 0]')
+ parser.add_argument('--batch_size', type=int, default=32, help='batch size [default: 32]')
+ parser.add_argument('--epoch', default=200, type=int, help='training epochs [default: 200]')
+ parser.add_argument('--lr', default=0.0001, type=float, help='learning rate [default: 1e-4]')
+ parser.add_argument('--optimizer', type=str, default='Adam', help='optimiser [default: Adam]')
+ parser.add_argument('--momentum', type=float, default=0.9, help='SGD momentum [default: 0.9]')
+ parser.add_argument('--lr_decay', type=float, default=0.7, help='lr decay rate [default: 0.7]')
+ parser.add_argument('--bn_decay', action='store_true', help='BN Momentum Decay [default: False]')
+ parser.add_argument('--xavier_init', action='store_true', help='Xavier weight init [default: False]')
+ parser.add_argument('--scheduler', type=str, default='step', help='lr decay scheduler [default: step]')
+ parser.add_argument('--model', type=str, default='pointnet_jigsaw', help='model [default: pointnet_jigsaw]')
+ parser.add_argument('--step_size', type=int, default=20, help='decay steps for lr [default: every 20 epochs]')
+
+ ''' === Model === '''
+ parser.add_argument('--k', type=int, default=20, help='num of nearest neighbors to use [default: 20]')
+ parser.add_argument('--emb_dims', type=int, default=1024, help='dimension of embeddings [default: 1024]')
+ parser.add_argument('--num_point', type=int, default=1024, help='number of points per object [default: 1024]')
+
+ return parser.parse_args()
+
+
+def main(args):
+
+ os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
+ os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
+
+ NUM_CLASSES = 3 ** 3
+ DATA_PATH = 'data/modelnet40_ply_hdf5_2048/jigsaw'
+ TRAIN_DATASET = ModelNetJigsawDataLoader(DATA_PATH, split='train', n_points=args.num_point, k=3)
+ TEST_DATASET = ModelNetJigsawDataLoader(DATA_PATH, split='test', n_points=args.num_point, k=3)
+ trainDataLoader = DataLoader(TRAIN_DATASET, batch_size=args.batch_size, shuffle=True, num_workers=4)
+ testDataLoader = DataLoader(TEST_DATASET, batch_size=args.batch_size, shuffle=False, num_workers=4)
+ MyLogger = TrainLogger(args, name=args.model.upper(), subfold='jigsaw')
+
+ ''' === MODEL LOADING === '''
+ MODEL = importlib.import_module(args.model)
+ shutil.copy(os.path.abspath(__file__), MyLogger.log_dir)
+ shutil.copy('./models/%s.py' % args.model, MyLogger.log_dir)
+ writer = SummaryWriter(os.path.join(MyLogger.experiment_dir, 'runs'))
+
+ # allow multiple GPUs
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ classifier = MODEL.get_model(args=args, num_class=NUM_CLASSES, num_channel=3).to(device)
+ criterion = MODEL.get_loss().to(device)
+ classifier = torch.nn.DataParallel(classifier)
+ print('=' * 33)
+ print('Using %d GPU,' % torch.cuda.device_count(), 'Indices: %s' % args.gpu)
+ print('=' * 33)
+
+ if args.xavier_init:
+ classifier = classifier.apply(weights_init)
+ MyLogger.logger.info("Using Xavier Weight Initialisation")
+
+ if args.optimizer == 'Adam':
+ optimizer = torch.optim.Adam(
+ classifier.parameters(),
+ lr=args.lr,
+ betas=(0.9, 0.999),
+ eps=1e-08,
+ weight_decay=1e-4)
+ MyLogger.logger.info("Using Adam Optimiser")
+ else:
+ optimizer = torch.optim.SGD(
+ classifier.parameters(),
+ lr=1000 * args.lr,
+ momentum=args.momentum)
+ MyLogger.logger.info("Using SGD Optimiser")
+
+ LEARNING_RATE_CLIP = 1e-5
+ MOMENTUM_ORIGINAL = 0.1
+ MOMENTUM_DECAY = 0.5
+ MOMENTUM_DECAY_STEP = args.step_size
+
+ if args.scheduler == 'cos':
+ scheduler = CosineAnnealingLR(optimizer, T_max=args.epoch, eta_min=1e-3)
+ else:
+ scheduler = StepLR(optimizer, step_size=args.step_size, gamma=0.7)
+
+ for epoch in range(MyLogger.epoch, args.epoch + 1):
+
+ ''' === Training === '''
+ MyLogger.epoch_init(training=True)
+
+ for points, target in tqdm(trainDataLoader, total=len(trainDataLoader), smoothing=0.9):
+ points, target = points.transpose(2, 1).float().cuda(), target.view(-1, 1)[:, 0].long().cuda()
+ classifier.train()
+ optimizer.zero_grad()
+
+ if args.model == 'pointnet_jigsaw':
+ pred, trans_feat = classifier(points)
+ pred = pred.contiguous().view(-1, NUM_CLASSES)
+ loss = criterion(pred, target, trans_feat)
+ else:
+ pred = classifier(points)
+ pred = pred.contiguous().view(-1, NUM_CLASSES)
+ loss = criterion(pred, target)
+
+ loss.backward()
+ optimizer.step()
+ MyLogger.step_update(pred.data.max(1)[1].cpu().numpy(),
+ target.long().cpu().numpy(),
+ loss.cpu().detach().numpy())
+ MyLogger.epoch_summary(writer=writer, training=True)
+
+ ''' === Evaluation === '''
+ with torch.no_grad():
+ classifier.eval()
+ MyLogger.epoch_init(training=False)
+
+ for points, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ points, target = points.transpose(2, 1).float().cuda(), target.view(-1, 1)[:, 0].long().cuda()
+ if args.model == 'pointnet_jigsaw':
+ pred, trans_feat = classifier(points)
+ pred = pred.contiguous().view(-1, NUM_CLASSES)
+ loss = criterion(pred, target, trans_feat)
+ else:
+ pred = classifier(points)
+ pred = pred.contiguous().view(-1, NUM_CLASSES)
+ loss = criterion(pred, target)
+ MyLogger.step_update(pred.data.max(1)[1].cpu().numpy(),
+ target.long().cpu().numpy(),
+ loss.cpu().detach().numpy())
+ MyLogger.epoch_summary(writer=writer, training=False)
+
+ if MyLogger.save_model:
+ state = {
+ 'step': MyLogger.step,
+ 'epoch': MyLogger.best_instance_epoch,
+ 'instance_acc': MyLogger.best_instance_acc,
+ 'model_state_dict': classifier.state_dict(),
+ 'optimizer_state_dict': optimizer.state_dict(),
+ }
+ torch.save(state, MyLogger.savepath)
+
+ scheduler.step()
+ if args.scheduler == 'step':
+ for param_group in optimizer.param_groups:
+                if param_group['lr'] < LEARNING_RATE_CLIP:
+ param_group['lr'] = LEARNING_RATE_CLIP
+ if args.bn_decay:
+ momentum = MOMENTUM_ORIGINAL * (MOMENTUM_DECAY ** (epoch // MOMENTUM_DECAY_STEP))
+ if momentum < 0.01:
+ momentum = 0.01
+ print('BN momentum updated to: %f' % momentum)
+ classifier = classifier.apply(lambda x: bn_momentum_adjust(x, momentum))
+
+ MyLogger.train_summary()
+
+
+if __name__ == '__main__':
+
+ args = parse_args()
+ main(args)
+
diff --git a/zoo/OcCo/OcCo_Torch/utils/train_semseg.py b/zoo/OcCo/OcCo_Torch/utils/train_semseg.py
new file mode 100644
index 0000000..d1bf36c
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/train_semseg.py
@@ -0,0 +1,225 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# ref: https://github.com/charlesq34/pointnet/blob/master/sem_seg/train.py
+# ref: https://github.com/AnTao97/dgcnn.pytorch/blob/master/main_semseg.py
+# ref: https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/master/train_semseg.py
+
+import os, sys, torch, shutil, argparse, importlib
+sys.path.append('utils')
+sys.path.append('models')
+from Torch_Utility import copy_parameters, weights_init, bn_momentum_adjust
+from torch.optim.lr_scheduler import CosineAnnealingLR, StepLR
+from torch.utils.tensorboard import SummaryWriter
+from S3DISDataLoader import S3DISDataset_HDF5
+from torch.utils.data import DataLoader
+from TrainLogger import TrainLogger
+from tqdm import tqdm
+
+
+classes = ['ceiling', 'floor', 'wall', 'beam', 'column',
+ 'window', 'door', 'table', 'chair', 'sofa',
+ 'bookcase', 'board', 'clutter']
+class2label = {cls: i for i, cls in enumerate(classes)}
+seg_classes = class2label
+seg_label_to_cat = {}
+for i, cat in enumerate(seg_classes.keys()):
+ seg_label_to_cat[i] = cat
+
+
+def parse_args():
+ parser = argparse.ArgumentParser(description='Point Cloud Semantic Segmentation')
+
+ parser.add_argument('--log_dir', type=str, help='log path [default: ]')
+ parser.add_argument('--gpu', type=str, default='0', help='GPU [default: 0]')
+ parser.add_argument('--mode', type=str, default='train', help='train or test')
+ parser.add_argument('--batch_size', type=int, default=24, help='batch size [default: 24]')
+ parser.add_argument('--test_area', type=int, default=5, help='test area, 1-6 [default: 5]')
+ parser.add_argument('--epoch', default=100, type=int, help='training epochs [default: 100]')
+ parser.add_argument('--lr', type=float, default=0.001, help='learning rate [default: 0.001]')
+ parser.add_argument('--momentum', type=float, default=0.9, help='SGD momentum [default: 0.9]')
+ parser.add_argument('--lr_decay', type=float, default=0.5, help='lr decay rate [default: 0.5]')
+ parser.add_argument('--restore', action='store_true', help='restore the weights [default: False]')
+ parser.add_argument('--restore_path', type=str, help='path to pre-saved model weights [default: ]')
+ parser.add_argument('--dropout', type=float, default=0.5, help='dropout rate in FCs [default: 0.5]')
+ parser.add_argument('--bn_decay', action='store_true', help='use BN Momentum Decay [default: False]')
+ parser.add_argument('--xavier_init', action='store_true', help='Xavier weight init [default: False]')
+ parser.add_argument('--emb_dims', type=int, default=1024, help='embedding dimensions [default: 1024]')
+ parser.add_argument('--k', type=int, default=20, help='num of nearest neighbors to use [default: 20]')
+ parser.add_argument('--step_size', type=int, default=40, help='lr decay steps [default: every 40 epochs]')
+ parser.add_argument('--scheduler', type=str, default='cos', help='lr decay scheduler [default: cos, step]')
+ parser.add_argument('--model', type=str, default='pointnet_semseg', help='model [default: pointnet_semseg]')
+ parser.add_argument('--optimizer', type=str, default='adam', help='optimiser [default: adam, otherwise sgd]')
+
+ return parser.parse_args()
+
+
+def main(args):
+
+ os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
+ os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
+
+ root = 'data/indoor3d_sem_seg_hdf5_data'
+ NUM_CLASSES = len(seg_label_to_cat)
+
+ TRAIN_DATASET = S3DISDataset_HDF5(root=root, split='train', test_area=args.test_area)
+ TEST_DATASET = S3DISDataset_HDF5(root=root, split='test', test_area=args.test_area)
+ trainDataLoader = DataLoader(TRAIN_DATASET, batch_size=args.batch_size, shuffle=True, num_workers=4)
+ testDataLoader = DataLoader(TEST_DATASET, batch_size=args.batch_size, shuffle=False, num_workers=4)
+
+ MyLogger = TrainLogger(args, name=args.model.upper(), subfold='semseg',
+ cls2name=class2label, filename=args.mode + '_log')
+ MyLogger.logger.info("The number of training data is: %d" % len(TRAIN_DATASET))
+ MyLogger.logger.info("The number of testing data is: %d" % len(TEST_DATASET))
+
+ ''' === Model Loading === '''
+ MODEL = importlib.import_module(args.model)
+ shutil.copy(os.path.abspath(__file__), MyLogger.log_dir)
+ shutil.copy('./models/%s.py' % args.model, MyLogger.log_dir)
+ writer = SummaryWriter(os.path.join(MyLogger.experiment_dir, 'runs'))
+
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ classifier = MODEL.get_model(num_class=NUM_CLASSES, num_channel=9, args=args).to(device)
+ criterion = MODEL.get_loss().to(device)
+ classifier = torch.nn.DataParallel(classifier)
+ print('=' * 27)
+ print('Using %d GPU,' % torch.cuda.device_count(), 'Indices: %s' % args.gpu)
+ print('=' * 27)
+
+ if args.restore:
+ checkpoint = torch.load(args.restore_path)
+ classifier = copy_parameters(classifier, checkpoint, verbose=True)
+ MyLogger.logger.info('Use pre-trained weights from %s' % args.restore_path)
+ else:
+ MyLogger.logger.info('No pre-trained weights, start training from scratch...')
+ if args.xavier_init:
+ classifier = classifier.apply(weights_init)
+ MyLogger.logger.info("Using Xavier weight initialisation")
+
+ if args.optimizer == 'adam':
+ optimizer = torch.optim.Adam(
+ classifier.parameters(),
+ lr=args.lr,
+ betas=(0.9, 0.999),
+ eps=1e-08,
+ weight_decay=1e-4)
+ MyLogger.logger.info("Using Adam optimiser")
+ else:
+ optimizer = torch.optim.SGD(
+ classifier.parameters(),
+ lr=args.lr * 100,
+ momentum=args.momentum)
+ MyLogger.logger.info("Using SGD optimiser")
+    # using a similar lr decay setting to
+ # https://github.com/charlesq34/pointnet/blob/master/sem_seg/train.py
+ # scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.5)
+
+ if args.scheduler == 'cos':
+ scheduler = CosineAnnealingLR(optimizer, T_max=args.epoch, eta_min=1e-3)
+ else:
+ scheduler = StepLR(optimizer, step_size=args.step_size, gamma=args.lr_decay)
+
+ LEARNING_RATE_CLIP = 1e-5
+ MOMENTUM_ORIGINAL = 0.1
+ MOMENTUM_DECAY = 0.5
+ MOMENTUM_DECAY_STEP = args.step_size
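+    # with --bn_decay, BN momentum is decayed every MOMENTUM_DECAY_STEP epochs as
+    # MOMENTUM_ORIGINAL * (MOMENTUM_DECAY ** (epoch // MOMENTUM_DECAY_STEP)), floored at 0.01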
+
+ ''' === Testing then Exit === '''
+ if args.mode == 'test':
+ with torch.no_grad():
+ classifier.eval()
+ MyLogger.epoch_init(training=False)
+
+ for points, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ points, target = points.transpose(2, 1).float().cuda(), target.view(-1, 1)[:, 0].long().cuda()
+ if args.model == 'pointnet_semseg':
+ seg_pred, trans_feat = classifier(points)
+ seg_pred = seg_pred.contiguous().view(-1, NUM_CLASSES)
+ loss = criterion(seg_pred, target, trans_feat)
+ else:
+ seg_pred = classifier(points)
+ seg_pred = seg_pred.contiguous().view(-1, NUM_CLASSES)
+ loss = criterion(seg_pred, target)
+ MyLogger.step_update(seg_pred.data.max(1)[1].cpu().numpy(),
+ target.long().cpu().numpy(),
+ loss.cpu().detach().numpy())
+
+ MyLogger.epoch_summary(writer=writer, training=False, mode='semseg')
+ sys.exit("Test Finished")
+
+ for epoch in range(MyLogger.epoch, args.epoch + 1):
+
+ ''' === Training === '''
+ # scheduler.step()
+ MyLogger.epoch_init()
+
+ for points, target in tqdm(trainDataLoader, total=len(trainDataLoader), smoothing=0.9):
+ writer.add_scalar('learning rate', scheduler.get_lr()[-1], MyLogger.step)
+ points, target = points.float().transpose(2, 1).cuda(), target.view(-1, 1)[:, 0].long().cuda()
+
+ classifier.train()
+ optimizer.zero_grad()
+ # pdb.set_trace()
+ if args.model == 'pointnet_semseg':
+ seg_pred, trans_feat = classifier(points)
+ seg_pred = seg_pred.contiguous().view(-1, NUM_CLASSES)
+ loss = criterion(seg_pred, target, trans_feat)
+ else:
+ seg_pred = classifier(points)
+ seg_pred = seg_pred.contiguous().view(-1, NUM_CLASSES)
+ loss = criterion(seg_pred, target)
+
+ loss.backward()
+ optimizer.step()
+
+ MyLogger.step_update(seg_pred.data.max(1)[1].cpu().numpy(),
+ target.long().cpu().numpy(),
+ loss.cpu().detach().numpy())
+ MyLogger.epoch_summary(writer=writer, training=True, mode='semseg')
+
+ '''=== Evaluating ==='''
+ with torch.no_grad():
+ classifier.eval()
+ MyLogger.epoch_init(training=False)
+
+ for points, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ points, target = points.transpose(2, 1).float().cuda(), target.view(-1, 1)[:, 0].long().cuda()
+ if args.model == 'pointnet_semseg':
+ seg_pred, trans_feat = classifier(points)
+ seg_pred = seg_pred.contiguous().view(-1, NUM_CLASSES)
+ loss = criterion(seg_pred, target, trans_feat)
+ else:
+ seg_pred = classifier(points)
+ seg_pred = seg_pred.contiguous().view(-1, NUM_CLASSES)
+ loss = criterion(seg_pred, target)
+ MyLogger.step_update(seg_pred.data.max(1)[1].cpu().numpy(),
+ target.long().cpu().numpy(),
+ loss.cpu().detach().numpy())
+
+ MyLogger.epoch_summary(writer=writer, training=False, mode='semseg')
+ if MyLogger.save_model:
+ state = {
+ 'step': MyLogger.step,
+ 'miou': MyLogger.best_miou,
+ 'epoch': MyLogger.best_miou_epoch,
+ 'model_state_dict': classifier.state_dict(),
+ 'optimizer_state_dict': optimizer.state_dict(),
+ }
+ torch.save(state, MyLogger.savepath)
+
+ scheduler.step()
+ if args.scheduler == 'step':
+            for param_group in optimizer.param_groups:
+                # clamp the decayed lr so it never drops below LEARNING_RATE_CLIP
+                param_group['lr'] = max(param_group['lr'], LEARNING_RATE_CLIP)
+ if args.bn_decay:
+ momentum = MOMENTUM_ORIGINAL * (MOMENTUM_DECAY ** (epoch // MOMENTUM_DECAY_STEP))
+ if momentum < 0.01:
+ momentum = 0.01
+ print('BN momentum updated to: %f' % momentum)
+ classifier = classifier.apply(lambda x: bn_momentum_adjust(x, momentum))
+
+ MyLogger.train_summary(mode='semseg')
+
+
+if __name__ == '__main__':
+ args = parse_args()
+ main(args)
diff --git a/zoo/OcCo/OcCo_Torch/utils/train_svm.py b/zoo/OcCo/OcCo_Torch/utils/train_svm.py
new file mode 100644
index 0000000..79340a8
--- /dev/null
+++ b/zoo/OcCo/OcCo_Torch/utils/train_svm.py
@@ -0,0 +1,115 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://scikit-learn.org/stable/modules/svm.html
+# Ref: https://scikit-learn.org/stable/modules/classes.html#module-sklearn.model_selection
+
+import os, sys, torch, argparse, datetime, importlib, numpy as np
+sys.path.append('utils')
+sys.path.append('models')
+from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
+from ModelNetDataLoader import General_CLSDataLoader_HDF5
+from Torch_Utility import copy_parameters
+# from sklearn.preprocessing import scale
+from torch.utils.data import DataLoader
+from Dataset_Loc import Dataset_Loc
+from sklearn import svm, metrics
+from tqdm import tqdm
+
+
+def parse_args():
+ parser = argparse.ArgumentParser('SVM on Point Cloud Classification')
+
+ ''' === Network Model === '''
+ parser.add_argument('--gpu', type=str, default='0', help='GPU [default: 0]')
+ parser.add_argument('--model', default='pcn_util', help='model [default: pcn_util]')
+ parser.add_argument('--batch_size', type=int, default=24, help='batch size [default: 24]')
+ parser.add_argument('--restore_path', type=str, help="path to pre-trained weights [default: None]")
+    parser.add_argument('--grid_search', action='store_true', help='optimise parameters via Grid Search [default: False]')
+
+ ''' === Dataset === '''
+ parser.add_argument('--partial', action='store_true', help='partial objects [default: False]')
+ parser.add_argument('--bn', action='store_true', help='with background noise [default: False]')
+ parser.add_argument('--dataset', type=str, default='modelnet40', help='dataset [default: modelnet40]')
+ parser.add_argument('--fname', type=str, default="", help='filename, used in ScanObjectNN [default: ]')
+
+ return parser.parse_args()
+
+
+if __name__ == "__main__":
+ args = parse_args()
+
+ os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
+ os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
+
+ _, TRAIN_FILES, TEST_FILES = Dataset_Loc(dataset=args.dataset, fname=args.fname,
+ partial=args.partial, bn=args.bn)
+ TRAIN_DATASET = General_CLSDataLoader_HDF5(file_list=TRAIN_FILES)
+ TEST_DATASET = General_CLSDataLoader_HDF5(file_list=TEST_FILES)
+ trainDataLoader = DataLoader(TRAIN_DATASET, batch_size=args.batch_size, shuffle=True, num_workers=4)
+    testDataLoader = DataLoader(TEST_DATASET, batch_size=args.batch_size, shuffle=False, num_workers=4)
+
+ MODEL = importlib.import_module(args.model)
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ encoder = MODEL.encoder(args=args, num_channel=3).to(device)
+ encoder = torch.nn.DataParallel(encoder)
+
+ checkpoint = torch.load(args.restore_path)
+ encoder = copy_parameters(encoder, checkpoint, verbose=True)
+
+ X_train, y_train, X_test, y_test = [], [], [], []
+ with torch.no_grad():
+ encoder.eval()
+
+ for points, target in tqdm(trainDataLoader, total=len(trainDataLoader), smoothing=0.9):
+ points, target = points.float().transpose(2, 1).cuda(), target.long().cuda()
+ feats = encoder(points)
+ X_train.append(feats.cpu().numpy())
+ y_train.append(target.cpu().numpy())
+
+ for points, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
+ points, target = points.float().transpose(2, 1).cuda(), target.long().cuda()
+ feats = encoder(points)
+ X_test.append(feats.cpu().numpy())
+ y_test.append(target.cpu().numpy())
+
+ X_train, y_train = np.concatenate(X_train), np.concatenate(y_train)
+ X_test, y_test = np.concatenate(X_test), np.concatenate(y_test)
+
+ # Optional: Standardize the Feature Space
+ # X_train, X_test = scale(X_train), scale(X_test)
+
+ ''' === Simple Trial === '''
+ linear_svm = svm.SVC(kernel='linear')
+ linear_svm.fit(X_train, y_train)
+ y_pred = linear_svm.predict(X_test)
+ print("\n", "Simple Linear SVC accuracy:", metrics.accuracy_score(y_test, y_pred), "\n")
+
+ rbf_svm = svm.SVC(kernel='rbf')
+ rbf_svm.fit(X_train, y_train)
+ y_pred = rbf_svm.predict(X_test)
+ print("Simple RBF SVC accuracy:", metrics.accuracy_score(y_test, y_pred), "\n")
+
+ ''' === Grid Search for SVM with RBF Kernel === '''
+ if not args.grid_search:
+ sys.exit()
+ print("Now we use Grid Search to opt the parameters for SVM RBF kernel")
+ # [1e-3, 5e-3, 1e-2, ..., 5e1]
+ gamma_range = np.outer(np.logspace(-3, 1, 5), np.array([1, 5])).flatten()
+ # [1e-1, 5e-1, 1e0, ..., 5e1]
+ C_range = np.outer(np.logspace(-1, 1, 3), np.array([1, 5])).flatten()
+ parameters = {'kernel': ['rbf'], 'C': C_range, 'gamma': gamma_range}
+
+ svm_clsf = svm.SVC()
+ grid_clsf = GridSearchCV(estimator=svm_clsf, param_grid=parameters, n_jobs=8, verbose=1)
+
+ start_time = datetime.datetime.now()
+ print('Start Param Searching at {}'.format(str(start_time)))
+ grid_clsf.fit(X_train, y_train)
+ print('Elapsed time, param searching {}'.format(str(datetime.datetime.now() - start_time)))
+ sorted(grid_clsf.cv_results_.keys())
+
+ # scores = grid_clsf.cv_results_['mean_test_score'].reshape(len(C_range), len(gamma_range))
+ y_pred = grid_clsf.best_estimator_.predict(X_test)
+ print("\n\n")
+ print("="*37)
+ print("Best Params via Grid Search Cross Validation on Train Split is: ", grid_clsf.best_params_)
+ print("Best Model's Accuracy on Test Dataset: {}".format(metrics.accuracy_score(y_test, y_pred)))
diff --git a/zoo/OcCo/assets/cls.png b/zoo/OcCo/assets/cls.png
new file mode 100644
index 0000000..11fe5a9
Binary files /dev/null and b/zoo/OcCo/assets/cls.png differ
diff --git a/zoo/OcCo/assets/data_overview_new.png b/zoo/OcCo/assets/data_overview_new.png
new file mode 100644
index 0000000..b6733a3
Binary files /dev/null and b/zoo/OcCo/assets/data_overview_new.png differ
diff --git a/zoo/OcCo/assets/dgcnn_combine.png b/zoo/OcCo/assets/dgcnn_combine.png
new file mode 100644
index 0000000..724458f
Binary files /dev/null and b/zoo/OcCo/assets/dgcnn_combine.png differ
diff --git a/zoo/OcCo/assets/eff.png b/zoo/OcCo/assets/eff.png
new file mode 100644
index 0000000..ddc2ec4
Binary files /dev/null and b/zoo/OcCo/assets/eff.png differ
diff --git a/zoo/OcCo/assets/failure_combine.png b/zoo/OcCo/assets/failure_combine.png
new file mode 100644
index 0000000..3a854ab
Binary files /dev/null and b/zoo/OcCo/assets/failure_combine.png differ
diff --git a/zoo/OcCo/assets/lr.png b/zoo/OcCo/assets/lr.png
new file mode 100644
index 0000000..f59bd97
Binary files /dev/null and b/zoo/OcCo/assets/lr.png differ
diff --git a/zoo/OcCo/assets/partseg.png b/zoo/OcCo/assets/partseg.png
new file mode 100644
index 0000000..444ffb2
Binary files /dev/null and b/zoo/OcCo/assets/partseg.png differ
diff --git a/zoo/OcCo/assets/pcn_combine.png b/zoo/OcCo/assets/pcn_combine.png
new file mode 100644
index 0000000..eef1a80
Binary files /dev/null and b/zoo/OcCo/assets/pcn_combine.png differ
diff --git a/zoo/OcCo/assets/pointnet_combine.png b/zoo/OcCo/assets/pointnet_combine.png
new file mode 100644
index 0000000..6b92a8e
Binary files /dev/null and b/zoo/OcCo/assets/pointnet_combine.png differ
diff --git a/zoo/OcCo/assets/semseg.png b/zoo/OcCo/assets/semseg.png
new file mode 100644
index 0000000..35e9c11
Binary files /dev/null and b/zoo/OcCo/assets/semseg.png differ
diff --git a/zoo/OcCo/assets/svm.png b/zoo/OcCo/assets/svm.png
new file mode 100644
index 0000000..3a7da6a
Binary files /dev/null and b/zoo/OcCo/assets/svm.png differ
diff --git a/zoo/OcCo/assets/teaser.png b/zoo/OcCo/assets/teaser.png
new file mode 100644
index 0000000..6c41b0c
Binary files /dev/null and b/zoo/OcCo/assets/teaser.png differ
diff --git a/zoo/OcCo/assets/tsne.png b/zoo/OcCo/assets/tsne.png
new file mode 100644
index 0000000..eec3866
Binary files /dev/null and b/zoo/OcCo/assets/tsne.png differ
diff --git a/zoo/OcCo/readme.md b/zoo/OcCo/readme.md
new file mode 100644
index 0000000..0667e32
--- /dev/null
+++ b/zoo/OcCo/readme.md
@@ -0,0 +1,463 @@
+## OcCo: Unsupervised Point Cloud Pre-training via Occlusion Completion
+This repository is the official implementation of the paper "Unsupervised Point Cloud Pre-training via Occlusion Completion".
+
+[[Paper](https://arxiv.org/abs/2010.01089)] [[Project Page](https://hansen7.github.io/OcCo/)]
+
+### Intro
+
+
+
+In this work, we train a completion model that learns how to reconstruct the occluded points, given the partial observations. In this way, our method learns a pre-trained encoder that can identify the visual constraints inherently embedded in real-world point clouds.
+
+We call our method **Occlusion Completion (OcCo)**. We demonstrate that OcCo learns representations that: improve generalization on downstream tasks over prior pre-training methods, transfer to different datasets, reduce training time, and improve labeled sample efficiency.
+
+
+### Citation
+Our paper is available on arXiv:
+
+```
+@inproceedings{OcCo,
+ title = {Unsupervised Point Cloud Pre-Training via Occlusion Completion},
+ author = {Hanchen Wang and Qi Liu and Xiangyu Yue and Joan Lasenby and Matthew J. Kusner},
+ year = 2021,
+ booktitle = {International Conference on Computer Vision, ICCV}
+}
+```
+
+### Usage
+
+We provide code in both PyTorch (1.3), under OcCo_Torch, and TensorFlow (1.13-1.15), under OcCo_TF, together with the Docker configuration in docker. Our recommended development environment is PyTorch + Docker. The following descriptions are based on OcCo_Torch; please refer to the readme in OcCo_TF for details of the TensorFlow implementation.
+
+
+
+#### 1) Prerequisite
+
+##### Docker
+
+In the docker folder, we provide the build, configuration and launch scripts:
+
+```
+docker
+| - Dockerfile_Torch # configuration
+| - build_docker_torch.sh # scripts for building up from the docker images
+| - launch_docker_torch.sh # launch from the built image
+| - .dockerignore # ignore the log and data folder while building up
+```
+
+which can be set up as follows:
+
+```bash
+# build up from docker images
+cd OcCo_Torch/docker
+sh build_docker_torch.sh
+
+# launch the docker image, conduct completion/classification/segmentation experiments
+cd OcCo_Torch/docker
+sh launch_docker_torch.sh
+```
+
+##### Non-Docker Setup
+
+Run `pip install -r Requirements_Torch.txt` under `PyTorch 1.3.0, CUDA 10.1, CUDNN 7` (otherwise you may encounter errors while building the C++ extension chamfer_distance, which is used to compute the Chamfer Distance). Our development environment besides Docker is `Ubuntu 16.04.6 LTS, gcc/g++ 5.4.0, CUDA 10.1, CUDNN 7`.
+
+
+
+#### 2) Pre-Training via Occlusion Completion (OcCo)
+
+##### Data Usage:
+
+For details of the data setup, please see data/readme.md.
+
+##### Training Scripts:
+
+We unify the training of all three models (`PointNet`, `PCN` and `DGCNN`) in train_completion.py and provide matching bash templates; see bash_template/train_completion_template.sh for details:
+
+```bash
+#!/usr/bin/env bash
+
+cd ../
+
+# train pointnet-occo model on ModelNet, from scratch
+python train_completion.py \
+ --gpu 0,1 \
+ --dataset modelnet \
+ --model pointnet_occo \
+ --log_dir modelnet_pointnet_vanilla ;
+
+# train dgcnn-occo model on ShapeNet, from scratch
+python train_completion.py \
+ --gpu 0,1 \
+ --batch_size 16 \
+ --dataset shapenet \
+ --model dgcnn_occo \
+ --log_dir shapenet_dgcnn_vanilla ;
+```
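+
+For reference, the completion objective is the (symmetric) Chamfer distance between the completed and the ground-truth point clouds. Below is a minimal, naive PyTorch sketch of this loss for illustration only; the repository builds the faster C++ extension chamfer_distance instead:
+
+```python
+import torch
+
+def chamfer_distance(pred, gt):
+    """Naive symmetric Chamfer distance.
+    pred: (B, N, 3) completed point clouds
+    gt:   (B, M, 3) ground-truth point clouds
+    """
+    dist = torch.cdist(pred, gt) ** 2  # pairwise squared distances, (B, N, M)
+    # average nearest-neighbour distance in both directions
+    return dist.min(dim=2)[0].mean() + dist.min(dim=1)[0].mean()
+```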
+
+##### Pre-Trained Weights
+
+We provide the OcCo pre-trained weights for all three models [here](https://drive.google.com/drive/folders/15H1JH9oTfp_sVkj9nwgnThZHRI9ef2bT?usp=sharing). You can use them to visualise the completion of self-occluded point clouds, or to fine-tune on classification, scene semantic segmentation and object part segmentation tasks.
+
+
+
+#### 3) Sanity Check on Pre-Training
+
+We use single-channel values as well as t-SNE dimensionality reduction to visualise the learned embeddings of objects from ShapeNet10, where the encoders are pre-trained on ModelNet40; see utils/TSNE_Visu.py for details.
+
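+A minimal sketch of the t-SNE step, assuming the embeddings have already been extracted (as in train_svm.py) into an array `feats` with integer labels `labels` (the file names here are hypothetical; see utils/TSNE_Visu.py for the actual script):
+
+```python
+import numpy as np
+import matplotlib.pyplot as plt
+from sklearn.manifold import TSNE
+
+feats = np.load('feats.npy')    # (num_objects, emb_dims) pre-trained embeddings
+labels = np.load('labels.npy')  # (num_objects,) integer class ids
+
+# project the high-dimensional embeddings onto 2D for visualisation
+emb2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(feats)
+
+plt.scatter(emb2d[:, 0], emb2d[:, 1], c=labels, cmap='tab10', s=5)
+plt.title('t-SNE of the learned object embeddings')
+plt.savefig('tsne_embeddings.png', dpi=200)
+```
+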
+We also train a Support Vector Machine (SVM) on the learned embeddings for object recognition, implemented in train_svm.py. A bash template is provided as well; see bash_template/train_svm_template.sh for details:
+
+```bash
+#!/usr/bin/env bash
+
+cd ../
+
+# fit a simple linear SVM on ModelNet40 with OcCo PCN
+python train_svm.py \
+ --gpu 0 \
+ --model pcn_util \
+ --dataset modelnet40 \
+ --restore_path log/completion/modelnet_pcn_vanilla/checkpoints/best_model.pth ;
+
+# grid search the best svm parameters with rbf kernel on ScanObjectNN(OBJ_BG) with OcCo DGCNN
+python train_svm.py \
+ --gpu 0 \
+ --grid_search \
+ --batch_size 8 \
+ --model dgcnn_util \
+ --dataset scanobjectnn \
+ --bn \
+ --restore_path log/completion/modelnet_dgcnn_vanilla/checkpoints/best_model.pth ;
+```
+
+
+
+#### 4) Fine Tuning Task - Classification
+
+##### Data Usage:
+
+For details of the data setup, please see data/readme.md.
+
+##### Training/Testing Scripts:
+
+We unify the training and testing of all three models (`PointNet`, `PCN` and `DGCNN`) in train_cls.py. We also provide bash templates for training each model from scratch or from JigSaw/OcCo pre-trained checkpoints; see bash_template/train_cls_template.sh for details:
+
+```bash
+#!/usr/bin/env bash
+
+cd ../
+
+# training pointnet on ModelNet40, from scratch
+python train_cls.py \
+ --gpu 0 \
+ --model pointnet_cls \
+ --dataset modelnet40 \
+ --log_dir modelnet40_pointnet_scratch ;
+
+# fine tuning pcn on ScanNet10, using jigsaw pre-trained checkpoints
+python train_cls.py \
+ --gpu 0 \
+ --model pcn_cls \
+ --dataset scannet10 \
+ --log_dir scannet10_pcn_jigsaw \
+ --restore \
+ --restore_path log/completion/modelnet_pcn_vanilla/checkpoints/best_model.pth ;
+
+# fine tuning dgcnn on ScanObjectNN(OBJ_BG), using jigsaw pre-trained checkpoints
+python train_cls.py \
+ --gpu 0,1 \
+ --epoch 250 \
+ --use_sgd \
+ --scheduler cos \
+ --model dgcnn_cls \
+ --dataset scanobjectnn \
+ --bn \
+ --log_dir scanobjectnn_dgcnn_occo \
+ --restore \
+ --restore_path log/completion/modelnet_dgcnn_vanilla/checkpoints/best_model.pth ;
+
+# test pointnet on ModelNet40 from pre-trained checkpoints
+python train_cls.py \
+ --gpu 1 \
+ --mode test \
+ --model pointnet_cls \
+ --dataset modelnet40 \
+ --log_dir modelnet40_pointnet_scratch \
+ --restore \
+ --restore_path log/cls/modelnet40_pointnet_scratch/checkpoints/best_model.pth ;
+```
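+
+Under the hood, `--restore` only copies the pre-trained tensors whose names and shapes match the target model (see utils/Torch_Utility.py). A minimal sketch of such partial state-dict loading, assuming the checkpoint stores a `model_state_dict` entry as in the training scripts above:
+
+```python
+import torch
+
+def restore_matching(model, ckpt_path):
+    """Copy only the checkpoint tensors that match the model in name and shape."""
+    pretrained = torch.load(ckpt_path, map_location='cpu')['model_state_dict']
+    model_dict = model.state_dict()
+    matched = {k: v for k, v in pretrained.items()
+               if k in model_dict and v.shape == model_dict[k].shape}
+    model_dict.update(matched)
+    model.load_state_dict(model_dict)
+    print('restored %d/%d tensors' % (len(matched), len(model_dict)))
+    return model
+```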
+
+
+
+#### 5) Fine Tuning Task - Semantic Segmentation
+
+##### Data Usage:
+
+For details of the data setup, please see data/readme.md.
+
+##### Training/Testing Scripts:
+
+We unify the training and testing of all three models (PointNet, PCN and DGCNN) in train_semseg.py. We also provide bash templates for training each model from scratch or from JigSaw/OcCo pre-trained checkpoints; see bash_template/train_semseg_template.sh for details:
+
+```bash
+#!/usr/bin/env bash
+
+cd ../
+
+# train pointnet_semseg on 6-fold cv of S3DIS, from scratch
+for area in $(seq 1 1 6)
+do
+python train_semseg.py \
+ --gpu 0,1 \
+ --model pointnet_semseg \
+ --bn_decay \
+ --xavier_init \
+ --test_area ${area} \
+ --scheduler step \
+ --log_dir pointnet_area${area}_scratch ;
+done
+
+# fine tune pcn_semseg on 6-fold cv of S3DIS, using jigsaw pre-trained weights
+for area in $(seq 1 1 6)
+do
+python train_semseg.py \
+ --gpu 0,1 \
+ --model pcn_semseg \
+ --bn_decay \
+ --test_area ${area} \
+ --log_dir pcn_area${area}_jigsaw \
+ --restore \
+ --restore_path log/jigsaw/modelnet_pcn_vanilla/checkpoints/best_model.pth ;
+done
+
+# fine tune dgcnn_semseg on 6-fold cv of S3DIS, using occo pre-trained weights
+for area in $(seq 1 1 6)
+do
+python train_semseg.py \
+ --gpu 0,1 \
+ --test_area ${area} \
+ --optimizer sgd \
+ --scheduler cos \
+ --model dgcnn_semseg \
+ --log_dir dgcnn_area${area}_occo \
+ --restore \
+ --restore_path log/completion/modelnet_dgcnn_vanilla/checkpoints/best_model.pth ;
+done
+
+# test pointnet_semseg on 6-fold cv of S3DIS, from saved checkpoints
+for area in $(seq 1 1 6)
+do
+python train_semseg.py \
+ --gpu 0,1 \
+ --mode test \
+ --model pointnet_semseg \
+ --test_area ${area} \
+ --scheduler step \
+ --log_dir pointnet_area${area}_scratch \
+ --restore \
+ --restore_path log/semseg/pointnet_area${area}_scratch/checkpoints/best_model.pth ;
+done
+```
+
+
+
+##### Visualization:
+
+We recommend using the relevant code snippets in [RandLA-Net](https://github.com/QingyongHu/RandLA-Net) for visualization.
+
+
+
+#### 6) Fine Tuning Task - Part Segmentation
+
+##### Data Usage:
+
+For details of the data setup, please see data/readme.md.
+
+##### Training/Testing Scripts:
+
+We unify the training and testing of all three models (PointNet, PCN and DGCNN) in train_partseg.py. We also provide bash templates for training each model from scratch or from JigSaw/OcCo pre-trained checkpoints; see bash_template/train_partseg_template.sh for details:
+
+```bash
+#!/usr/bin/env bash
+
+cd ../
+
+# training pointnet on ShapeNetPart, from scratch
+python train_partseg.py \
+ --gpu 0 \
+ --normal \
+ --bn_decay \
+ --xavier_init \
+ --model pointnet_partseg \
+ --log_dir pointnet_scratch ;
+
+
+# fine tuning pcn on ShapeNetPart, using jigsaw pre-trained checkpoints
+python train_partseg.py \
+ --gpu 0 \
+ --normal \
+ --bn_decay \
+ --xavier_init \
+ --model pcn_partseg \
+ --log_dir pcn_jigsaw \
+ --restore \
+ --restore_path log/jigsaw/modelnet_pcn_vanilla/checkpoints/best_model.pth ;
+
+
+# fine tuning dgcnn on ShapeNetPart, using occo pre-trained checkpoints
+python train_partseg.py \
+ --gpu 0,1 \
+ --normal \
+ --use_sgd \
+ --xavier_init \
+ --scheduler cos \
+ --model dgcnn_partseg \
+ --log_dir dgcnn_occo \
+ --restore \
+ --restore_path log/completion/modelnet_dgcnn_vanilla/checkpoints/best_model.pth ;
+
+
+# test fine tuned pointnet on ShapeNetPart, using multiple votes
+python train_partseg.py \
+ --gpu 1 \
+ --epoch 1 \
+ --mode test \
+ --num_votes 3 \
+ --model pointnet_partseg \
+ --log_dir pointnet_scratch \
+ --restore \
+ --restore_path log/partseg/pointnet_occo/checkpoints/best_model.pth ;
+```
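+
+With `--num_votes`, the test-time predictions are averaged over several forward passes before taking the argmax. A minimal sketch of this vote averaging (the per-pass random augmentation is omitted here for brevity, and the model is assumed to return raw logits):
+
+```python
+import torch
+
+def predict_with_votes(model, points, num_votes=3):
+    """Average the softmax predictions of several forward passes."""
+    model.eval()
+    with torch.no_grad():
+        votes = 0
+        for _ in range(num_votes):
+            # a random augmentation (e.g. scaling or jitter) could be applied per pass
+            votes = votes + torch.softmax(model(points), dim=-1)
+    return (votes / num_votes).argmax(dim=-1)
+```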
+
+
+
+#### 7) OcCo Data Generation (Create Your Own Dataset for OcCo Pre-Training)
+
+For details of the self-occluded point cloud generation, please see render/readme.md.
+
+
+
+#### 8) Just Completion (Complete Your Own Data with Pre-Trained Model)
+
+You can complete your own occluded point cloud data with our provided OcCo checkpoints.
+
+
+
+#### 9) Jigsaw Puzzle
+
+We also provide our implementation (developed from scratch) of pre-training point cloud models by solving 3D jigsaw puzzles, together with the corresponding data generation. The method is described in this [paper](https://papers.nips.cc/paper/9455-self-supervised-deep-learning-on-point-clouds-by-reconstructing-space.pdf), whose authors did not respond to our code request; the details of our implementation are reported in the appendix of our paper.
+
+For the relevant code, see train_jigsaw.py, utils/3DPC_Data_Gen.py and train_jigsaw_template.sh.
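+
+As a rough illustration of the task (our sketch, not the exact implementation): each object is split into a k x k x k voxel grid, the voxels are randomly rearranged, and the model is trained to predict the original voxel id of every point, as a segmentation-style objective:
+
+```python
+import numpy as np
+
+def make_jigsaw_sample(points, k=3):
+    """points: (N, 3) array, assumed normalised into [-1, 1]^3."""
+    # label every point with the id of the voxel it falls into
+    vox = np.clip(((points + 1) / 2 * k).astype(int), 0, k - 1)
+    labels = vox[:, 0] * k * k + vox[:, 1] * k + vox[:, 2]
+
+    # voxel centres in [-1, 1]^3, then a random permutation of the voxels
+    centers = (np.stack(np.meshgrid(*[np.arange(k)] * 3, indexing='ij'),
+                        -1).reshape(-1, 3) + 0.5) / k * 2 - 1
+    perm = np.random.permutation(k ** 3)
+
+    # move each point along with its voxel; the model has to recover `labels`
+    shifted = points + centers[perm[labels]] - centers[labels]
+    return shifted, labels
+```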
+
+
+
+### Results
+
+##### Generated Dataset:
+
+
+
+
+
+##### Completed Occluded Point Cloud:
+
+-- PointNet:
+
+
+
+
+
+
+-- PCN:
+
+
+
+
+-- DGCNN:
+
+
+
+
+
+-- Failure Examples:
+
+
+
+
+
+##### Visualization of learned features:
+
+
+
+
+
+##### Classification (linear SVM):
+
+
+
+
+
+
+##### Classification:
+
+
+
+
+##### Semantic Segmentation:
+
+
+
+
+##### Part Segmentation:
+
+
+
+
+
+
+##### Sample Efficiency:
+
+
+
+
+
+
+##### Learning Efficiency:
+
+
+
+
+
+For the description and discussion of the results, please refer to our paper, thanks :)
+
+
+
+### Contributing
+
+The code of this project is released under the MIT License.
+
+We would like to thank and acknowledge referenced codes from the following repositories:
+
+https://github.com/wentaoyuan/pcn
+
+https://github.com/hansen7/NRS_3D
+
+https://github.com/WangYueFt/dgcnn
+
+https://github.com/charlesq34/pointnet
+
+https://github.com/charlesq34/pointnet2
+
+https://github.com/PointCloudLibrary/pcl
+
+https://github.com/AnTao97/dgcnn.pytorch
+
+https://github.com/HuguesTHOMAS/KPConv
+
+https://github.com/QingyongHu/RandLA-Net
+
+https://github.com/chrdiller/pyTorchChamferDistance
+
+https://github.com/yanx27/Pointnet_Pointnet2_pytorch
+
+https://github.com/AnTao97/UnsupervisedPointCloudReconstruction
+
+We appreciate the help from the supportive technicians, Peter and Raf, from Cambridge Engineering :)
diff --git a/zoo/OcCo/render/Depth_Renderer.py b/zoo/OcCo/render/Depth_Renderer.py
new file mode 100644
index 0000000..9ef26b2
--- /dev/null
+++ b/zoo/OcCo/render/Depth_Renderer.py
@@ -0,0 +1,145 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/wentaoyuan/pcn/blob/master/render/render_depth.py
+# Usage: blender -b -P Depth_Renderer.py [ShapeNet directory] [model list] [output directory] [num scans per model]
+
+import os, sys, bpy, time, mathutils, numpy as np
+
+
+def random_pose():
+ """generate a random camera pose"""
+ angle_x = np.random.uniform() * 2 * np.pi
+ angle_y = np.random.uniform() * 2 * np.pi
+ angle_z = np.random.uniform() * 2 * np.pi
+ Rx = np.array([[1, 0, 0],
+ [0, np.cos(angle_x), -np.sin(angle_x)],
+ [0, np.sin(angle_x), np.cos(angle_x)]])
+ Ry = np.array([[np.cos(angle_y), 0, np.sin(angle_y)],
+ [0, 1, 0],
+ [-np.sin(angle_y), 0, np.cos(angle_y)]])
+ Rz = np.array([[np.cos(angle_z), -np.sin(angle_z), 0],
+ [np.sin(angle_z), np.cos(angle_z), 0],
+ [0, 0, 1]])
+ R = np.dot(Rz, np.dot(Ry, Rx))
+ # a rotation matrix with arbitrarily chosen yaw, pitch, roll
+ # Set camera pointing to the origin and 1 unit away from the origin
+ t = np.expand_dims(R[:, 2], 1) # select the third column, reshape into (3, 1)-vector
+
+ # pose -> 4 * 4
+ pose = np.concatenate([np.concatenate([R, t], 1), [[0, 0, 0, 1]]], 0)
+ return pose
+
+
+def setup_blender(width, height, focal_length):
+ """using blender to rendering a scene"""
+ # camera, a class in the bpy
+ camera = bpy.data.objects['Camera']
+ camera.data.angle = np.arctan(width / 2 / focal_length) * 2
+
+ # render layer
+ scene = bpy.context.scene
+ scene.render.filepath = 'buffer'
+ scene.render.image_settings.color_depth = '16'
+ scene.render.resolution_percentage = 100
+ scene.render.resolution_x = width
+ scene.render.resolution_y = height
+
+ # compositor nodes
+ scene.use_nodes = True
+ tree = scene.node_tree
+ rl = tree.nodes.new('CompositorNodeRLayers')
+ output = tree.nodes.new('CompositorNodeOutputFile')
+ output.base_path = ''
+ output.format.file_format = 'OPEN_EXR'
+ # tree.links.new(rl.outputs['Depth'], output.inputs[0])
+ tree.links.new(rl.outputs['Z'], output.inputs[0])
+ # ref: https://github.com/panmari/stanford-shapenet-renderer/issues/8
+
+ # remove default cube
+ bpy.data.objects['Cube'].select = True
+ bpy.ops.object.delete()
+
+ return scene, camera, output
+
+
+if __name__ == '__main__':
+ # Usage: blender -b -P Depth_Renderer.py [ShapeNet directory] [model list] [output directory] [num scans per model]
+ model_dir = sys.argv[-4]
+ list_path = sys.argv[-3]
+ output_dir = sys.argv[-2]
+ num_scans = int(sys.argv[-1])
+
+ '''Generate Intrinsic Camera Matrix'''
+ # High Resolution: width = 1600,
+ # Middle Resolution: width = 1600//4,
+ # Coarse Resolution: width = 1600//10,
+
+ width = 1600//4
+ height = 1200//4
+ focal = 1000//4
+ scene, camera, output = setup_blender(width, height, focal)
+    # the principal point is at the image centre; the focal length is in pixels
+ intrinsics = np.array([[focal, 0, width / 2], [0, focal, height / 2], [0, 0, 1]])
+
+ # os.system('rm -rf %s' % output_dir)
+ if not os.path.exists(output_dir):
+ os.makedirs(output_dir)
+ with open(os.path.join(list_path)) as file:
+ model_list = [line.strip() for line in file]
+ open(os.path.join(output_dir, 'blender.log'), 'w+').close()
+ np.savetxt(os.path.join(output_dir, 'intrinsics.txt'), intrinsics, '%f')
+ # camera-referenced system
+
+ num_total_f = len(model_list)
+
+ start = time.time()
+ '''rendering from the mesh to 2.5D depth images'''
+ for idx, model_id in enumerate(model_list):
+ # start = time.time()
+ exr_dir = os.path.join(output_dir, 'exr', model_id)
+ pose_dir = os.path.join(output_dir, 'pose', model_id)
+ os.makedirs(exr_dir)
+ os.makedirs(pose_dir)
+ # os.removedirs(exr_dir)
+ # os.removedirs(pose_dir)
+
+ # Redirect output to log file
+ old_os_out = os.dup(1)
+ os.close(1)
+ os.open(os.path.join(output_dir, 'blender.log'), os.O_WRONLY)
+
+ # Import mesh model
+ # model_path = os.path.join(model_dir, model_id, 'models/model_normalized.obj')
+ # bpy.ops.import_scene.obj(filepath=model_path)
+
+ model_path = os.path.join(model_dir, model_id + '.obj')
+ bpy.ops.import_scene.obj(filepath=model_path)
+
+ # Rotate model by 90 degrees around x-axis (z-up => y-up) to match ShapeNet's coordinates
+ bpy.ops.transform.rotate(value=-np.pi / 2, axis=(1, 0, 0))
+
+ # Render
+ for i in range(num_scans):
+ scene.frame_set(i)
+ pose = random_pose()
+ camera.matrix_world = mathutils.Matrix(pose)
+ # output.file_slots[0].path = os.path.join(exr_dir, '#.exr')
+ output.file_slots[0].path = exr_dir + '_#.exr'
+ bpy.ops.render.render(write_still=True)
+ # np.savetxt(os.path.join(pose_dir, '%d.txt' % i), pose, '%f')
+ np.savetxt(pose_dir + '_%d.txt' % i, pose, '%f')
+
+ # Clean up
+ bpy.ops.object.delete()
+ for m in bpy.data.meshes:
+ bpy.data.meshes.remove(m)
+ for m in bpy.data.materials:
+ m.user_clear()
+ bpy.data.materials.remove(m)
+
+ # Print used time
+ os.close(1)
+ os.dup(old_os_out)
+ os.close(old_os_out)
+ print('%d/%d: %s done, time=%.4f sec' % (idx + 1, num_total_f, model_id, time.time() - start))
+ os.removedirs(exr_dir)
+ os.removedirs(pose_dir)
diff --git a/zoo/OcCo/render/EXR_Process.py b/zoo/OcCo/render/EXR_Process.py
new file mode 100644
index 0000000..235ed6f
--- /dev/null
+++ b/zoo/OcCo/render/EXR_Process.py
@@ -0,0 +1,96 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+# Ref: https://github.com/wentaoyuan/pcn/blob/master/render/process_exr.py
+
+import os, array, Imath, OpenEXR, argparse, numpy as np, matplotlib.pyplot as plt
+from open3d.open3d.geometry import PointCloud
+from open3d.open3d.utility import Vector3dVector
+from open3d.open3d.io import write_point_cloud
+from tqdm import tqdm
+
+
+def read_exr(exr_path, height, width):
+ """from EXR files to extract depth information"""
+ file = OpenEXR.InputFile(exr_path)
+ depth_arr = array.array('f', file.channel('R', Imath.PixelType(Imath.PixelType.FLOAT)))
+ depth = np.array(depth_arr).reshape((height, width))
+ depth[depth < 0] = 0
+ depth[np.isinf(depth)] = 0
+ return depth
+
+
+def depth2pcd(depth, intrinsics, pose):
+ """backproject to points cloud from 2.5D depth images"""
+ inv_K = np.linalg.inv(intrinsics)
+ inv_K[2, 2] = -1
+ depth = np.flipud(depth) # upside-down
+
+ y, x = np.where(depth > 0)
+ # image coordinates -> camera coordinates
+    points = np.dot(inv_K, np.stack([x, y, np.ones_like(x)], 0) * depth[y, x])
+ # camera coordinates -> world coordinates
+ points = np.dot(pose, np.concatenate([points, np.ones((1, points.shape[1]))], 0)).T[:, :3]
+ return points
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--list_file', default=r'./ModelNet_Flist.txt')
+    parser.add_argument('--intrinsics_file', default=r'./intrinsics.txt')
+ parser.add_argument('--output_dir', default=r'./dump')
+    parser.add_argument('--num_scans', type=int, default=10)
+ args = parser.parse_args()
+
+ with open(args.list_file) as file:
+ model_list = file.read().splitlines()
+ intrinsics = np.loadtxt(args.intrinsics_file)
+ width = int(intrinsics[0, 2] * 2)
+ height = int(intrinsics[1, 2] * 2)
+
+ counter = 0
+
+ for model_id in tqdm(model_list):
+ depth_dir = os.path.join(args.output_dir, 'depth')
+ pcd_dir = os.path.join(args.output_dir, 'pcd', model_id)
+ os.makedirs(depth_dir, exist_ok=True)
+ os.makedirs(pcd_dir, exist_ok=True)
+ for i in range(args.num_scans):
+ counter += 1
+
+ exr_path = os.path.join(args.output_dir, 'exr', model_id + '_%d.exr' % i)
+ pose_path = os.path.join(args.output_dir, 'pose', model_id + '_%d.txt' % i)
+ depth_path = os.path.join(args.output_dir, 'depth', model_id + '_%d.npy' % i)
+ depth = read_exr(exr_path, height, width)
+ np.save(depth_path, np.array(depth))
+ # depth_img = Image(np.uint16(depth * 1000))
+ # write_image(os.path.join(depth_dir, '%s_%d.png' % (model_id, i)), depth_img)
+
+ if counter % 1 == 0:
+ counter = 1
+ plt.figure(figsize=(16, 10))
+ plt.imshow(np.array(depth), cmap='inferno')
+ plt.colorbar(label='Normalised Distance to Camera')
+ plt.title('Depth image')
+ plt.xlabel('X Pixel')
+ plt.ylabel('Y Pixel')
+ plt.tight_layout()
+ plt.savefig(os.path.join(depth_dir, model_id.split('/')[-1] + '_%d.png' % i), dpi=200)
+ plt.close()
+
+ pose = np.loadtxt(pose_path)
+ points = depth2pcd(depth, intrinsics, pose)
+            pcd = PointCloud()
+            try:
+                normalised_points = points/((points**2).sum(axis=1).max())
+            except Exception:
+                print('there is an exception in the partial normalisation process: ', model_id, i)
+
+            # if something goes wrong in the normalisation process, the previously
+            # normalised point cloud of the current object would be saved instead
+            # pcd.points = Vector3dVector(normalised_points)
+
+            pcd.points = Vector3dVector(points)
+            write_point_cloud(pcd_dir + '_%d.pcd' % i, pcd)
+
+ # os.removedirs(depth_dir)
+ os.removedirs(pcd_dir)
diff --git a/zoo/OcCo/render/ModelNet_Flist.txt b/zoo/OcCo/render/ModelNet_Flist.txt
new file mode 100644
index 0000000..d7b7391
--- /dev/null
+++ b/zoo/OcCo/render/ModelNet_Flist.txt
@@ -0,0 +1,12311 @@
+mantel/train/mantel_0029_normalised
+mantel/train/mantel_0037_normalised
+mantel/train/mantel_0210_normalised
+mantel/train/mantel_0284_normalised
+mantel/train/mantel_0241_normalised
+mantel/train/mantel_0208_normalised
+mantel/train/mantel_0202_normalised
+mantel/train/mantel_0039_normalised
+mantel/train/mantel_0006_normalised
+mantel/train/mantel_0252_normalised
+mantel/train/mantel_0074_normalised
+mantel/train/mantel_0015_normalised
+mantel/train/mantel_0267_normalised
+mantel/train/mantel_0181_normalised
+mantel/train/mantel_0212_normalised
+mantel/train/mantel_0004_normalised
+mantel/train/mantel_0114_normalised
+mantel/train/mantel_0155_normalised
+mantel/train/mantel_0274_normalised
+mantel/train/mantel_0223_normalised
+mantel/train/mantel_0135_normalised
+mantel/train/mantel_0083_normalised
+mantel/train/mantel_0028_normalised
+mantel/train/mantel_0229_normalised
+mantel/train/mantel_0185_normalised
+mantel/train/mantel_0234_normalised
+mantel/train/mantel_0244_normalised
+mantel/train/mantel_0151_normalised
+mantel/train/mantel_0128_normalised
+mantel/train/mantel_0113_normalised
+mantel/train/mantel_0157_normalised
+mantel/train/mantel_0044_normalised
+mantel/train/mantel_0097_normalised
+mantel/train/mantel_0048_normalised
+mantel/train/mantel_0127_normalised
+mantel/train/mantel_0054_normalised
+mantel/train/mantel_0136_normalised
+mantel/train/mantel_0250_normalised
+mantel/train/mantel_0279_normalised
+mantel/train/mantel_0012_normalised
+mantel/train/mantel_0261_normalised
+mantel/train/mantel_0056_normalised
+mantel/train/mantel_0163_normalised
+mantel/train/mantel_0175_normalised
+mantel/train/mantel_0154_normalised
+mantel/train/mantel_0152_normalised
+mantel/train/mantel_0222_normalised
+mantel/train/mantel_0003_normalised
+mantel/train/mantel_0017_normalised
+mantel/train/mantel_0071_normalised
+mantel/train/mantel_0139_normalised
+mantel/train/mantel_0211_normalised
+mantel/train/mantel_0068_normalised
+mantel/train/mantel_0230_normalised
+mantel/train/mantel_0183_normalised
+mantel/train/mantel_0069_normalised
+mantel/train/mantel_0180_normalised
+mantel/train/mantel_0059_normalised
+mantel/train/mantel_0182_normalised
+mantel/train/mantel_0138_normalised
+mantel/train/mantel_0132_normalised
+mantel/train/mantel_0087_normalised
+mantel/train/mantel_0032_normalised
+mantel/train/mantel_0100_normalised
+mantel/train/mantel_0049_normalised
+mantel/train/mantel_0115_normalised
+mantel/train/mantel_0177_normalised
+mantel/train/mantel_0111_normalised
+mantel/train/mantel_0118_normalised
+mantel/train/mantel_0081_normalised
+mantel/train/mantel_0067_normalised
+mantel/train/mantel_0107_normalised
+mantel/train/mantel_0153_normalised
+mantel/train/mantel_0243_normalised
+mantel/train/mantel_0103_normalised
+mantel/train/mantel_0207_normalised
+mantel/train/mantel_0131_normalised
+mantel/train/mantel_0173_normalised
+mantel/train/mantel_0036_normalised
+mantel/train/mantel_0001_normalised
+mantel/train/mantel_0174_normalised
+mantel/train/mantel_0248_normalised
+mantel/train/mantel_0188_normalised
+mantel/train/mantel_0079_normalised
+mantel/train/mantel_0200_normalised
+mantel/train/mantel_0240_normalised
+mantel/train/mantel_0228_normalised
+mantel/train/mantel_0171_normalised
+mantel/train/mantel_0193_normalised
+mantel/train/mantel_0213_normalised
+mantel/train/mantel_0220_normalised
+mantel/train/mantel_0053_normalised
+mantel/train/mantel_0122_normalised
+mantel/train/mantel_0051_normalised
+mantel/train/mantel_0046_normalised
+mantel/train/mantel_0216_normalised
+mantel/train/mantel_0224_normalised
+mantel/train/mantel_0101_normalised
+mantel/train/mantel_0253_normalised
+mantel/train/mantel_0278_normalised
+mantel/train/mantel_0055_normalised
+mantel/train/mantel_0194_normalised
+mantel/train/mantel_0242_normalised
+mantel/train/mantel_0260_normalised
+mantel/train/mantel_0206_normalised
+mantel/train/mantel_0257_normalised
+mantel/train/mantel_0050_normalised
+mantel/train/mantel_0172_normalised
+mantel/train/mantel_0130_normalised
+mantel/train/mantel_0057_normalised
+mantel/train/mantel_0007_normalised
+mantel/train/mantel_0259_normalised
+mantel/train/mantel_0245_normalised
+mantel/train/mantel_0145_normalised
+mantel/train/mantel_0099_normalised
+mantel/train/mantel_0226_normalised
+mantel/train/mantel_0108_normalised
+mantel/train/mantel_0179_normalised
+mantel/train/mantel_0043_normalised
+mantel/train/mantel_0065_normalised
+mantel/train/mantel_0110_normalised
+mantel/train/mantel_0192_normalised
+mantel/train/mantel_0156_normalised
+mantel/train/mantel_0196_normalised
+mantel/train/mantel_0124_normalised
+mantel/train/mantel_0137_normalised
+mantel/train/mantel_0021_normalised
+mantel/train/mantel_0013_normalised
+mantel/train/mantel_0016_normalised
+mantel/train/mantel_0060_normalised
+mantel/train/mantel_0249_normalised
+mantel/train/mantel_0121_normalised
+mantel/train/mantel_0197_normalised
+mantel/train/mantel_0275_normalised
+mantel/train/mantel_0277_normalised
+mantel/train/mantel_0010_normalised
+mantel/train/mantel_0143_normalised
+mantel/train/mantel_0041_normalised
+mantel/train/mantel_0063_normalised
+mantel/train/mantel_0201_normalised
+mantel/train/mantel_0062_normalised
+mantel/train/mantel_0042_normalised
+mantel/train/mantel_0119_normalised
+mantel/train/mantel_0027_normalised
+mantel/train/mantel_0026_normalised
+mantel/train/mantel_0178_normalised
+mantel/train/mantel_0129_normalised
+mantel/train/mantel_0120_normalised
+mantel/train/mantel_0160_normalised
+mantel/train/mantel_0203_normalised
+mantel/train/mantel_0058_normalised
+mantel/train/mantel_0025_normalised
+mantel/train/mantel_0272_normalised
+mantel/train/mantel_0251_normalised
+mantel/train/mantel_0162_normalised
+mantel/train/mantel_0147_normalised
+mantel/train/mantel_0247_normalised
+mantel/train/mantel_0077_normalised
+mantel/train/mantel_0022_normalised
+mantel/train/mantel_0190_normalised
+mantel/train/mantel_0011_normalised
+mantel/train/mantel_0024_normalised
+mantel/train/mantel_0076_normalised
+mantel/train/mantel_0187_normalised
+mantel/train/mantel_0283_normalised
+mantel/train/mantel_0085_normalised
+mantel/train/mantel_0094_normalised
+mantel/train/mantel_0064_normalised
+mantel/train/mantel_0199_normalised
+mantel/train/mantel_0106_normalised
+mantel/train/mantel_0084_normalised
+mantel/train/mantel_0019_normalised
+mantel/train/mantel_0080_normalised
+mantel/train/mantel_0186_normalised
+mantel/train/mantel_0133_normalised
+mantel/train/mantel_0070_normalised
+mantel/train/mantel_0263_normalised
+mantel/train/mantel_0045_normalised
+mantel/train/mantel_0142_normalised
+mantel/train/mantel_0170_normalised
+mantel/train/mantel_0198_normalised
+mantel/train/mantel_0165_normalised
+mantel/train/mantel_0093_normalised
+mantel/train/mantel_0146_normalised
+mantel/train/mantel_0235_normalised
+mantel/train/mantel_0237_normalised
+mantel/train/mantel_0112_normalised
+mantel/train/mantel_0219_normalised
+mantel/train/mantel_0169_normalised
+mantel/train/mantel_0281_normalised
+mantel/train/mantel_0141_normalised
+mantel/train/mantel_0109_normalised
+mantel/train/mantel_0030_normalised
+mantel/train/mantel_0218_normalised
+mantel/train/mantel_0276_normalised
+mantel/train/mantel_0238_normalised
+mantel/train/mantel_0092_normalised
+mantel/train/mantel_0023_normalised
+mantel/train/mantel_0227_normalised
+mantel/train/mantel_0159_normalised
+mantel/train/mantel_0005_normalised
+mantel/train/mantel_0232_normalised
+mantel/train/mantel_0255_normalised
+mantel/train/mantel_0282_normalised
+mantel/train/mantel_0018_normalised
+mantel/train/mantel_0038_normalised
+mantel/train/mantel_0280_normalised
+mantel/train/mantel_0231_normalised
+mantel/train/mantel_0268_normalised
+mantel/train/mantel_0221_normalised
+mantel/train/mantel_0040_normalised
+mantel/train/mantel_0166_normalised
+mantel/train/mantel_0265_normalised
+mantel/train/mantel_0184_normalised
+mantel/train/mantel_0073_normalised
+mantel/train/mantel_0066_normalised
+mantel/train/mantel_0140_normalised
+mantel/train/mantel_0176_normalised
+mantel/train/mantel_0236_normalised
+mantel/train/mantel_0264_normalised
+mantel/train/mantel_0191_normalised
+mantel/train/mantel_0266_normalised
+mantel/train/mantel_0104_normalised
+mantel/train/mantel_0271_normalised
+mantel/train/mantel_0089_normalised
+mantel/train/mantel_0189_normalised
+mantel/train/mantel_0150_normalised
+mantel/train/mantel_0033_normalised
+mantel/train/mantel_0158_normalised
+mantel/train/mantel_0075_normalised
+mantel/train/mantel_0144_normalised
+mantel/train/mantel_0078_normalised
+mantel/train/mantel_0034_normalised
+mantel/train/mantel_0125_normalised
+mantel/train/mantel_0215_normalised
+mantel/train/mantel_0082_normalised
+mantel/train/mantel_0273_normalised
+mantel/train/mantel_0047_normalised
+mantel/train/mantel_0009_normalised
+mantel/train/mantel_0086_normalised
+mantel/train/mantel_0262_normalised
+mantel/train/mantel_0002_normalised
+mantel/train/mantel_0052_normalised
+mantel/train/mantel_0091_normalised
+mantel/train/mantel_0088_normalised
+mantel/train/mantel_0204_normalised
+mantel/train/mantel_0225_normalised
+mantel/train/mantel_0134_normalised
+mantel/train/mantel_0126_normalised
+mantel/train/mantel_0031_normalised
+mantel/train/mantel_0246_normalised
+mantel/train/mantel_0008_normalised
+mantel/train/mantel_0148_normalised
+mantel/train/mantel_0217_normalised
+mantel/train/mantel_0205_normalised
+mantel/train/mantel_0102_normalised
+mantel/train/mantel_0072_normalised
+mantel/train/mantel_0149_normalised
+mantel/train/mantel_0105_normalised
+mantel/train/mantel_0090_normalised
+mantel/train/mantel_0168_normalised
+mantel/train/mantel_0239_normalised
+mantel/train/mantel_0269_normalised
+mantel/train/mantel_0014_normalised
+mantel/train/mantel_0161_normalised
+mantel/train/mantel_0096_normalised
+mantel/train/mantel_0117_normalised
+mantel/train/mantel_0061_normalised
+mantel/train/mantel_0258_normalised
+mantel/train/mantel_0256_normalised
+mantel/train/mantel_0123_normalised
+mantel/train/mantel_0164_normalised
+mantel/train/mantel_0095_normalised
+mantel/train/mantel_0214_normalised
+mantel/train/mantel_0167_normalised
+mantel/train/mantel_0254_normalised
+mantel/train/mantel_0209_normalised
+mantel/train/mantel_0020_normalised
+mantel/train/mantel_0116_normalised
+mantel/train/mantel_0035_normalised
+mantel/train/mantel_0195_normalised
+mantel/train/mantel_0270_normalised
+mantel/train/mantel_0098_normalised
+mantel/train/mantel_0233_normalised
+mantel/test/mantel_0336_normalised
+mantel/test/mantel_0290_normalised
+mantel/test/mantel_0285_normalised
+mantel/test/mantel_0346_normalised
+mantel/test/mantel_0358_normalised
+mantel/test/mantel_0339_normalised
+mantel/test/mantel_0337_normalised
+mantel/test/mantel_0303_normalised
+mantel/test/mantel_0371_normalised
+mantel/test/mantel_0342_normalised
+mantel/test/mantel_0354_normalised
+mantel/test/mantel_0310_normalised
+mantel/test/mantel_0348_normalised
+mantel/test/mantel_0316_normalised
+mantel/test/mantel_0345_normalised
+mantel/test/mantel_0326_normalised
+mantel/test/mantel_0318_normalised
+mantel/test/mantel_0364_normalised
+mantel/test/mantel_0309_normalised
+mantel/test/mantel_0287_normalised
+mantel/test/mantel_0300_normalised
+mantel/test/mantel_0296_normalised
+mantel/test/mantel_0352_normalised
+mantel/test/mantel_0320_normalised
+mantel/test/mantel_0294_normalised
+mantel/test/mantel_0297_normalised
+mantel/test/mantel_0363_normalised
+mantel/test/mantel_0332_normalised
+mantel/test/mantel_0341_normalised
+mantel/test/mantel_0302_normalised
+mantel/test/mantel_0323_normalised
+mantel/test/mantel_0292_normalised
+mantel/test/mantel_0305_normalised
+mantel/test/mantel_0286_normalised
+mantel/test/mantel_0291_normalised
+mantel/test/mantel_0353_normalised
+mantel/test/mantel_0334_normalised
+mantel/test/mantel_0373_normalised
+mantel/test/mantel_0382_normalised
+mantel/test/mantel_0370_normalised
+mantel/test/mantel_0333_normalised
+mantel/test/mantel_0331_normalised
+mantel/test/mantel_0375_normalised
+mantel/test/mantel_0338_normalised
+mantel/test/mantel_0301_normalised
+mantel/test/mantel_0325_normalised
+mantel/test/mantel_0374_normalised
+mantel/test/mantel_0369_normalised
+mantel/test/mantel_0367_normalised
+mantel/test/mantel_0384_normalised
+mantel/test/mantel_0335_normalised
+mantel/test/mantel_0299_normalised
+mantel/test/mantel_0365_normalised
+mantel/test/mantel_0314_normalised
+mantel/test/mantel_0308_normalised
+mantel/test/mantel_0307_normalised
+mantel/test/mantel_0313_normalised
+mantel/test/mantel_0315_normalised
+mantel/test/mantel_0343_normalised
+mantel/test/mantel_0368_normalised
+mantel/test/mantel_0360_normalised
+mantel/test/mantel_0324_normalised
+mantel/test/mantel_0383_normalised
+mantel/test/mantel_0340_normalised
+mantel/test/mantel_0361_normalised
+mantel/test/mantel_0351_normalised
+mantel/test/mantel_0322_normalised
+mantel/test/mantel_0372_normalised
+mantel/test/mantel_0349_normalised
+mantel/test/mantel_0357_normalised
+mantel/test/mantel_0312_normalised
+mantel/test/mantel_0329_normalised
+mantel/test/mantel_0379_normalised
+mantel/test/mantel_0295_normalised
+mantel/test/mantel_0311_normalised
+mantel/test/mantel_0347_normalised
+mantel/test/mantel_0304_normalised
+mantel/test/mantel_0377_normalised
+mantel/test/mantel_0319_normalised
+mantel/test/mantel_0330_normalised
+mantel/test/mantel_0317_normalised
+mantel/test/mantel_0288_normalised
+mantel/test/mantel_0350_normalised
+mantel/test/mantel_0328_normalised
+mantel/test/mantel_0366_normalised
+mantel/test/mantel_0344_normalised
+mantel/test/mantel_0381_normalised
+mantel/test/mantel_0376_normalised
+mantel/test/mantel_0321_normalised
+mantel/test/mantel_0293_normalised
+mantel/test/mantel_0362_normalised
+mantel/test/mantel_0306_normalised
+mantel/test/mantel_0356_normalised
+mantel/test/mantel_0327_normalised
+mantel/test/mantel_0380_normalised
+mantel/test/mantel_0378_normalised
+mantel/test/mantel_0359_normalised
+mantel/test/mantel_0355_normalised
+mantel/test/mantel_0289_normalised
+mantel/test/mantel_0298_normalised
+vase/train/vase_0198_normalised
+vase/train/vase_0104_normalised
+vase/train/vase_0377_normalised
+vase/train/vase_0064_normalised
+vase/train/vase_0405_normalised
+vase/train/vase_0328_normalised
+vase/train/vase_0384_normalised
+vase/train/vase_0299_normalised
+vase/train/vase_0375_normalised
+vase/train/vase_0036_normalised
+vase/train/vase_0286_normalised
+vase/train/vase_0325_normalised
+vase/train/vase_0181_normalised
+vase/train/vase_0318_normalised
+vase/train/vase_0100_normalised
+vase/train/vase_0461_normalised
+vase/train/vase_0334_normalised
+vase/train/vase_0057_normalised
+vase/train/vase_0374_normalised
+vase/train/vase_0469_normalised
+vase/train/vase_0454_normalised
+vase/train/vase_0090_normalised
+vase/train/vase_0209_normalised
+vase/train/vase_0452_normalised
+vase/train/vase_0330_normalised
+vase/train/vase_0457_normalised
+vase/train/vase_0004_normalised
+vase/train/vase_0133_normalised
+vase/train/vase_0431_normalised
+vase/train/vase_0031_normalised
+vase/train/vase_0037_normalised
+vase/train/vase_0034_normalised
+vase/train/vase_0308_normalised
+vase/train/vase_0368_normalised
+vase/train/vase_0144_normalised
+vase/train/vase_0310_normalised
+vase/train/vase_0053_normalised
+vase/train/vase_0391_normalised
+vase/train/vase_0143_normalised
+vase/train/vase_0432_normalised
+vase/train/vase_0108_normalised
+vase/train/vase_0091_normalised
+vase/train/vase_0386_normalised
+vase/train/vase_0438_normalised
+vase/train/vase_0361_normalised
+vase/train/vase_0217_normalised
+vase/train/vase_0351_normalised
+vase/train/vase_0146_normalised
+vase/train/vase_0092_normalised
+vase/train/vase_0124_normalised
+vase/train/vase_0178_normalised
+vase/train/vase_0215_normalised
+vase/train/vase_0032_normalised
+vase/train/vase_0392_normalised
+vase/train/vase_0060_normalised
+vase/train/vase_0383_normalised
+vase/train/vase_0376_normalised
+vase/train/vase_0131_normalised
+vase/train/vase_0419_normalised
+vase/train/vase_0190_normalised
+vase/train/vase_0281_normalised
+vase/train/vase_0398_normalised
+vase/train/vase_0362_normalised
+vase/train/vase_0357_normalised
+vase/train/vase_0358_normalised
+vase/train/vase_0474_normalised
+vase/train/vase_0331_normalised
+vase/train/vase_0121_normalised
+vase/train/vase_0232_normalised
+vase/train/vase_0022_normalised
+vase/train/vase_0068_normalised
+vase/train/vase_0157_normalised
+vase/train/vase_0088_normalised
+vase/train/vase_0338_normalised
+vase/train/vase_0254_normalised
+vase/train/vase_0188_normalised
+vase/train/vase_0109_normalised
+vase/train/vase_0071_normalised
+vase/train/vase_0026_normalised
+vase/train/vase_0235_normalised
+vase/train/vase_0213_normalised
+vase/train/vase_0344_normalised
+vase/train/vase_0139_normalised
+vase/train/vase_0055_normalised
+vase/train/vase_0447_normalised
+vase/train/vase_0283_normalised
+vase/train/vase_0291_normalised
+vase/train/vase_0424_normalised
+vase/train/vase_0007_normalised
+vase/train/vase_0009_normalised
+vase/train/vase_0025_normalised
+vase/train/vase_0394_normalised
+vase/train/vase_0095_normalised
+vase/train/vase_0161_normalised
+vase/train/vase_0065_normalised
+vase/train/vase_0173_normalised
+vase/train/vase_0113_normalised
+vase/train/vase_0341_normalised
+vase/train/vase_0002_normalised
+vase/train/vase_0409_normalised
+vase/train/vase_0335_normalised
+vase/train/vase_0048_normalised
+vase/train/vase_0446_normalised
+vase/train/vase_0155_normalised
+vase/train/vase_0411_normalised
+vase/train/vase_0142_normalised
+vase/train/vase_0029_normalised
+vase/train/vase_0140_normalised
+vase/train/vase_0244_normalised
+vase/train/vase_0003_normalised
+vase/train/vase_0249_normalised
+vase/train/vase_0099_normalised
+vase/train/vase_0404_normalised
+vase/train/vase_0159_normalised
+vase/train/vase_0132_normalised
+vase/train/vase_0280_normalised
+vase/train/vase_0129_normalised
+vase/train/vase_0225_normalised
+vase/train/vase_0282_normalised
+vase/train/vase_0214_normalised
+vase/train/vase_0207_normalised
+vase/train/vase_0073_normalised
+vase/train/vase_0083_normalised
+vase/train/vase_0464_normalised
+vase/train/vase_0472_normalised
+vase/train/vase_0201_normalised
+vase/train/vase_0303_normalised
+vase/train/vase_0006_normalised
+vase/train/vase_0082_normalised
+vase/train/vase_0302_normalised
+vase/train/vase_0089_normalised
+vase/train/vase_0018_normalised
+vase/train/vase_0439_normalised
+vase/train/vase_0059_normalised
+vase/train/vase_0112_normalised
+vase/train/vase_0203_normalised
+vase/train/vase_0301_normalised
+vase/train/vase_0345_normalised
+vase/train/vase_0183_normalised
+vase/train/vase_0047_normalised
+vase/train/vase_0240_normalised
+vase/train/vase_0196_normalised
+vase/train/vase_0321_normalised
+vase/train/vase_0125_normalised
+vase/train/vase_0243_normalised
+vase/train/vase_0237_normalised
+vase/train/vase_0016_normalised
+vase/train/vase_0193_normalised
+vase/train/vase_0260_normalised
+vase/train/vase_0304_normalised
+vase/train/vase_0288_normalised
+vase/train/vase_0128_normalised
+vase/train/vase_0256_normalised
+vase/train/vase_0365_normalised
+vase/train/vase_0150_normalised
+vase/train/vase_0422_normalised
+vase/train/vase_0255_normalised
+vase/train/vase_0218_normalised
+vase/train/vase_0040_normalised
+vase/train/vase_0312_normalised
+vase/train/vase_0176_normalised
+vase/train/vase_0185_normalised
+vase/train/vase_0168_normalised
+vase/train/vase_0182_normalised
+vase/train/vase_0242_normalised
+vase/train/vase_0363_normalised
+vase/train/vase_0019_normalised
+vase/train/vase_0426_normalised
+vase/train/vase_0101_normalised
+vase/train/vase_0410_normalised
+vase/train/vase_0115_normalised
+vase/train/vase_0028_normalised
+vase/train/vase_0013_normalised
+vase/train/vase_0466_normalised
+vase/train/vase_0440_normalised
+vase/train/vase_0075_normalised
+vase/train/vase_0450_normalised
+vase/train/vase_0154_normalised
+vase/train/vase_0292_normalised
+vase/train/vase_0094_normalised
+vase/train/vase_0449_normalised
+vase/train/vase_0023_normalised
+vase/train/vase_0030_normalised
+vase/train/vase_0223_normalised
+vase/train/vase_0056_normalised
+vase/train/vase_0274_normalised
+vase/train/vase_0180_normalised
+vase/train/vase_0211_normalised
+vase/train/vase_0074_normalised
+vase/train/vase_0306_normalised
+vase/train/vase_0273_normalised
+vase/train/vase_0259_normalised
+vase/train/vase_0020_normalised
+vase/train/vase_0387_normalised
+vase/train/vase_0241_normalised
+vase/train/vase_0434_normalised
+vase/train/vase_0420_normalised
+vase/train/vase_0169_normalised
+vase/train/vase_0197_normalised
+vase/train/vase_0443_normalised
+vase/train/vase_0250_normalised
+vase/train/vase_0228_normalised
+vase/train/vase_0370_normalised
+vase/train/vase_0134_normalised
+vase/train/vase_0277_normalised
+vase/train/vase_0289_normalised
+vase/train/vase_0378_normalised
+vase/train/vase_0295_normalised
+vase/train/vase_0135_normalised
+vase/train/vase_0152_normalised
+vase/train/vase_0336_normalised
+vase/train/vase_0216_normalised
+vase/train/vase_0402_normalised
+vase/train/vase_0147_normalised
+vase/train/vase_0296_normalised
+vase/train/vase_0118_normalised
+vase/train/vase_0268_normalised
+vase/train/vase_0267_normalised
+vase/train/vase_0078_normalised
+vase/train/vase_0421_normalised
+vase/train/vase_0165_normalised
+vase/train/vase_0266_normalised
+vase/train/vase_0110_normalised
+vase/train/vase_0427_normalised
+vase/train/vase_0340_normalised
+vase/train/vase_0212_normalised
+vase/train/vase_0046_normalised
+vase/train/vase_0204_normalised
+vase/train/vase_0412_normalised
+vase/train/vase_0024_normalised
+vase/train/vase_0261_normalised
+vase/train/vase_0322_normalised
+vase/train/vase_0263_normalised
+vase/train/vase_0167_normalised
+vase/train/vase_0415_normalised
+vase/train/vase_0317_normalised
+vase/train/vase_0395_normalised
+vase/train/vase_0151_normalised
+vase/train/vase_0067_normalised
+vase/train/vase_0236_normalised
+vase/train/vase_0248_normalised
+vase/train/vase_0414_normalised
+vase/train/vase_0311_normalised
+vase/train/vase_0435_normalised
+vase/train/vase_0364_normalised
+vase/train/vase_0195_normalised
+vase/train/vase_0329_normalised
+vase/train/vase_0130_normalised
+vase/train/vase_0367_normalised
+vase/train/vase_0272_normalised
+vase/train/vase_0061_normalised
+vase/train/vase_0170_normalised
+vase/train/vase_0298_normalised
+vase/train/vase_0284_normalised
+vase/train/vase_0307_normalised
+vase/train/vase_0471_normalised
+vase/train/vase_0077_normalised
+vase/train/vase_0390_normalised
+vase/train/vase_0111_normalised
+vase/train/vase_0175_normalised
+vase/train/vase_0371_normalised
+vase/train/vase_0136_normalised
+vase/train/vase_0238_normalised
+vase/train/vase_0072_normalised
+vase/train/vase_0166_normalised
+vase/train/vase_0049_normalised
+vase/train/vase_0062_normalised
+vase/train/vase_0418_normalised
+vase/train/vase_0008_normalised
+vase/train/vase_0403_normalised
+vase/train/vase_0275_normalised
+vase/train/vase_0231_normalised
+vase/train/vase_0187_normalised
+vase/train/vase_0206_normalised
+vase/train/vase_0285_normalised
+vase/train/vase_0085_normalised
+vase/train/vase_0096_normalised
+vase/train/vase_0429_normalised
+vase/train/vase_0246_normalised
+vase/train/vase_0337_normalised
+vase/train/vase_0448_normalised
+vase/train/vase_0356_normalised
+vase/train/vase_0359_normalised
+vase/train/vase_0010_normalised
+vase/train/vase_0380_normalised
+vase/train/vase_0117_normalised
+vase/train/vase_0262_normalised
+vase/train/vase_0202_normalised
+vase/train/vase_0051_normalised
+vase/train/vase_0093_normalised
+vase/train/vase_0222_normalised
+vase/train/vase_0313_normalised
+vase/train/vase_0189_normalised
+vase/train/vase_0360_normalised
+vase/train/vase_0297_normalised
+vase/train/vase_0171_normalised
+vase/train/vase_0014_normalised
+vase/train/vase_0320_normalised
+vase/train/vase_0253_normalised
+vase/train/vase_0270_normalised
+vase/train/vase_0063_normalised
+vase/train/vase_0191_normalised
+vase/train/vase_0366_normalised
+vase/train/vase_0417_normalised
+vase/train/vase_0208_normalised
+vase/train/vase_0126_normalised
+vase/train/vase_0323_normalised
+vase/train/vase_0005_normalised
+vase/train/vase_0430_normalised
+vase/train/vase_0388_normalised
+vase/train/vase_0379_normalised
+vase/train/vase_0423_normalised
+vase/train/vase_0319_normalised
+vase/train/vase_0470_normalised
+vase/train/vase_0247_normalised
+vase/train/vase_0199_normalised
+vase/train/vase_0086_normalised
+vase/train/vase_0389_normalised
+vase/train/vase_0343_normalised
+vase/train/vase_0251_normalised
+vase/train/vase_0327_normalised
+vase/train/vase_0070_normalised
+vase/train/vase_0107_normalised
+vase/train/vase_0456_normalised
+vase/train/vase_0172_normalised
+vase/train/vase_0407_normalised
+vase/train/vase_0038_normalised
+vase/train/vase_0332_normalised
+vase/train/vase_0138_normalised
+vase/train/vase_0473_normalised
+vase/train/vase_0265_normalised
+vase/train/vase_0179_normalised
+vase/train/vase_0084_normalised
+vase/train/vase_0314_normalised
+vase/train/vase_0373_normalised
+vase/train/vase_0347_normalised
+vase/train/vase_0052_normalised
+vase/train/vase_0044_normalised
+vase/train/vase_0210_normalised
+vase/train/vase_0354_normalised
+vase/train/vase_0369_normalised
+vase/train/vase_0050_normalised
+vase/train/vase_0186_normalised
+vase/train/vase_0425_normalised
+vase/train/vase_0163_normalised
+vase/train/vase_0433_normalised
+vase/train/vase_0252_normalised
+vase/train/vase_0413_normalised
+vase/train/vase_0416_normalised
+vase/train/vase_0160_normalised
+vase/train/vase_0290_normalised
+vase/train/vase_0475_normalised
+vase/train/vase_0227_normalised
+vase/train/vase_0042_normalised
+vase/train/vase_0045_normalised
+vase/train/vase_0396_normalised
+vase/train/vase_0069_normalised
+vase/train/vase_0462_normalised
+vase/train/vase_0385_normalised
+vase/train/vase_0239_normalised
+vase/train/vase_0453_normalised
+vase/train/vase_0279_normalised
+vase/train/vase_0399_normalised
+vase/train/vase_0401_normalised
+vase/train/vase_0164_normalised
+vase/train/vase_0041_normalised
+vase/train/vase_0153_normalised
+vase/train/vase_0120_normalised
+vase/train/vase_0226_normalised
+vase/train/vase_0428_normalised
+vase/train/vase_0350_normalised
+vase/train/vase_0123_normalised
+vase/train/vase_0349_normalised
+vase/train/vase_0406_normalised
+vase/train/vase_0097_normalised
+vase/train/vase_0141_normalised
+vase/train/vase_0076_normalised
+vase/train/vase_0293_normalised
+vase/train/vase_0194_normalised
+vase/train/vase_0393_normalised
+vase/train/vase_0033_normalised
+vase/train/vase_0264_normalised
+vase/train/vase_0114_normalised
+vase/train/vase_0105_normalised
+vase/train/vase_0116_normalised
+vase/train/vase_0400_normalised
+vase/train/vase_0397_normalised
+vase/train/vase_0408_normalised
+vase/train/vase_0200_normalised
+vase/train/vase_0381_normalised
+vase/train/vase_0087_normalised
+vase/train/vase_0445_normalised
+vase/train/vase_0122_normalised
+vase/train/vase_0305_normalised
+vase/train/vase_0355_normalised
+vase/train/vase_0230_normalised
+vase/train/vase_0326_normalised
+vase/train/vase_0027_normalised
+vase/train/vase_0278_normalised
+vase/train/vase_0177_normalised
+vase/train/vase_0333_normalised
+vase/train/vase_0234_normalised
+vase/train/vase_0066_normalised
+vase/train/vase_0309_normalised
+vase/train/vase_0058_normalised
+vase/train/vase_0459_normalised
+vase/train/vase_0352_normalised
+vase/train/vase_0017_normalised
+vase/train/vase_0012_normalised
+vase/train/vase_0316_normalised
+vase/train/vase_0441_normalised
+vase/train/vase_0346_normalised
+vase/train/vase_0148_normalised
+vase/train/vase_0184_normalised
+vase/train/vase_0382_normalised
+vase/train/vase_0081_normalised
+vase/train/vase_0276_normalised
+vase/train/vase_0103_normalised
+vase/train/vase_0451_normalised
+vase/train/vase_0271_normalised
+vase/train/vase_0098_normalised
+vase/train/vase_0463_normalised
+vase/train/vase_0220_normalised
+vase/train/vase_0233_normalised
+vase/train/vase_0149_normalised
+vase/train/vase_0035_normalised
+vase/train/vase_0245_normalised
+vase/train/vase_0043_normalised
+vase/train/vase_0158_normalised
+vase/train/vase_0015_normalised
+vase/train/vase_0080_normalised
+vase/train/vase_0054_normalised
+vase/train/vase_0039_normalised
+vase/train/vase_0455_normalised
+vase/train/vase_0442_normalised
+vase/train/vase_0221_normalised
+vase/train/vase_0339_normalised
+vase/train/vase_0294_normalised
+vase/train/vase_0145_normalised
+vase/train/vase_0079_normalised
+vase/train/vase_0342_normalised
+vase/train/vase_0219_normalised
+vase/train/vase_0372_normalised
+vase/train/vase_0300_normalised
+vase/train/vase_0162_normalised
+vase/train/vase_0436_normalised
+vase/train/vase_0001_normalised
+vase/train/vase_0437_normalised
+vase/train/vase_0106_normalised
+vase/train/vase_0205_normalised
+vase/train/vase_0021_normalised
+vase/train/vase_0315_normalised
+vase/train/vase_0324_normalised
+vase/train/vase_0258_normalised
+vase/train/vase_0119_normalised
+vase/train/vase_0174_normalised
+vase/train/vase_0127_normalised
+vase/train/vase_0353_normalised
+vase/train/vase_0460_normalised
+vase/train/vase_0444_normalised
+vase/train/vase_0458_normalised
+vase/train/vase_0465_normalised
+vase/train/vase_0229_normalised
+vase/train/vase_0287_normalised
+vase/train/vase_0257_normalised
+vase/train/vase_0156_normalised
+vase/train/vase_0467_normalised
+vase/train/vase_0011_normalised
+vase/train/vase_0269_normalised
+vase/train/vase_0102_normalised
+vase/train/vase_0224_normalised
+vase/train/vase_0348_normalised
+vase/train/vase_0137_normalised
+vase/train/vase_0468_normalised
+vase/train/vase_0192_normalised
+vase/test/vase_0522_normalised
+vase/test/vase_0498_normalised
+vase/test/vase_0536_normalised
+vase/test/vase_0532_normalised
+vase/test/vase_0495_normalised
+vase/test/vase_0548_normalised
+vase/test/vase_0551_normalised
+vase/test/vase_0557_normalised
+vase/test/vase_0567_normalised
+vase/test/vase_0496_normalised
+vase/test/vase_0483_normalised
+vase/test/vase_0513_normalised
+vase/test/vase_0499_normalised
+vase/test/vase_0506_normalised
+vase/test/vase_0540_normalised
+vase/test/vase_0528_normalised
+vase/test/vase_0542_normalised
+vase/test/vase_0569_normalised
+vase/test/vase_0534_normalised
+vase/test/vase_0480_normalised
+vase/test/vase_0510_normalised
+vase/test/vase_0556_normalised
+vase/test/vase_0554_normalised
+vase/test/vase_0501_normalised
+vase/test/vase_0508_normalised
+vase/test/vase_0549_normalised
+vase/test/vase_0547_normalised
+vase/test/vase_0544_normalised
+vase/test/vase_0486_normalised
+vase/test/vase_0565_normalised
+vase/test/vase_0538_normalised
+vase/test/vase_0491_normalised
+vase/test/vase_0481_normalised
+vase/test/vase_0482_normalised
+vase/test/vase_0509_normalised
+vase/test/vase_0511_normalised
+vase/test/vase_0559_normalised
+vase/test/vase_0535_normalised
+vase/test/vase_0516_normalised
+vase/test/vase_0537_normalised
+vase/test/vase_0489_normalised
+vase/test/vase_0566_normalised
+vase/test/vase_0525_normalised
+vase/test/vase_0493_normalised
+vase/test/vase_0477_normalised
+vase/test/vase_0572_normalised
+vase/test/vase_0505_normalised
+vase/test/vase_0517_normalised
+vase/test/vase_0520_normalised
+vase/test/vase_0497_normalised
+vase/test/vase_0503_normalised
+vase/test/vase_0539_normalised
+vase/test/vase_0545_normalised
+vase/test/vase_0563_normalised
+vase/test/vase_0543_normalised
+vase/test/vase_0530_normalised
+vase/test/vase_0526_normalised
+vase/test/vase_0523_normalised
+vase/test/vase_0524_normalised
+vase/test/vase_0570_normalised
+vase/test/vase_0529_normalised
+vase/test/vase_0560_normalised
+vase/test/vase_0507_normalised
+vase/test/vase_0558_normalised
+vase/test/vase_0514_normalised
+vase/test/vase_0552_normalised
+vase/test/vase_0512_normalised
+vase/test/vase_0521_normalised
+vase/test/vase_0574_normalised
+vase/test/vase_0575_normalised
+vase/test/vase_0553_normalised
+vase/test/vase_0571_normalised
+vase/test/vase_0533_normalised
+vase/test/vase_0562_normalised
+vase/test/vase_0564_normalised
+vase/test/vase_0519_normalised
+vase/test/vase_0504_normalised
+vase/test/vase_0541_normalised
+vase/test/vase_0531_normalised
+vase/test/vase_0478_normalised
+vase/test/vase_0550_normalised
+vase/test/vase_0492_normalised
+vase/test/vase_0490_normalised
+vase/test/vase_0476_normalised
+vase/test/vase_0561_normalised
+vase/test/vase_0502_normalised
+vase/test/vase_0546_normalised
+vase/test/vase_0488_normalised
+vase/test/vase_0527_normalised
+vase/test/vase_0555_normalised
+vase/test/vase_0500_normalised
+vase/test/vase_0485_normalised
+vase/test/vase_0515_normalised
+vase/test/vase_0573_normalised
+vase/test/vase_0568_normalised
+vase/test/vase_0479_normalised
+vase/test/vase_0518_normalised
+vase/test/vase_0494_normalised
+vase/test/vase_0484_normalised
+vase/test/vase_0487_normalised
+bowl/train/bowl_0050_normalised
+bowl/train/bowl_0031_normalised
+bowl/train/bowl_0038_normalised
+bowl/train/bowl_0051_normalised
+bowl/train/bowl_0026_normalised
+bowl/train/bowl_0019_normalised
+bowl/train/bowl_0005_normalised
+bowl/train/bowl_0054_normalised
+bowl/train/bowl_0048_normalised
+bowl/train/bowl_0013_normalised
+bowl/train/bowl_0044_normalised
+bowl/train/bowl_0025_normalised
+bowl/train/bowl_0027_normalised
+bowl/train/bowl_0022_normalised
+bowl/train/bowl_0035_normalised
+bowl/train/bowl_0001_normalised
+bowl/train/bowl_0017_normalised
+bowl/train/bowl_0029_normalised
+bowl/train/bowl_0015_normalised
+bowl/train/bowl_0011_normalised
+bowl/train/bowl_0018_normalised
+bowl/train/bowl_0043_normalised
+bowl/train/bowl_0049_normalised
+bowl/train/bowl_0042_normalised
+bowl/train/bowl_0004_normalised
+bowl/train/bowl_0041_normalised
+bowl/train/bowl_0009_normalised
+bowl/train/bowl_0062_normalised
+bowl/train/bowl_0060_normalised
+bowl/train/bowl_0055_normalised
+bowl/train/bowl_0003_normalised
+bowl/train/bowl_0030_normalised
+bowl/train/bowl_0039_normalised
+bowl/train/bowl_0052_normalised
+bowl/train/bowl_0059_normalised
+bowl/train/bowl_0028_normalised
+bowl/train/bowl_0064_normalised
+bowl/train/bowl_0016_normalised
+bowl/train/bowl_0061_normalised
+bowl/train/bowl_0057_normalised
+bowl/train/bowl_0007_normalised
+bowl/train/bowl_0053_normalised
+bowl/train/bowl_0058_normalised
+bowl/train/bowl_0021_normalised
+bowl/train/bowl_0056_normalised
+bowl/train/bowl_0037_normalised
+bowl/train/bowl_0023_normalised
+bowl/train/bowl_0032_normalised
+bowl/train/bowl_0033_normalised
+bowl/train/bowl_0036_normalised
+bowl/train/bowl_0040_normalised
+bowl/train/bowl_0008_normalised
+bowl/train/bowl_0046_normalised
+bowl/train/bowl_0010_normalised
+bowl/train/bowl_0012_normalised
+bowl/train/bowl_0020_normalised
+bowl/train/bowl_0014_normalised
+bowl/train/bowl_0045_normalised
+bowl/train/bowl_0047_normalised
+bowl/train/bowl_0024_normalised
+bowl/train/bowl_0002_normalised
+bowl/train/bowl_0063_normalised
+bowl/train/bowl_0034_normalised
+bowl/train/bowl_0006_normalised
+bowl/test/bowl_0082_normalised
+bowl/test/bowl_0073_normalised
+bowl/test/bowl_0067_normalised
+bowl/test/bowl_0069_normalised
+bowl/test/bowl_0080_normalised
+bowl/test/bowl_0070_normalised
+bowl/test/bowl_0084_normalised
+bowl/test/bowl_0072_normalised
+bowl/test/bowl_0081_normalised
+bowl/test/bowl_0065_normalised
+bowl/test/bowl_0066_normalised
+bowl/test/bowl_0083_normalised
+bowl/test/bowl_0078_normalised
+bowl/test/bowl_0076_normalised
+bowl/test/bowl_0075_normalised
+bowl/test/bowl_0074_normalised
+bowl/test/bowl_0068_normalised
+bowl/test/bowl_0079_normalised
+bowl/test/bowl_0071_normalised
+bowl/test/bowl_0077_normalised
+monitor/train/monitor_0153_normalised
+monitor/train/monitor_0333_normalised
+monitor/train/monitor_0204_normalised
+monitor/train/monitor_0053_normalised
+monitor/train/monitor_0141_normalised
+monitor/train/monitor_0279_normalised
+monitor/train/monitor_0159_normalised
+monitor/train/monitor_0158_normalised
+monitor/train/monitor_0250_normalised
+monitor/train/monitor_0205_normalised
+monitor/train/monitor_0020_normalised
+monitor/train/monitor_0316_normalised
+monitor/train/monitor_0126_normalised
+monitor/train/monitor_0339_normalised
+monitor/train/monitor_0086_normalised
+monitor/train/monitor_0219_normalised
+monitor/train/monitor_0226_normalised
+monitor/train/monitor_0001_normalised
+monitor/train/monitor_0256_normalised
+monitor/train/monitor_0409_normalised
+monitor/train/monitor_0125_normalised
+monitor/train/monitor_0380_normalised
+monitor/train/monitor_0277_normalised
+monitor/train/monitor_0064_normalised
+monitor/train/monitor_0340_normalised
+monitor/train/monitor_0173_normalised
+monitor/train/monitor_0444_normalised
+monitor/train/monitor_0140_normalised
+monitor/train/monitor_0131_normalised
+monitor/train/monitor_0358_normalised
+monitor/train/monitor_0424_normalised
+monitor/train/monitor_0451_normalised
+monitor/train/monitor_0022_normalised
+monitor/train/monitor_0080_normalised
+monitor/train/monitor_0148_normalised
+monitor/train/monitor_0386_normalised
+monitor/train/monitor_0324_normalised
+monitor/train/monitor_0088_normalised
+monitor/train/monitor_0076_normalised
+monitor/train/monitor_0273_normalised
+monitor/train/monitor_0121_normalised
+monitor/train/monitor_0100_normalised
+monitor/train/monitor_0046_normalised
+monitor/train/monitor_0310_normalised
+monitor/train/monitor_0332_normalised
+monitor/train/monitor_0172_normalised
+monitor/train/monitor_0374_normalised
+monitor/train/monitor_0317_normalised
+monitor/train/monitor_0354_normalised
+monitor/train/monitor_0342_normalised
+monitor/train/monitor_0166_normalised
+monitor/train/monitor_0388_normalised
+monitor/train/monitor_0190_normalised
+monitor/train/monitor_0106_normalised
+monitor/train/monitor_0315_normalised
+monitor/train/monitor_0055_normalised
+monitor/train/monitor_0338_normalised
+monitor/train/monitor_0066_normalised
+monitor/train/monitor_0248_normalised
+monitor/train/monitor_0102_normalised
+monitor/train/monitor_0459_normalised
+monitor/train/monitor_0132_normalised
+monitor/train/monitor_0156_normalised
+monitor/train/monitor_0160_normalised
+monitor/train/monitor_0075_normalised
+monitor/train/monitor_0063_normalised
+monitor/train/monitor_0382_normalised
+monitor/train/monitor_0208_normalised
+monitor/train/monitor_0413_normalised
+monitor/train/monitor_0124_normalised
+monitor/train/monitor_0214_normalised
+monitor/train/monitor_0133_normalised
+monitor/train/monitor_0081_normalised
+monitor/train/monitor_0411_normalised
+monitor/train/monitor_0312_normalised
+monitor/train/monitor_0187_normalised
+monitor/train/monitor_0201_normalised
+monitor/train/monitor_0300_normalised
+monitor/train/monitor_0280_normalised
+monitor/train/monitor_0128_normalised
+monitor/train/monitor_0334_normalised
+monitor/train/monitor_0243_normalised
+monitor/train/monitor_0067_normalised
+monitor/train/monitor_0435_normalised
+monitor/train/monitor_0393_normalised
+monitor/train/monitor_0186_normalised
+monitor/train/monitor_0321_normalised
+monitor/train/monitor_0034_normalised
+monitor/train/monitor_0272_normalised
+monitor/train/monitor_0271_normalised
+monitor/train/monitor_0403_normalised
+monitor/train/monitor_0419_normalised
+monitor/train/monitor_0050_normalised
+monitor/train/monitor_0071_normalised
+monitor/train/monitor_0406_normalised
+monitor/train/monitor_0239_normalised
+monitor/train/monitor_0116_normalised
+monitor/train/monitor_0290_normalised
+monitor/train/monitor_0045_normalised
+monitor/train/monitor_0335_normalised
+monitor/train/monitor_0263_normalised
+monitor/train/monitor_0010_normalised
+monitor/train/monitor_0288_normalised
+monitor/train/monitor_0068_normalised
+monitor/train/monitor_0301_normalised
+monitor/train/monitor_0123_normalised
+monitor/train/monitor_0412_normalised
+monitor/train/monitor_0448_normalised
+monitor/train/monitor_0089_normalised
+monitor/train/monitor_0287_normalised
+monitor/train/monitor_0258_normalised
+monitor/train/monitor_0161_normalised
+monitor/train/monitor_0165_normalised
+monitor/train/monitor_0259_normalised
+monitor/train/monitor_0097_normalised
+monitor/train/monitor_0069_normalised
+monitor/train/monitor_0306_normalised
+monitor/train/monitor_0142_normalised
+monitor/train/monitor_0058_normalised
+monitor/train/monitor_0127_normalised
+monitor/train/monitor_0139_normalised
+monitor/train/monitor_0215_normalised
+monitor/train/monitor_0041_normalised
+monitor/train/monitor_0117_normalised
+monitor/train/monitor_0200_normalised
+monitor/train/monitor_0442_normalised
+monitor/train/monitor_0177_normalised
+monitor/train/monitor_0202_normalised
+monitor/train/monitor_0307_normalised
+monitor/train/monitor_0237_normalised
+monitor/train/monitor_0426_normalised
+monitor/train/monitor_0213_normalised
+monitor/train/monitor_0061_normalised
+monitor/train/monitor_0225_normalised
+monitor/train/monitor_0051_normalised
+monitor/train/monitor_0114_normalised
+monitor/train/monitor_0152_normalised
+monitor/train/monitor_0236_normalised
+monitor/train/monitor_0144_normalised
+monitor/train/monitor_0189_normalised
+monitor/train/monitor_0281_normalised
+monitor/train/monitor_0349_normalised
+monitor/train/monitor_0129_normalised
+monitor/train/monitor_0107_normalised
+monitor/train/monitor_0112_normalised
+monitor/train/monitor_0005_normalised
+monitor/train/monitor_0381_normalised
+monitor/train/monitor_0452_normalised
+monitor/train/monitor_0291_normalised
+monitor/train/monitor_0014_normalised
+monitor/train/monitor_0085_normalised
+monitor/train/monitor_0458_normalised
+monitor/train/monitor_0143_normalised
+monitor/train/monitor_0035_normalised
+monitor/train/monitor_0249_normalised
+monitor/train/monitor_0325_normalised
+monitor/train/monitor_0233_normalised
+monitor/train/monitor_0318_normalised
+monitor/train/monitor_0352_normalised
+monitor/train/monitor_0293_normalised
+monitor/train/monitor_0115_normalised
+monitor/train/monitor_0350_normalised
+monitor/train/monitor_0345_normalised
+monitor/train/monitor_0429_normalised
+monitor/train/monitor_0244_normalised
+monitor/train/monitor_0257_normalised
+monitor/train/monitor_0235_normalised
+monitor/train/monitor_0037_normalised
+monitor/train/monitor_0042_normalised
+monitor/train/monitor_0024_normalised
+monitor/train/monitor_0136_normalised
+monitor/train/monitor_0163_normalised
+monitor/train/monitor_0157_normalised
+monitor/train/monitor_0240_normalised
+monitor/train/monitor_0169_normalised
+monitor/train/monitor_0004_normalised
+monitor/train/monitor_0427_normalised
+monitor/train/monitor_0049_normalised
+monitor/train/monitor_0191_normalised
+monitor/train/monitor_0074_normalised
+monitor/train/monitor_0054_normalised
+monitor/train/monitor_0432_normalised
+monitor/train/monitor_0445_normalised
+monitor/train/monitor_0031_normalised
+monitor/train/monitor_0434_normalised
+monitor/train/monitor_0286_normalised
+monitor/train/monitor_0164_normalised
+monitor/train/monitor_0137_normalised
+monitor/train/monitor_0180_normalised
+monitor/train/monitor_0439_normalised
+monitor/train/monitor_0373_normalised
+monitor/train/monitor_0234_normalised
+monitor/train/monitor_0360_normalised
+monitor/train/monitor_0062_normalised
+monitor/train/monitor_0328_normalised
+monitor/train/monitor_0383_normalised
+monitor/train/monitor_0405_normalised
+monitor/train/monitor_0028_normalised
+monitor/train/monitor_0443_normalised
+monitor/train/monitor_0422_normalised
+monitor/train/monitor_0021_normalised
+monitor/train/monitor_0449_normalised
+monitor/train/monitor_0267_normalised
+monitor/train/monitor_0260_normalised
+monitor/train/monitor_0297_normalised
+monitor/train/monitor_0430_normalised
+monitor/train/monitor_0111_normalised
+monitor/train/monitor_0105_normalised
+monitor/train/monitor_0222_normalised
+monitor/train/monitor_0029_normalised
+monitor/train/monitor_0417_normalised
+monitor/train/monitor_0149_normalised
+monitor/train/monitor_0103_normalised
+monitor/train/monitor_0032_normalised
+monitor/train/monitor_0266_normalised
+monitor/train/monitor_0145_normalised
+monitor/train/monitor_0018_normalised
+monitor/train/monitor_0090_normalised
+monitor/train/monitor_0351_normalised
+monitor/train/monitor_0241_normalised
+monitor/train/monitor_0418_normalised
+monitor/train/monitor_0231_normalised
+monitor/train/monitor_0199_normalised
+monitor/train/monitor_0092_normalised
+monitor/train/monitor_0387_normalised
+monitor/train/monitor_0450_normalised
+monitor/train/monitor_0059_normalised
+monitor/train/monitor_0461_normalised
+monitor/train/monitor_0394_normalised
+monitor/train/monitor_0320_normalised
+monitor/train/monitor_0094_normalised
+monitor/train/monitor_0275_normalised
+monitor/train/monitor_0282_normalised
+monitor/train/monitor_0184_normalised
+monitor/train/monitor_0456_normalised
+monitor/train/monitor_0247_normalised
+monitor/train/monitor_0463_normalised
+monitor/train/monitor_0082_normalised
+monitor/train/monitor_0436_normalised
+monitor/train/monitor_0308_normalised
+monitor/train/monitor_0192_normalised
+monitor/train/monitor_0401_normalised
+monitor/train/monitor_0361_normalised
+monitor/train/monitor_0416_normalised
+monitor/train/monitor_0353_normalised
+monitor/train/monitor_0400_normalised
+monitor/train/monitor_0385_normalised
+monitor/train/monitor_0396_normalised
+monitor/train/monitor_0438_normalised
+monitor/train/monitor_0437_normalised
+monitor/train/monitor_0196_normalised
+monitor/train/monitor_0220_normalised
+monitor/train/monitor_0255_normalised
+monitor/train/monitor_0254_normalised
+monitor/train/monitor_0043_normalised
+monitor/train/monitor_0295_normalised
+monitor/train/monitor_0276_normalised
+monitor/train/monitor_0211_normalised
+monitor/train/monitor_0404_normalised
+monitor/train/monitor_0346_normalised
+monitor/train/monitor_0232_normalised
+monitor/train/monitor_0091_normalised
+monitor/train/monitor_0217_normalised
+monitor/train/monitor_0392_normalised
+monitor/train/monitor_0078_normalised
+monitor/train/monitor_0284_normalised
+monitor/train/monitor_0195_normalised
+monitor/train/monitor_0104_normalised
+monitor/train/monitor_0384_normalised
+monitor/train/monitor_0083_normalised
+monitor/train/monitor_0056_normalised
+monitor/train/monitor_0303_normalised
+monitor/train/monitor_0407_normalised
+monitor/train/monitor_0134_normalised
+monitor/train/monitor_0040_normalised
+monitor/train/monitor_0030_normalised
+monitor/train/monitor_0012_normalised
+monitor/train/monitor_0329_normalised
+monitor/train/monitor_0414_normalised
+monitor/train/monitor_0181_normalised
+monitor/train/monitor_0278_normalised
+monitor/train/monitor_0355_normalised
+monitor/train/monitor_0228_normalised
+monitor/train/monitor_0343_normalised
+monitor/train/monitor_0326_normalised
+monitor/train/monitor_0203_normalised
+monitor/train/monitor_0313_normalised
+monitor/train/monitor_0065_normalised
+monitor/train/monitor_0370_normalised
+monitor/train/monitor_0130_normalised
+monitor/train/monitor_0299_normalised
+monitor/train/monitor_0175_normalised
+monitor/train/monitor_0221_normalised
+monitor/train/monitor_0268_normalised
+monitor/train/monitor_0072_normalised
+monitor/train/monitor_0344_normalised
+monitor/train/monitor_0229_normalised
+monitor/train/monitor_0264_normalised
+monitor/train/monitor_0147_normalised
+monitor/train/monitor_0011_normalised
+monitor/train/monitor_0242_normalised
+monitor/train/monitor_0391_normalised
+monitor/train/monitor_0162_normalised
+monitor/train/monitor_0212_normalised
+monitor/train/monitor_0182_normalised
+monitor/train/monitor_0033_normalised
+monitor/train/monitor_0441_normalised
+monitor/train/monitor_0410_normalised
+monitor/train/monitor_0025_normalised
+monitor/train/monitor_0003_normalised
+monitor/train/monitor_0425_normalised
+monitor/train/monitor_0305_normalised
+monitor/train/monitor_0099_normalised
+monitor/train/monitor_0178_normalised
+monitor/train/monitor_0108_normalised
+monitor/train/monitor_0095_normalised
+monitor/train/monitor_0079_normalised
+monitor/train/monitor_0357_normalised
+monitor/train/monitor_0398_normalised
+monitor/train/monitor_0183_normalised
+monitor/train/monitor_0465_normalised
+monitor/train/monitor_0216_normalised
+monitor/train/monitor_0423_normalised
+monitor/train/monitor_0209_normalised
+monitor/train/monitor_0375_normalised
+monitor/train/monitor_0016_normalised
+monitor/train/monitor_0348_normalised
+monitor/train/monitor_0265_normalised
+monitor/train/monitor_0337_normalised
+monitor/train/monitor_0397_normalised
+monitor/train/monitor_0341_normalised
+monitor/train/monitor_0019_normalised
+monitor/train/monitor_0188_normalised
+monitor/train/monitor_0390_normalised
+monitor/train/monitor_0365_normalised
+monitor/train/monitor_0057_normalised
+monitor/train/monitor_0372_normalised
+monitor/train/monitor_0283_normalised
+monitor/train/monitor_0210_normalised
+monitor/train/monitor_0376_normalised
+monitor/train/monitor_0093_normalised
+monitor/train/monitor_0378_normalised
+monitor/train/monitor_0023_normalised
+monitor/train/monitor_0262_normalised
+monitor/train/monitor_0302_normalised
+monitor/train/monitor_0098_normalised
+monitor/train/monitor_0431_normalised
+monitor/train/monitor_0389_normalised
+monitor/train/monitor_0207_normalised
+monitor/train/monitor_0120_normalised
+monitor/train/monitor_0013_normalised
+monitor/train/monitor_0176_normalised
+monitor/train/monitor_0377_normalised
+monitor/train/monitor_0296_normalised
+monitor/train/monitor_0138_normalised
+monitor/train/monitor_0185_normalised
+monitor/train/monitor_0002_normalised
+monitor/train/monitor_0027_normalised
+monitor/train/monitor_0322_normalised
+monitor/train/monitor_0292_normalised
+monitor/train/monitor_0110_normalised
+monitor/train/monitor_0109_normalised
+monitor/train/monitor_0174_normalised
+monitor/train/monitor_0238_normalised
+monitor/train/monitor_0285_normalised
+monitor/train/monitor_0330_normalised
+monitor/train/monitor_0052_normalised
+monitor/train/monitor_0227_normalised
+monitor/train/monitor_0015_normalised
+monitor/train/monitor_0077_normalised
+monitor/train/monitor_0245_normalised
+monitor/train/monitor_0289_normalised
+monitor/train/monitor_0167_normalised
+monitor/train/monitor_0269_normalised
+monitor/train/monitor_0362_normalised
+monitor/train/monitor_0155_normalised
+monitor/train/monitor_0154_normalised
+monitor/train/monitor_0359_normalised
+monitor/train/monitor_0457_normalised
+monitor/train/monitor_0428_normalised
+monitor/train/monitor_0179_normalised
+monitor/train/monitor_0415_normalised
+monitor/train/monitor_0347_normalised
+monitor/train/monitor_0395_normalised
+monitor/train/monitor_0399_normalised
+monitor/train/monitor_0170_normalised
+monitor/train/monitor_0206_normalised
+monitor/train/monitor_0270_normalised
+monitor/train/monitor_0460_normalised
+monitor/train/monitor_0084_normalised
+monitor/train/monitor_0274_normalised
+monitor/train/monitor_0446_normalised
+monitor/train/monitor_0151_normalised
+monitor/train/monitor_0369_normalised
+monitor/train/monitor_0294_normalised
+monitor/train/monitor_0252_normalised
+monitor/train/monitor_0440_normalised
+monitor/train/monitor_0168_normalised
+monitor/train/monitor_0246_normalised
+monitor/train/monitor_0314_normalised
+monitor/train/monitor_0309_normalised
+monitor/train/monitor_0038_normalised
+monitor/train/monitor_0367_normalised
+monitor/train/monitor_0454_normalised
+monitor/train/monitor_0366_normalised
+monitor/train/monitor_0146_normalised
+monitor/train/monitor_0113_normalised
+monitor/train/monitor_0135_normalised
+monitor/train/monitor_0253_normalised
+monitor/train/monitor_0371_normalised
+monitor/train/monitor_0224_normalised
+monitor/train/monitor_0311_normalised
+monitor/train/monitor_0363_normalised
+monitor/train/monitor_0402_normalised
+monitor/train/monitor_0039_normalised
+monitor/train/monitor_0251_normalised
+monitor/train/monitor_0453_normalised
+monitor/train/monitor_0304_normalised
+monitor/train/monitor_0319_normalised
+monitor/train/monitor_0197_normalised
+monitor/train/monitor_0017_normalised
+monitor/train/monitor_0408_normalised
+monitor/train/monitor_0087_normalised
+monitor/train/monitor_0048_normalised
+monitor/train/monitor_0026_normalised
+monitor/train/monitor_0462_normalised
+monitor/train/monitor_0331_normalised
+monitor/train/monitor_0101_normalised
+monitor/train/monitor_0070_normalised
+monitor/train/monitor_0218_normalised
+monitor/train/monitor_0364_normalised
+monitor/train/monitor_0150_normalised
+monitor/train/monitor_0096_normalised
+monitor/train/monitor_0421_normalised
+monitor/train/monitor_0194_normalised
+monitor/train/monitor_0261_normalised
+monitor/train/monitor_0455_normalised
+monitor/train/monitor_0230_normalised
+monitor/train/monitor_0447_normalised
+monitor/train/monitor_0122_normalised
+monitor/train/monitor_0223_normalised
+monitor/train/monitor_0379_normalised
+monitor/train/monitor_0036_normalised
+monitor/train/monitor_0006_normalised
+monitor/train/monitor_0171_normalised
+monitor/train/monitor_0044_normalised
+monitor/train/monitor_0298_normalised
+monitor/train/monitor_0464_normalised
+monitor/train/monitor_0336_normalised
+monitor/train/monitor_0007_normalised
+monitor/train/monitor_0356_normalised
+monitor/train/monitor_0118_normalised
+monitor/train/monitor_0420_normalised
+monitor/train/monitor_0433_normalised
+monitor/train/monitor_0009_normalised
+monitor/train/monitor_0047_normalised
+monitor/train/monitor_0323_normalised
+monitor/train/monitor_0193_normalised
+monitor/train/monitor_0060_normalised
+monitor/train/monitor_0198_normalised
+monitor/train/monitor_0368_normalised
+monitor/train/monitor_0327_normalised
+monitor/train/monitor_0073_normalised
+monitor/train/monitor_0119_normalised
+monitor/train/monitor_0008_normalised
+monitor/test/monitor_0536_normalised
+monitor/test/monitor_0506_normalised
+monitor/test/monitor_0559_normalised
+monitor/test/monitor_0542_normalised
+monitor/test/monitor_0513_normalised
+monitor/test/monitor_0473_normalised
+monitor/test/monitor_0550_normalised
+monitor/test/monitor_0512_normalised
+monitor/test/monitor_0507_normalised
+monitor/test/monitor_0498_normalised
+monitor/test/monitor_0471_normalised
+monitor/test/monitor_0477_normalised
+monitor/test/monitor_0468_normalised
+monitor/test/monitor_0546_normalised
+monitor/test/monitor_0472_normalised
+monitor/test/monitor_0499_normalised
+monitor/test/monitor_0562_normalised
+monitor/test/monitor_0531_normalised
+monitor/test/monitor_0530_normalised
+monitor/test/monitor_0484_normalised
+monitor/test/monitor_0482_normalised
+monitor/test/monitor_0523_normalised
+monitor/test/monitor_0485_normalised
+monitor/test/monitor_0526_normalised
+monitor/test/monitor_0491_normalised
+monitor/test/monitor_0509_normalised
+monitor/test/monitor_0493_normalised
+monitor/test/monitor_0552_normalised
+monitor/test/monitor_0527_normalised
+monitor/test/monitor_0556_normalised
+monitor/test/monitor_0502_normalised
+monitor/test/monitor_0554_normalised
+monitor/test/monitor_0541_normalised
+monitor/test/monitor_0483_normalised
+monitor/test/monitor_0545_normalised
+monitor/test/monitor_0497_normalised
+monitor/test/monitor_0480_normalised
+monitor/test/monitor_0489_normalised
+monitor/test/monitor_0476_normalised
+monitor/test/monitor_0516_normalised
+monitor/test/monitor_0486_normalised
+monitor/test/monitor_0514_normalised
+monitor/test/monitor_0520_normalised
+monitor/test/monitor_0470_normalised
+monitor/test/monitor_0558_normalised
+monitor/test/monitor_0535_normalised
+monitor/test/monitor_0495_normalised
+monitor/test/monitor_0519_normalised
+monitor/test/monitor_0511_normalised
+monitor/test/monitor_0565_normalised
+monitor/test/monitor_0518_normalised
+monitor/test/monitor_0543_normalised
+monitor/test/monitor_0479_normalised
+monitor/test/monitor_0492_normalised
+monitor/test/monitor_0553_normalised
+monitor/test/monitor_0525_normalised
+monitor/test/monitor_0533_normalised
+monitor/test/monitor_0515_normalised
+monitor/test/monitor_0538_normalised
+monitor/test/monitor_0517_normalised
+monitor/test/monitor_0487_normalised
+monitor/test/monitor_0474_normalised
+monitor/test/monitor_0557_normalised
+monitor/test/monitor_0528_normalised
+monitor/test/monitor_0547_normalised
+monitor/test/monitor_0510_normalised
+monitor/test/monitor_0539_normalised
+monitor/test/monitor_0551_normalised
+monitor/test/monitor_0540_normalised
+monitor/test/monitor_0503_normalised
+monitor/test/monitor_0488_normalised
+monitor/test/monitor_0500_normalised
+monitor/test/monitor_0544_normalised
+monitor/test/monitor_0537_normalised
+monitor/test/monitor_0501_normalised
+monitor/test/monitor_0564_normalised
+monitor/test/monitor_0522_normalised
+monitor/test/monitor_0521_normalised
+monitor/test/monitor_0548_normalised
+monitor/test/monitor_0549_normalised
+monitor/test/monitor_0475_normalised
+monitor/test/monitor_0504_normalised
+monitor/test/monitor_0524_normalised
+monitor/test/monitor_0529_normalised
+monitor/test/monitor_0469_normalised
+monitor/test/monitor_0467_normalised
+monitor/test/monitor_0560_normalised
+monitor/test/monitor_0490_normalised
+monitor/test/monitor_0505_normalised
+monitor/test/monitor_0494_normalised
+monitor/test/monitor_0466_normalised
+monitor/test/monitor_0478_normalised
+monitor/test/monitor_0496_normalised
+monitor/test/monitor_0561_normalised
+monitor/test/monitor_0481_normalised
+monitor/test/monitor_0508_normalised
+monitor/test/monitor_0555_normalised
+monitor/test/monitor_0534_normalised
+monitor/test/monitor_0532_normalised
+monitor/test/monitor_0563_normalised
+cone/train/cone_0084_normalised
+cone/train/cone_0121_normalised
+cone/train/cone_0017_normalised
+cone/train/cone_0118_normalised
+cone/train/cone_0147_normalised
+cone/train/cone_0162_normalised
+cone/train/cone_0141_normalised
+cone/train/cone_0112_normalised
+cone/train/cone_0167_normalised
+cone/train/cone_0166_normalised
+cone/train/cone_0119_normalised
+cone/train/cone_0128_normalised
+cone/train/cone_0050_normalised
+cone/train/cone_0061_normalised
+cone/train/cone_0014_normalised
+cone/train/cone_0132_normalised
+cone/train/cone_0103_normalised
+cone/train/cone_0148_normalised
+cone/train/cone_0060_normalised
+cone/train/cone_0107_normalised
+cone/train/cone_0042_normalised
+cone/train/cone_0057_normalised
+cone/train/cone_0041_normalised
+cone/train/cone_0037_normalised
+cone/train/cone_0055_normalised
+cone/train/cone_0140_normalised
+cone/train/cone_0159_normalised
+cone/train/cone_0083_normalised
+cone/train/cone_0086_normalised
+cone/train/cone_0163_normalised
+cone/train/cone_0016_normalised
+cone/train/cone_0080_normalised
+cone/train/cone_0040_normalised
+cone/train/cone_0032_normalised
+cone/train/cone_0097_normalised
+cone/train/cone_0105_normalised
+cone/train/cone_0150_normalised
+cone/train/cone_0092_normalised
+cone/train/cone_0048_normalised
+cone/train/cone_0054_normalised
+cone/train/cone_0106_normalised
+cone/train/cone_0018_normalised
+cone/train/cone_0089_normalised
+cone/train/cone_0123_normalised
+cone/train/cone_0051_normalised
+cone/train/cone_0052_normalised
+cone/train/cone_0033_normalised
+cone/train/cone_0088_normalised
+cone/train/cone_0028_normalised
+cone/train/cone_0063_normalised
+cone/train/cone_0143_normalised
+cone/train/cone_0029_normalised
+cone/train/cone_0144_normalised
+cone/train/cone_0049_normalised
+cone/train/cone_0134_normalised
+cone/train/cone_0023_normalised
+cone/train/cone_0066_normalised
+cone/train/cone_0153_normalised
+cone/train/cone_0021_normalised
+cone/train/cone_0013_normalised
+cone/train/cone_0098_normalised
+cone/train/cone_0031_normalised
+cone/train/cone_0120_normalised
+cone/train/cone_0034_normalised
+cone/train/cone_0043_normalised
+cone/train/cone_0056_normalised
+cone/train/cone_0068_normalised
+cone/train/cone_0087_normalised
+cone/train/cone_0161_normalised
+cone/train/cone_0038_normalised
+cone/train/cone_0093_normalised
+cone/train/cone_0067_normalised
+cone/train/cone_0076_normalised
+cone/train/cone_0094_normalised
+cone/train/cone_0136_normalised
+cone/train/cone_0065_normalised
+cone/train/cone_0026_normalised
+cone/train/cone_0126_normalised
+cone/train/cone_0149_normalised
+cone/train/cone_0025_normalised
+cone/train/cone_0122_normalised
+cone/train/cone_0027_normalised
+cone/train/cone_0006_normalised
+cone/train/cone_0130_normalised
+cone/train/cone_0059_normalised
+cone/train/cone_0045_normalised
+cone/train/cone_0079_normalised
+cone/train/cone_0090_normalised
+cone/train/cone_0104_normalised
+cone/train/cone_0078_normalised
+cone/train/cone_0070_normalised
+cone/train/cone_0002_normalised
+cone/train/cone_0010_normalised
+cone/train/cone_0001_normalised
+cone/train/cone_0091_normalised
+cone/train/cone_0146_normalised
+cone/train/cone_0101_normalised
+cone/train/cone_0073_normalised
+cone/train/cone_0155_normalised
+cone/train/cone_0009_normalised
+cone/train/cone_0062_normalised
+cone/train/cone_0137_normalised
+cone/train/cone_0005_normalised
+cone/train/cone_0109_normalised
+cone/train/cone_0110_normalised
+cone/train/cone_0154_normalised
+cone/train/cone_0053_normalised
+cone/train/cone_0003_normalised
+cone/train/cone_0035_normalised
+cone/train/cone_0111_normalised
+cone/train/cone_0139_normalised
+cone/train/cone_0047_normalised
+cone/train/cone_0125_normalised
+cone/train/cone_0058_normalised
+cone/train/cone_0075_normalised
+cone/train/cone_0015_normalised
+cone/train/cone_0072_normalised
+cone/train/cone_0133_normalised
+cone/train/cone_0007_normalised
+cone/train/cone_0165_normalised
+cone/train/cone_0036_normalised
+cone/train/cone_0004_normalised
+cone/train/cone_0164_normalised
+cone/train/cone_0113_normalised
+cone/train/cone_0082_normalised
+cone/train/cone_0030_normalised
+cone/train/cone_0145_normalised
+cone/train/cone_0069_normalised
+cone/train/cone_0129_normalised
+cone/train/cone_0074_normalised
+cone/train/cone_0081_normalised
+cone/train/cone_0135_normalised
+cone/train/cone_0046_normalised
+cone/train/cone_0127_normalised
+cone/train/cone_0100_normalised
+cone/train/cone_0124_normalised
+cone/train/cone_0108_normalised
+cone/train/cone_0039_normalised
+cone/train/cone_0115_normalised
+cone/train/cone_0116_normalised
+cone/train/cone_0102_normalised
+cone/train/cone_0096_normalised
+cone/train/cone_0085_normalised
+cone/train/cone_0008_normalised
+cone/train/cone_0114_normalised
+cone/train/cone_0160_normalised
+cone/train/cone_0019_normalised
+cone/train/cone_0020_normalised
+cone/train/cone_0152_normalised
+cone/train/cone_0117_normalised
+cone/train/cone_0158_normalised
+cone/train/cone_0077_normalised
+cone/train/cone_0131_normalised
+cone/train/cone_0138_normalised
+cone/train/cone_0151_normalised
+cone/train/cone_0012_normalised
+cone/train/cone_0044_normalised
+cone/train/cone_0099_normalised
+cone/train/cone_0022_normalised
+cone/train/cone_0064_normalised
+cone/train/cone_0157_normalised
+cone/train/cone_0011_normalised
+cone/train/cone_0142_normalised
+cone/train/cone_0024_normalised
+cone/train/cone_0095_normalised
+cone/train/cone_0071_normalised
+cone/train/cone_0156_normalised
+cone/test/cone_0178_normalised
+cone/test/cone_0172_normalised
+cone/test/cone_0173_normalised
+cone/test/cone_0171_normalised
+cone/test/cone_0185_normalised
+cone/test/cone_0168_normalised
+cone/test/cone_0186_normalised
+cone/test/cone_0175_normalised
+cone/test/cone_0187_normalised
+cone/test/cone_0174_normalised
+cone/test/cone_0177_normalised
+cone/test/cone_0183_normalised
+cone/test/cone_0179_normalised
+cone/test/cone_0181_normalised
+cone/test/cone_0180_normalised
+cone/test/cone_0176_normalised
+cone/test/cone_0184_normalised
+cone/test/cone_0182_normalised
+cone/test/cone_0169_normalised
+cone/test/cone_0170_normalised
+piano/train/piano_0139_normalised
+piano/train/piano_0122_normalised
+piano/train/piano_0004_normalised
+piano/train/piano_0003_normalised
+piano/train/piano_0115_normalised
+piano/train/piano_0117_normalised
+piano/train/piano_0184_normalised
+piano/train/piano_0062_normalised
+piano/train/piano_0098_normalised
+piano/train/piano_0045_normalised
+piano/train/piano_0221_normalised
+piano/train/piano_0227_normalised
+piano/train/piano_0224_normalised
+piano/train/piano_0130_normalised
+piano/train/piano_0136_normalised
+piano/train/piano_0075_normalised
+piano/train/piano_0015_normalised
+piano/train/piano_0168_normalised
+piano/train/piano_0048_normalised
+piano/train/piano_0101_normalised
+piano/train/piano_0171_normalised
+piano/train/piano_0203_normalised
+piano/train/piano_0135_normalised
+piano/train/piano_0215_normalised
+piano/train/piano_0206_normalised
+piano/train/piano_0167_normalised
+piano/train/piano_0088_normalised
+piano/train/piano_0005_normalised
+piano/train/piano_0137_normalised
+piano/train/piano_0060_normalised
+piano/train/piano_0047_normalised
+piano/train/piano_0163_normalised
+piano/train/piano_0159_normalised
+piano/train/piano_0046_normalised
+piano/train/piano_0185_normalised
+piano/train/piano_0176_normalised
+piano/train/piano_0020_normalised
+piano/train/piano_0077_normalised
+piano/train/piano_0129_normalised
+piano/train/piano_0002_normalised
+piano/train/piano_0140_normalised
+piano/train/piano_0051_normalised
+piano/train/piano_0052_normalised
+piano/train/piano_0170_normalised
+piano/train/piano_0123_normalised
+piano/train/piano_0065_normalised
+piano/train/piano_0126_normalised
+piano/train/piano_0009_normalised
+piano/train/piano_0111_normalised
+piano/train/piano_0012_normalised
+piano/train/piano_0109_normalised
+piano/train/piano_0008_normalised
+piano/train/piano_0090_normalised
+piano/train/piano_0013_normalised
+piano/train/piano_0225_normalised
+piano/train/piano_0084_normalised
+piano/train/piano_0202_normalised
+piano/train/piano_0162_normalised
+piano/train/piano_0076_normalised
+piano/train/piano_0174_normalised
+piano/train/piano_0128_normalised
+piano/train/piano_0106_normalised
+piano/train/piano_0169_normalised
+piano/train/piano_0161_normalised
+piano/train/piano_0016_normalised
+piano/train/piano_0138_normalised
+piano/train/piano_0056_normalised
+piano/train/piano_0131_normalised
+piano/train/piano_0007_normalised
+piano/train/piano_0041_normalised
+piano/train/piano_0053_normalised
+piano/train/piano_0193_normalised
+piano/train/piano_0034_normalised
+piano/train/piano_0160_normalised
+piano/train/piano_0144_normalised
+piano/train/piano_0067_normalised
+piano/train/piano_0023_normalised
+piano/train/piano_0081_normalised
+piano/train/piano_0156_normalised
+piano/train/piano_0042_normalised
+piano/train/piano_0145_normalised
+piano/train/piano_0212_normalised
+piano/train/piano_0173_normalised
+piano/train/piano_0142_normalised
+piano/train/piano_0116_normalised
+piano/train/piano_0022_normalised
+piano/train/piano_0208_normalised
+piano/train/piano_0014_normalised
+piano/train/piano_0229_normalised
+piano/train/piano_0124_normalised
+piano/train/piano_0112_normalised
+piano/train/piano_0228_normalised
+piano/train/piano_0019_normalised
+piano/train/piano_0057_normalised
+piano/train/piano_0149_normalised
+piano/train/piano_0061_normalised
+piano/train/piano_0150_normalised
+piano/train/piano_0165_normalised
+piano/train/piano_0092_normalised
+piano/train/piano_0134_normalised
+piano/train/piano_0172_normalised
+piano/train/piano_0018_normalised
+piano/train/piano_0213_normalised
+piano/train/piano_0113_normalised
+piano/train/piano_0079_normalised
+piano/train/piano_0209_normalised
+piano/train/piano_0082_normalised
+piano/train/piano_0189_normalised
+piano/train/piano_0103_normalised
+piano/train/piano_0100_normalised
+piano/train/piano_0155_normalised
+piano/train/piano_0063_normalised
+piano/train/piano_0073_normalised
+piano/train/piano_0039_normalised
+piano/train/piano_0044_normalised
+piano/train/piano_0205_normalised
+piano/train/piano_0070_normalised
+piano/train/piano_0151_normalised
+piano/train/piano_0217_normalised
+piano/train/piano_0141_normalised
+piano/train/piano_0231_normalised
+piano/train/piano_0078_normalised
+piano/train/piano_0099_normalised
+piano/train/piano_0180_normalised
+piano/train/piano_0119_normalised
+piano/train/piano_0219_normalised
+piano/train/piano_0154_normalised
+piano/train/piano_0210_normalised
+piano/train/piano_0001_normalised
+piano/train/piano_0181_normalised
+piano/train/piano_0083_normalised
+piano/train/piano_0146_normalised
+piano/train/piano_0031_normalised
+piano/train/piano_0199_normalised
+piano/train/piano_0021_normalised
+piano/train/piano_0096_normalised
+piano/train/piano_0069_normalised
+piano/train/piano_0035_normalised
+piano/train/piano_0179_normalised
+piano/train/piano_0214_normalised
+piano/train/piano_0158_normalised
+piano/train/piano_0108_normalised
+piano/train/piano_0166_normalised
+piano/train/piano_0105_normalised
+piano/train/piano_0107_normalised
+piano/train/piano_0094_normalised
+piano/train/piano_0091_normalised
+piano/train/piano_0192_normalised
+piano/train/piano_0133_normalised
+piano/train/piano_0074_normalised
+piano/train/piano_0049_normalised
+piano/train/piano_0072_normalised
+piano/train/piano_0071_normalised
+piano/train/piano_0147_normalised
+piano/train/piano_0029_normalised
+piano/train/piano_0152_normalised
+piano/train/piano_0037_normalised
+piano/train/piano_0043_normalised
+piano/train/piano_0087_normalised
+piano/train/piano_0204_normalised
+piano/train/piano_0207_normalised
+piano/train/piano_0085_normalised
+piano/train/piano_0038_normalised
+piano/train/piano_0095_normalised
+piano/train/piano_0006_normalised
+piano/train/piano_0068_normalised
+piano/train/piano_0177_normalised
+piano/train/piano_0183_normalised
+piano/train/piano_0190_normalised
+piano/train/piano_0157_normalised
+piano/train/piano_0032_normalised
+piano/train/piano_0194_normalised
+piano/train/piano_0050_normalised
+piano/train/piano_0110_normalised
+piano/train/piano_0196_normalised
+piano/train/piano_0195_normalised
+piano/train/piano_0027_normalised
+piano/train/piano_0187_normalised
+piano/train/piano_0223_normalised
+piano/train/piano_0030_normalised
+piano/train/piano_0114_normalised
+piano/train/piano_0127_normalised
+piano/train/piano_0143_normalised
+piano/train/piano_0188_normalised
+piano/train/piano_0200_normalised
+piano/train/piano_0125_normalised
+piano/train/piano_0104_normalised
+piano/train/piano_0132_normalised
+piano/train/piano_0024_normalised
+piano/train/piano_0121_normalised
+piano/train/piano_0080_normalised
+piano/train/piano_0097_normalised
+piano/train/piano_0026_normalised
+piano/train/piano_0040_normalised
+piano/train/piano_0201_normalised
+piano/train/piano_0182_normalised
+piano/train/piano_0175_normalised
+piano/train/piano_0153_normalised
+piano/train/piano_0059_normalised
+piano/train/piano_0120_normalised
+piano/train/piano_0033_normalised
+piano/train/piano_0064_normalised
+piano/train/piano_0066_normalised
+piano/train/piano_0028_normalised
+piano/train/piano_0010_normalised
+piano/train/piano_0178_normalised
+piano/train/piano_0191_normalised
+piano/train/piano_0054_normalised
+piano/train/piano_0102_normalised
+piano/train/piano_0055_normalised
+piano/train/piano_0025_normalised
+piano/train/piano_0220_normalised
+piano/train/piano_0211_normalised
+piano/train/piano_0086_normalised
+piano/train/piano_0148_normalised
+piano/train/piano_0222_normalised
+piano/train/piano_0198_normalised
+piano/train/piano_0230_normalised
+piano/train/piano_0216_normalised
+piano/train/piano_0186_normalised
+piano/train/piano_0118_normalised
+piano/train/piano_0036_normalised
+piano/train/piano_0011_normalised
+piano/train/piano_0089_normalised
+piano/train/piano_0197_normalised
+piano/train/piano_0017_normalised
+piano/train/piano_0093_normalised
+piano/train/piano_0226_normalised
+piano/train/piano_0058_normalised
+piano/train/piano_0218_normalised
+piano/train/piano_0164_normalised
+piano/test/piano_0282_normalised
+piano/test/piano_0328_normalised
+piano/test/piano_0234_normalised
+piano/test/piano_0317_normalised
+piano/test/piano_0295_normalised
+piano/test/piano_0281_normalised
+piano/test/piano_0302_normalised
+piano/test/piano_0258_normalised
+piano/test/piano_0287_normalised
+piano/test/piano_0267_normalised
+piano/test/piano_0293_normalised
+piano/test/piano_0253_normalised
+piano/test/piano_0313_normalised
+piano/test/piano_0326_normalised
+piano/test/piano_0299_normalised
+piano/test/piano_0278_normalised
+piano/test/piano_0312_normalised
+piano/test/piano_0248_normalised
+piano/test/piano_0314_normalised
+piano/test/piano_0305_normalised
+piano/test/piano_0320_normalised
+piano/test/piano_0310_normalised
+piano/test/piano_0241_normalised
+piano/test/piano_0296_normalised
+piano/test/piano_0284_normalised
+piano/test/piano_0270_normalised
+piano/test/piano_0297_normalised
+piano/test/piano_0307_normalised
+piano/test/piano_0294_normalised
+piano/test/piano_0243_normalised
+piano/test/piano_0247_normalised
+piano/test/piano_0285_normalised
+piano/test/piano_0286_normalised
+piano/test/piano_0323_normalised
+piano/test/piano_0275_normalised
+piano/test/piano_0260_normalised
+piano/test/piano_0252_normalised
+piano/test/piano_0259_normalised
+piano/test/piano_0311_normalised
+piano/test/piano_0239_normalised
+piano/test/piano_0290_normalised
+piano/test/piano_0322_normalised
+piano/test/piano_0262_normalised
+piano/test/piano_0318_normalised
+piano/test/piano_0265_normalised
+piano/test/piano_0233_normalised
+piano/test/piano_0232_normalised
+piano/test/piano_0254_normalised
+piano/test/piano_0246_normalised
+piano/test/piano_0292_normalised
+piano/test/piano_0288_normalised
+piano/test/piano_0303_normalised
+piano/test/piano_0263_normalised
+piano/test/piano_0236_normalised
+piano/test/piano_0264_normalised
+piano/test/piano_0316_normalised
+piano/test/piano_0325_normalised
+piano/test/piano_0250_normalised
+piano/test/piano_0283_normalised
+piano/test/piano_0272_normalised
+piano/test/piano_0315_normalised
+piano/test/piano_0245_normalised
+piano/test/piano_0321_normalised
+piano/test/piano_0319_normalised
+piano/test/piano_0255_normalised
+piano/test/piano_0276_normalised
+piano/test/piano_0331_normalised
+piano/test/piano_0309_normalised
+piano/test/piano_0327_normalised
+piano/test/piano_0257_normalised
+piano/test/piano_0242_normalised
+piano/test/piano_0240_normalised
+piano/test/piano_0269_normalised
+piano/test/piano_0268_normalised
+piano/test/piano_0280_normalised
+piano/test/piano_0244_normalised
+piano/test/piano_0289_normalised
+piano/test/piano_0300_normalised
+piano/test/piano_0279_normalised
+piano/test/piano_0251_normalised
+piano/test/piano_0256_normalised
+piano/test/piano_0274_normalised
+piano/test/piano_0249_normalised
+piano/test/piano_0324_normalised
+piano/test/piano_0237_normalised
+piano/test/piano_0261_normalised
+piano/test/piano_0238_normalised
+piano/test/piano_0291_normalised
+piano/test/piano_0330_normalised
+piano/test/piano_0298_normalised
+piano/test/piano_0277_normalised
+piano/test/piano_0271_normalised
+piano/test/piano_0273_normalised
+piano/test/piano_0266_normalised
+piano/test/piano_0304_normalised
+piano/test/piano_0306_normalised
+piano/test/piano_0301_normalised
+piano/test/piano_0308_normalised
+piano/test/piano_0329_normalised
+piano/test/piano_0235_normalised
+keyboard/train/keyboard_0122_normalised
+keyboard/train/keyboard_0144_normalised
+keyboard/train/keyboard_0143_normalised
+keyboard/train/keyboard_0130_normalised
+keyboard/train/keyboard_0071_normalised
+keyboard/train/keyboard_0014_normalised
+keyboard/train/keyboard_0021_normalised
+keyboard/train/keyboard_0125_normalised
+keyboard/train/keyboard_0103_normalised
+keyboard/train/keyboard_0032_normalised
+keyboard/train/keyboard_0061_normalised
+keyboard/train/keyboard_0015_normalised
+keyboard/train/keyboard_0054_normalised
+keyboard/train/keyboard_0006_normalised
+keyboard/train/keyboard_0069_normalised
+keyboard/train/keyboard_0104_normalised
+keyboard/train/keyboard_0093_normalised
+keyboard/train/keyboard_0034_normalised
+keyboard/train/keyboard_0051_normalised
+keyboard/train/keyboard_0057_normalised
+keyboard/train/keyboard_0120_normalised
+keyboard/train/keyboard_0058_normalised
+keyboard/train/keyboard_0074_normalised
+keyboard/train/keyboard_0090_normalised
+keyboard/train/keyboard_0095_normalised
+keyboard/train/keyboard_0113_normalised
+keyboard/train/keyboard_0097_normalised
+keyboard/train/keyboard_0042_normalised
+keyboard/train/keyboard_0003_normalised
+keyboard/train/keyboard_0001_normalised
+keyboard/train/keyboard_0128_normalised
+keyboard/train/keyboard_0108_normalised
+keyboard/train/keyboard_0009_normalised
+keyboard/train/keyboard_0065_normalised
+keyboard/train/keyboard_0138_normalised
+keyboard/train/keyboard_0029_normalised
+keyboard/train/keyboard_0088_normalised
+keyboard/train/keyboard_0127_normalised
+keyboard/train/keyboard_0012_normalised
+keyboard/train/keyboard_0041_normalised
+keyboard/train/keyboard_0081_normalised
+keyboard/train/keyboard_0022_normalised
+keyboard/train/keyboard_0089_normalised
+keyboard/train/keyboard_0115_normalised
+keyboard/train/keyboard_0075_normalised
+keyboard/train/keyboard_0076_normalised
+keyboard/train/keyboard_0017_normalised
+keyboard/train/keyboard_0100_normalised
+keyboard/train/keyboard_0101_normalised
+keyboard/train/keyboard_0141_normalised
+keyboard/train/keyboard_0142_normalised
+keyboard/train/keyboard_0023_normalised
+keyboard/train/keyboard_0027_normalised
+keyboard/train/keyboard_0136_normalised
+keyboard/train/keyboard_0094_normalised
+keyboard/train/keyboard_0039_normalised
+keyboard/train/keyboard_0018_normalised
+keyboard/train/keyboard_0082_normalised
+keyboard/train/keyboard_0020_normalised
+keyboard/train/keyboard_0109_normalised
+keyboard/train/keyboard_0112_normalised
+keyboard/train/keyboard_0077_normalised
+keyboard/train/keyboard_0055_normalised
+keyboard/train/keyboard_0086_normalised
+keyboard/train/keyboard_0121_normalised
+keyboard/train/keyboard_0035_normalised
+keyboard/train/keyboard_0028_normalised
+keyboard/train/keyboard_0053_normalised
+keyboard/train/keyboard_0005_normalised
+keyboard/train/keyboard_0080_normalised
+keyboard/train/keyboard_0126_normalised
+keyboard/train/keyboard_0016_normalised
+keyboard/train/keyboard_0117_normalised
+keyboard/train/keyboard_0132_normalised
+keyboard/train/keyboard_0083_normalised
+keyboard/train/keyboard_0114_normalised
+keyboard/train/keyboard_0040_normalised
+keyboard/train/keyboard_0107_normalised
+keyboard/train/keyboard_0106_normalised
+keyboard/train/keyboard_0131_normalised
+keyboard/train/keyboard_0079_normalised
+keyboard/train/keyboard_0102_normalised
+keyboard/train/keyboard_0073_normalised
+keyboard/train/keyboard_0048_normalised
+keyboard/train/keyboard_0066_normalised
+keyboard/train/keyboard_0026_normalised
+keyboard/train/keyboard_0092_normalised
+keyboard/train/keyboard_0031_normalised
+keyboard/train/keyboard_0099_normalised
+keyboard/train/keyboard_0024_normalised
+keyboard/train/keyboard_0110_normalised
+keyboard/train/keyboard_0011_normalised
+keyboard/train/keyboard_0137_normalised
+keyboard/train/keyboard_0105_normalised
+keyboard/train/keyboard_0134_normalised
+keyboard/train/keyboard_0067_normalised
+keyboard/train/keyboard_0129_normalised
+keyboard/train/keyboard_0052_normalised
+keyboard/train/keyboard_0133_normalised
+keyboard/train/keyboard_0119_normalised
+keyboard/train/keyboard_0004_normalised
+keyboard/train/keyboard_0084_normalised
+keyboard/train/keyboard_0013_normalised
+keyboard/train/keyboard_0118_normalised
+keyboard/train/keyboard_0096_normalised
+keyboard/train/keyboard_0025_normalised
+keyboard/train/keyboard_0038_normalised
+keyboard/train/keyboard_0043_normalised
+keyboard/train/keyboard_0145_normalised
+keyboard/train/keyboard_0068_normalised
+keyboard/train/keyboard_0036_normalised
+keyboard/train/keyboard_0087_normalised
+keyboard/train/keyboard_0008_normalised
+keyboard/train/keyboard_0123_normalised
+keyboard/train/keyboard_0046_normalised
+keyboard/train/keyboard_0030_normalised
+keyboard/train/keyboard_0019_normalised
+keyboard/train/keyboard_0060_normalised
+keyboard/train/keyboard_0072_normalised
+keyboard/train/keyboard_0085_normalised
+keyboard/train/keyboard_0010_normalised
+keyboard/train/keyboard_0135_normalised
+keyboard/train/keyboard_0063_normalised
+keyboard/train/keyboard_0007_normalised
+keyboard/train/keyboard_0056_normalised
+keyboard/train/keyboard_0064_normalised
+keyboard/train/keyboard_0091_normalised
+keyboard/train/keyboard_0033_normalised
+keyboard/train/keyboard_0044_normalised
+keyboard/train/keyboard_0070_normalised
+keyboard/train/keyboard_0111_normalised
+keyboard/train/keyboard_0050_normalised
+keyboard/train/keyboard_0047_normalised
+keyboard/train/keyboard_0140_normalised
+keyboard/train/keyboard_0078_normalised
+keyboard/train/keyboard_0059_normalised
+keyboard/train/keyboard_0139_normalised
+keyboard/train/keyboard_0037_normalised
+keyboard/train/keyboard_0062_normalised
+keyboard/train/keyboard_0116_normalised
+keyboard/train/keyboard_0049_normalised
+keyboard/train/keyboard_0045_normalised
+keyboard/train/keyboard_0124_normalised
+keyboard/train/keyboard_0002_normalised
+keyboard/train/keyboard_0098_normalised
+keyboard/test/keyboard_0158_normalised
+keyboard/test/keyboard_0165_normalised
+keyboard/test/keyboard_0157_normalised
+keyboard/test/keyboard_0160_normalised
+keyboard/test/keyboard_0150_normalised
+keyboard/test/keyboard_0151_normalised
+keyboard/test/keyboard_0163_normalised
+keyboard/test/keyboard_0153_normalised
+keyboard/test/keyboard_0162_normalised
+keyboard/test/keyboard_0152_normalised
+keyboard/test/keyboard_0155_normalised
+keyboard/test/keyboard_0164_normalised
+keyboard/test/keyboard_0146_normalised
+keyboard/test/keyboard_0161_normalised
+keyboard/test/keyboard_0149_normalised
+keyboard/test/keyboard_0159_normalised
+keyboard/test/keyboard_0156_normalised
+keyboard/test/keyboard_0154_normalised
+keyboard/test/keyboard_0147_normalised
+keyboard/test/keyboard_0148_normalised
+guitar/train/guitar_0127_normalised
+guitar/train/guitar_0073_normalised
+guitar/train/guitar_0063_normalised
+guitar/train/guitar_0016_normalised
+guitar/train/guitar_0029_normalised
+guitar/train/guitar_0109_normalised
+guitar/train/guitar_0039_normalised
+guitar/train/guitar_0065_normalised
+guitar/train/guitar_0035_normalised
+guitar/train/guitar_0096_normalised
+guitar/train/guitar_0111_normalised
+guitar/train/guitar_0137_normalised
+guitar/train/guitar_0069_normalised
+guitar/train/guitar_0125_normalised
+guitar/train/guitar_0026_normalised
+guitar/train/guitar_0116_normalised
+guitar/train/guitar_0133_normalised
+guitar/train/guitar_0086_normalised
+guitar/train/guitar_0041_normalised
+guitar/train/guitar_0151_normalised
+guitar/train/guitar_0131_normalised
+guitar/train/guitar_0130_normalised
+guitar/train/guitar_0018_normalised
+guitar/train/guitar_0019_normalised
+guitar/train/guitar_0152_normalised
+guitar/train/guitar_0036_normalised
+guitar/train/guitar_0107_normalised
+guitar/train/guitar_0059_normalised
+guitar/train/guitar_0044_normalised
+guitar/train/guitar_0033_normalised
+guitar/train/guitar_0129_normalised
+guitar/train/guitar_0141_normalised
+guitar/train/guitar_0061_normalised
+guitar/train/guitar_0022_normalised
+guitar/train/guitar_0046_normalised
+guitar/train/guitar_0034_normalised
+guitar/train/guitar_0124_normalised
+guitar/train/guitar_0020_normalised
+guitar/train/guitar_0089_normalised
+guitar/train/guitar_0082_normalised
+guitar/train/guitar_0045_normalised
+guitar/train/guitar_0012_normalised
+guitar/train/guitar_0114_normalised
+guitar/train/guitar_0076_normalised
+guitar/train/guitar_0098_normalised
+guitar/train/guitar_0083_normalised
+guitar/train/guitar_0060_normalised
+guitar/train/guitar_0055_normalised
+guitar/train/guitar_0027_normalised
+guitar/train/guitar_0110_normalised
+guitar/train/guitar_0074_normalised
+guitar/train/guitar_0101_normalised
+guitar/train/guitar_0052_normalised
+guitar/train/guitar_0150_normalised
+guitar/train/guitar_0084_normalised
+guitar/train/guitar_0139_normalised
+guitar/train/guitar_0108_normalised
+guitar/train/guitar_0138_normalised
+guitar/train/guitar_0104_normalised
+guitar/train/guitar_0062_normalised
+guitar/train/guitar_0153_normalised
+guitar/train/guitar_0068_normalised
+guitar/train/guitar_0093_normalised
+guitar/train/guitar_0066_normalised
+guitar/train/guitar_0140_normalised
+guitar/train/guitar_0117_normalised
+guitar/train/guitar_0080_normalised
+guitar/train/guitar_0067_normalised
+guitar/train/guitar_0128_normalised
+guitar/train/guitar_0103_normalised
+guitar/train/guitar_0023_normalised
+guitar/train/guitar_0113_normalised
+guitar/train/guitar_0070_normalised
+guitar/train/guitar_0054_normalised
+guitar/train/guitar_0079_normalised
+guitar/train/guitar_0006_normalised
+guitar/train/guitar_0088_normalised
+guitar/train/guitar_0031_normalised
+guitar/train/guitar_0105_normalised
+guitar/train/guitar_0040_normalised
+guitar/train/guitar_0090_normalised
+guitar/train/guitar_0102_normalised
+guitar/train/guitar_0147_normalised
+guitar/train/guitar_0038_normalised
+guitar/train/guitar_0025_normalised
+guitar/train/guitar_0043_normalised
+guitar/train/guitar_0094_normalised
+guitar/train/guitar_0028_normalised
+guitar/train/guitar_0119_normalised
+guitar/train/guitar_0011_normalised
+guitar/train/guitar_0009_normalised
+guitar/train/guitar_0072_normalised
+guitar/train/guitar_0132_normalised
+guitar/train/guitar_0053_normalised
+guitar/train/guitar_0120_normalised
+guitar/train/guitar_0142_normalised
+guitar/train/guitar_0135_normalised
+guitar/train/guitar_0097_normalised
+guitar/train/guitar_0148_normalised
+guitar/train/guitar_0099_normalised
+guitar/train/guitar_0001_normalised
+guitar/train/guitar_0030_normalised
+guitar/train/guitar_0136_normalised
+guitar/train/guitar_0056_normalised
+guitar/train/guitar_0047_normalised
+guitar/train/guitar_0021_normalised
+guitar/train/guitar_0014_normalised
+guitar/train/guitar_0095_normalised
+guitar/train/guitar_0051_normalised
+guitar/train/guitar_0057_normalised
+guitar/train/guitar_0017_normalised
+guitar/train/guitar_0007_normalised
+guitar/train/guitar_0005_normalised
+guitar/train/guitar_0049_normalised
+guitar/train/guitar_0048_normalised
+guitar/train/guitar_0003_normalised
+guitar/train/guitar_0024_normalised
+guitar/train/guitar_0075_normalised
+guitar/train/guitar_0126_normalised
+guitar/train/guitar_0078_normalised
+guitar/train/guitar_0092_normalised
+guitar/train/guitar_0002_normalised
+guitar/train/guitar_0085_normalised
+guitar/train/guitar_0154_normalised
+guitar/train/guitar_0112_normalised
+guitar/train/guitar_0058_normalised
+guitar/train/guitar_0010_normalised
+guitar/train/guitar_0122_normalised
+guitar/train/guitar_0123_normalised
+guitar/train/guitar_0032_normalised
+guitar/train/guitar_0064_normalised
+guitar/train/guitar_0008_normalised
+guitar/train/guitar_0143_normalised
+guitar/train/guitar_0037_normalised
+guitar/train/guitar_0004_normalised
+guitar/train/guitar_0146_normalised
+guitar/train/guitar_0050_normalised
+guitar/train/guitar_0106_normalised
+guitar/train/guitar_0118_normalised
+guitar/train/guitar_0145_normalised
+guitar/train/guitar_0015_normalised
+guitar/train/guitar_0081_normalised
+guitar/train/guitar_0013_normalised
+guitar/train/guitar_0100_normalised
+guitar/train/guitar_0115_normalised
+guitar/train/guitar_0144_normalised
+guitar/train/guitar_0077_normalised
+guitar/train/guitar_0121_normalised
+guitar/train/guitar_0071_normalised
+guitar/train/guitar_0134_normalised
+guitar/train/guitar_0155_normalised
+guitar/train/guitar_0087_normalised
+guitar/train/guitar_0042_normalised
+guitar/train/guitar_0149_normalised
+guitar/train/guitar_0091_normalised
+guitar/test/guitar_0161_normalised
+guitar/test/guitar_0204_normalised
+guitar/test/guitar_0169_normalised
+guitar/test/guitar_0197_normalised
+guitar/test/guitar_0202_normalised
+guitar/test/guitar_0215_normalised
+guitar/test/guitar_0242_normalised
+guitar/test/guitar_0244_normalised
+guitar/test/guitar_0222_normalised
+guitar/test/guitar_0175_normalised
+guitar/test/guitar_0214_normalised
+guitar/test/guitar_0190_normalised
+guitar/test/guitar_0201_normalised
+guitar/test/guitar_0246_normalised
+guitar/test/guitar_0164_normalised
+guitar/test/guitar_0213_normalised
+guitar/test/guitar_0228_normalised
+guitar/test/guitar_0173_normalised
+guitar/test/guitar_0207_normalised
+guitar/test/guitar_0248_normalised
+guitar/test/guitar_0189_normalised
+guitar/test/guitar_0192_normalised
+guitar/test/guitar_0255_normalised
+guitar/test/guitar_0177_normalised
+guitar/test/guitar_0251_normalised
+guitar/test/guitar_0182_normalised
+guitar/test/guitar_0188_normalised
+guitar/test/guitar_0157_normalised
+guitar/test/guitar_0212_normalised
+guitar/test/guitar_0176_normalised
+guitar/test/guitar_0162_normalised
+guitar/test/guitar_0241_normalised
+guitar/test/guitar_0236_normalised
+guitar/test/guitar_0240_normalised
+guitar/test/guitar_0219_normalised
+guitar/test/guitar_0250_normalised
+guitar/test/guitar_0171_normalised
+guitar/test/guitar_0167_normalised
+guitar/test/guitar_0184_normalised
+guitar/test/guitar_0218_normalised
+guitar/test/guitar_0245_normalised
+guitar/test/guitar_0234_normalised
+guitar/test/guitar_0174_normalised
+guitar/test/guitar_0224_normalised
+guitar/test/guitar_0205_normalised
+guitar/test/guitar_0196_normalised
+guitar/test/guitar_0217_normalised
+guitar/test/guitar_0203_normalised
+guitar/test/guitar_0230_normalised
+guitar/test/guitar_0195_normalised
+guitar/test/guitar_0249_normalised
+guitar/test/guitar_0226_normalised
+guitar/test/guitar_0183_normalised
+guitar/test/guitar_0229_normalised
+guitar/test/guitar_0252_normalised
+guitar/test/guitar_0194_normalised
+guitar/test/guitar_0238_normalised
+guitar/test/guitar_0247_normalised
+guitar/test/guitar_0199_normalised
+guitar/test/guitar_0227_normalised
+guitar/test/guitar_0209_normalised
+guitar/test/guitar_0186_normalised
+guitar/test/guitar_0216_normalised
+guitar/test/guitar_0159_normalised
+guitar/test/guitar_0200_normalised
+guitar/test/guitar_0232_normalised
+guitar/test/guitar_0172_normalised
+guitar/test/guitar_0233_normalised
+guitar/test/guitar_0163_normalised
+guitar/test/guitar_0225_normalised
+guitar/test/guitar_0231_normalised
+guitar/test/guitar_0243_normalised
+guitar/test/guitar_0170_normalised
+guitar/test/guitar_0156_normalised
+guitar/test/guitar_0220_normalised
+guitar/test/guitar_0179_normalised
+guitar/test/guitar_0239_normalised
+guitar/test/guitar_0191_normalised
+guitar/test/guitar_0254_normalised
+guitar/test/guitar_0168_normalised
+guitar/test/guitar_0198_normalised
+guitar/test/guitar_0158_normalised
+guitar/test/guitar_0206_normalised
+guitar/test/guitar_0210_normalised
+guitar/test/guitar_0208_normalised
+guitar/test/guitar_0160_normalised
+guitar/test/guitar_0178_normalised
+guitar/test/guitar_0193_normalised
+guitar/test/guitar_0185_normalised
+guitar/test/guitar_0165_normalised
+guitar/test/guitar_0221_normalised
+guitar/test/guitar_0235_normalised
+guitar/test/guitar_0223_normalised
+guitar/test/guitar_0253_normalised
+guitar/test/guitar_0187_normalised
+guitar/test/guitar_0180_normalised
+guitar/test/guitar_0181_normalised
+guitar/test/guitar_0211_normalised
+guitar/test/guitar_0166_normalised
+guitar/test/guitar_0237_normalised
+night_stand/train/night_stand_0163_normalised
+night_stand/train/night_stand_0065_normalised
+night_stand/train/night_stand_0069_normalised
+night_stand/train/night_stand_0091_normalised
+night_stand/train/night_stand_0067_normalised
+night_stand/train/night_stand_0096_normalised
+night_stand/train/night_stand_0129_normalised
+night_stand/train/night_stand_0119_normalised
+night_stand/train/night_stand_0095_normalised
+night_stand/train/night_stand_0099_normalised
+night_stand/train/night_stand_0110_normalised
+night_stand/train/night_stand_0032_normalised
+night_stand/train/night_stand_0183_normalised
+night_stand/train/night_stand_0117_normalised
+night_stand/train/night_stand_0134_normalised
+night_stand/train/night_stand_0013_normalised
+night_stand/train/night_stand_0145_normalised
+night_stand/train/night_stand_0061_normalised
+night_stand/train/night_stand_0177_normalised
+night_stand/train/night_stand_0189_normalised
+night_stand/train/night_stand_0175_normalised
+night_stand/train/night_stand_0044_normalised
+night_stand/train/night_stand_0004_normalised
+night_stand/train/night_stand_0072_normalised
+night_stand/train/night_stand_0015_normalised
+night_stand/train/night_stand_0098_normalised
+night_stand/train/night_stand_0086_normalised
+night_stand/train/night_stand_0042_normalised
+night_stand/train/night_stand_0041_normalised
+night_stand/train/night_stand_0146_normalised
+night_stand/train/night_stand_0006_normalised
+night_stand/train/night_stand_0008_normalised
+night_stand/train/night_stand_0046_normalised
+night_stand/train/night_stand_0023_normalised
+night_stand/train/night_stand_0172_normalised
+night_stand/train/night_stand_0026_normalised
+night_stand/train/night_stand_0007_normalised
+night_stand/train/night_stand_0150_normalised
+night_stand/train/night_stand_0100_normalised
+night_stand/train/night_stand_0194_normalised
+night_stand/train/night_stand_0155_normalised
+night_stand/train/night_stand_0076_normalised
+night_stand/train/night_stand_0154_normalised
+night_stand/train/night_stand_0143_normalised
+night_stand/train/night_stand_0149_normalised
+night_stand/train/night_stand_0003_normalised
+night_stand/train/night_stand_0055_normalised
+night_stand/train/night_stand_0137_normalised
+night_stand/train/night_stand_0171_normalised
+night_stand/train/night_stand_0123_normalised
+night_stand/train/night_stand_0002_normalised
+night_stand/train/night_stand_0071_normalised
+night_stand/train/night_stand_0092_normalised
+night_stand/train/night_stand_0011_normalised
+night_stand/train/night_stand_0084_normalised
+night_stand/train/night_stand_0197_normalised
+night_stand/train/night_stand_0118_normalised
+night_stand/train/night_stand_0130_normalised
+night_stand/train/night_stand_0187_normalised
+night_stand/train/night_stand_0186_normalised
+night_stand/train/night_stand_0038_normalised
+night_stand/train/night_stand_0153_normalised
+night_stand/train/night_stand_0120_normalised
+night_stand/train/night_stand_0102_normalised
+night_stand/train/night_stand_0126_normalised
+night_stand/train/night_stand_0020_normalised
+night_stand/train/night_stand_0115_normalised
+night_stand/train/night_stand_0090_normalised
+night_stand/train/night_stand_0162_normalised
+night_stand/train/night_stand_0089_normalised
+night_stand/train/night_stand_0063_normalised
+night_stand/train/night_stand_0082_normalised
+night_stand/train/night_stand_0027_normalised
+night_stand/train/night_stand_0124_normalised
+night_stand/train/night_stand_0018_normalised
+night_stand/train/night_stand_0059_normalised
+night_stand/train/night_stand_0048_normalised
+night_stand/train/night_stand_0159_normalised
+night_stand/train/night_stand_0093_normalised
+night_stand/train/night_stand_0019_normalised
+night_stand/train/night_stand_0022_normalised
+night_stand/train/night_stand_0104_normalised
+night_stand/train/night_stand_0085_normalised
+night_stand/train/night_stand_0035_normalised
+night_stand/train/night_stand_0168_normalised
+night_stand/train/night_stand_0111_normalised
+night_stand/train/night_stand_0128_normalised
+night_stand/train/night_stand_0049_normalised
+night_stand/train/night_stand_0152_normalised
+night_stand/train/night_stand_0079_normalised
+night_stand/train/night_stand_0057_normalised
+night_stand/train/night_stand_0005_normalised
+night_stand/train/night_stand_0198_normalised
+night_stand/train/night_stand_0017_normalised
+night_stand/train/night_stand_0188_normalised
+night_stand/train/night_stand_0101_normalised
+night_stand/train/night_stand_0068_normalised
+night_stand/train/night_stand_0160_normalised
+night_stand/train/night_stand_0073_normalised
+night_stand/train/night_stand_0029_normalised
+night_stand/train/night_stand_0052_normalised
+night_stand/train/night_stand_0108_normalised
+night_stand/train/night_stand_0151_normalised
+night_stand/train/night_stand_0028_normalised
+night_stand/train/night_stand_0121_normalised
+night_stand/train/night_stand_0136_normalised
+night_stand/train/night_stand_0107_normalised
+night_stand/train/night_stand_0058_normalised
+night_stand/train/night_stand_0097_normalised
+night_stand/train/night_stand_0165_normalised
+night_stand/train/night_stand_0167_normalised
+night_stand/train/night_stand_0060_normalised
+night_stand/train/night_stand_0050_normalised
+night_stand/train/night_stand_0181_normalised
+night_stand/train/night_stand_0012_normalised
+night_stand/train/night_stand_0056_normalised
+night_stand/train/night_stand_0087_normalised
+night_stand/train/night_stand_0192_normalised
+night_stand/train/night_stand_0105_normalised
+night_stand/train/night_stand_0034_normalised
+night_stand/train/night_stand_0156_normalised
+night_stand/train/night_stand_0021_normalised
+night_stand/train/night_stand_0040_normalised
+night_stand/train/night_stand_0081_normalised
+night_stand/train/night_stand_0064_normalised
+night_stand/train/night_stand_0031_normalised
+night_stand/train/night_stand_0088_normalised
+night_stand/train/night_stand_0190_normalised
+night_stand/train/night_stand_0033_normalised
+night_stand/train/night_stand_0199_normalised
+night_stand/train/night_stand_0070_normalised
+night_stand/train/night_stand_0080_normalised
+night_stand/train/night_stand_0122_normalised
+night_stand/train/night_stand_0135_normalised
+night_stand/train/night_stand_0078_normalised
+night_stand/train/night_stand_0009_normalised
+night_stand/train/night_stand_0182_normalised
+night_stand/train/night_stand_0147_normalised
+night_stand/train/night_stand_0184_normalised
+night_stand/train/night_stand_0083_normalised
+night_stand/train/night_stand_0170_normalised
+night_stand/train/night_stand_0094_normalised
+night_stand/train/night_stand_0173_normalised
+night_stand/train/night_stand_0054_normalised
+night_stand/train/night_stand_0045_normalised
+night_stand/train/night_stand_0036_normalised
+night_stand/train/night_stand_0075_normalised
+night_stand/train/night_stand_0138_normalised
+night_stand/train/night_stand_0142_normalised
+night_stand/train/night_stand_0024_normalised
+night_stand/train/night_stand_0164_normalised
+night_stand/train/night_stand_0133_normalised
+night_stand/train/night_stand_0010_normalised
+night_stand/train/night_stand_0132_normalised
+night_stand/train/night_stand_0140_normalised
+night_stand/train/night_stand_0161_normalised
+night_stand/train/night_stand_0109_normalised
+night_stand/train/night_stand_0196_normalised
+night_stand/train/night_stand_0166_normalised
+night_stand/train/night_stand_0116_normalised
+night_stand/train/night_stand_0174_normalised
+night_stand/train/night_stand_0193_normalised
+night_stand/train/night_stand_0169_normalised
+night_stand/train/night_stand_0043_normalised
+night_stand/train/night_stand_0176_normalised
+night_stand/train/night_stand_0127_normalised
+night_stand/train/night_stand_0062_normalised
+night_stand/train/night_stand_0074_normalised
+night_stand/train/night_stand_0039_normalised
+night_stand/train/night_stand_0103_normalised
+night_stand/train/night_stand_0016_normalised
+night_stand/train/night_stand_0112_normalised
+night_stand/train/night_stand_0053_normalised
+night_stand/train/night_stand_0077_normalised
+night_stand/train/night_stand_0179_normalised
+night_stand/train/night_stand_0051_normalised
+night_stand/train/night_stand_0191_normalised
+night_stand/train/night_stand_0047_normalised
+night_stand/train/night_stand_0066_normalised
+night_stand/train/night_stand_0014_normalised
+night_stand/train/night_stand_0157_normalised
+night_stand/train/night_stand_0001_normalised
+night_stand/train/night_stand_0178_normalised
+night_stand/train/night_stand_0131_normalised
+night_stand/train/night_stand_0200_normalised
+night_stand/train/night_stand_0185_normalised
+night_stand/train/night_stand_0158_normalised
+night_stand/train/night_stand_0030_normalised
+night_stand/train/night_stand_0106_normalised
+night_stand/train/night_stand_0148_normalised
+night_stand/train/night_stand_0037_normalised
+night_stand/train/night_stand_0113_normalised
+night_stand/train/night_stand_0025_normalised
+night_stand/train/night_stand_0114_normalised
+night_stand/train/night_stand_0139_normalised
+night_stand/train/night_stand_0125_normalised
+night_stand/train/night_stand_0180_normalised
+night_stand/train/night_stand_0141_normalised
+night_stand/train/night_stand_0144_normalised
+night_stand/train/night_stand_0195_normalised
+night_stand/test/night_stand_0280_normalised
+night_stand/test/night_stand_0263_normalised
+night_stand/test/night_stand_0256_normalised
+night_stand/test/night_stand_0262_normalised
+night_stand/test/night_stand_0233_normalised
+night_stand/test/night_stand_0253_normalised
+night_stand/test/night_stand_0279_normalised
+night_stand/test/night_stand_0207_normalised
+night_stand/test/night_stand_0273_normalised
+night_stand/test/night_stand_0281_normalised
+night_stand/test/night_stand_0252_normalised
+night_stand/test/night_stand_0250_normalised
+night_stand/test/night_stand_0278_normalised
+night_stand/test/night_stand_0255_normalised
+night_stand/test/night_stand_0204_normalised
+night_stand/test/night_stand_0216_normalised
+night_stand/test/night_stand_0221_normalised
+night_stand/test/night_stand_0224_normalised
+night_stand/test/night_stand_0213_normalised
+night_stand/test/night_stand_0286_normalised
+night_stand/test/night_stand_0229_normalised
+night_stand/test/night_stand_0236_normalised
+night_stand/test/night_stand_0235_normalised
+night_stand/test/night_stand_0220_normalised
+night_stand/test/night_stand_0265_normalised
+night_stand/test/night_stand_0227_normalised
+night_stand/test/night_stand_0259_normalised
+night_stand/test/night_stand_0277_normalised
+night_stand/test/night_stand_0247_normalised
+night_stand/test/night_stand_0222_normalised
+night_stand/test/night_stand_0212_normalised
+night_stand/test/night_stand_0230_normalised
+night_stand/test/night_stand_0269_normalised
+night_stand/test/night_stand_0243_normalised
+night_stand/test/night_stand_0272_normalised
+night_stand/test/night_stand_0257_normalised
+night_stand/test/night_stand_0223_normalised
+night_stand/test/night_stand_0232_normalised
+night_stand/test/night_stand_0206_normalised
+night_stand/test/night_stand_0238_normalised
+night_stand/test/night_stand_0264_normalised
+night_stand/test/night_stand_0249_normalised
+night_stand/test/night_stand_0202_normalised
+night_stand/test/night_stand_0251_normalised
+night_stand/test/night_stand_0248_normalised
+night_stand/test/night_stand_0239_normalised
+night_stand/test/night_stand_0268_normalised
+night_stand/test/night_stand_0246_normalised
+night_stand/test/night_stand_0258_normalised
+night_stand/test/night_stand_0210_normalised
+night_stand/test/night_stand_0219_normalised
+night_stand/test/night_stand_0231_normalised
+night_stand/test/night_stand_0242_normalised
+night_stand/test/night_stand_0245_normalised
+night_stand/test/night_stand_0214_normalised
+night_stand/test/night_stand_0205_normalised
+night_stand/test/night_stand_0201_normalised
+night_stand/test/night_stand_0203_normalised
+night_stand/test/night_stand_0228_normalised
+night_stand/test/night_stand_0208_normalised
+night_stand/test/night_stand_0282_normalised
+night_stand/test/night_stand_0260_normalised
+night_stand/test/night_stand_0226_normalised
+night_stand/test/night_stand_0237_normalised
+night_stand/test/night_stand_0240_normalised
+night_stand/test/night_stand_0215_normalised
+night_stand/test/night_stand_0211_normalised
+night_stand/test/night_stand_0254_normalised
+night_stand/test/night_stand_0276_normalised
+night_stand/test/night_stand_0261_normalised
+night_stand/test/night_stand_0270_normalised
+night_stand/test/night_stand_0275_normalised
+night_stand/test/night_stand_0271_normalised
+night_stand/test/night_stand_0225_normalised
+night_stand/test/night_stand_0267_normalised
+night_stand/test/night_stand_0285_normalised
+night_stand/test/night_stand_0209_normalised
+night_stand/test/night_stand_0217_normalised
+night_stand/test/night_stand_0266_normalised
+night_stand/test/night_stand_0218_normalised
+night_stand/test/night_stand_0284_normalised
+night_stand/test/night_stand_0241_normalised
+night_stand/test/night_stand_0244_normalised
+night_stand/test/night_stand_0283_normalised
+night_stand/test/night_stand_0234_normalised
+night_stand/test/night_stand_0274_normalised
+tent/train/tent_0111_normalised
+tent/train/tent_0109_normalised
+tent/train/tent_0132_normalised
+tent/train/tent_0007_normalised
+tent/train/tent_0021_normalised
+tent/train/tent_0154_normalised
+tent/train/tent_0094_normalised
+tent/train/tent_0152_normalised
+tent/train/tent_0062_normalised
+tent/train/tent_0155_normalised
+tent/train/tent_0060_normalised
+tent/train/tent_0002_normalised
+tent/train/tent_0045_normalised
+tent/train/tent_0008_normalised
+tent/train/tent_0063_normalised
+tent/train/tent_0070_normalised
+tent/train/tent_0126_normalised
+tent/train/tent_0024_normalised
+tent/train/tent_0099_normalised
+tent/train/tent_0097_normalised
+tent/train/tent_0123_normalised
+tent/train/tent_0009_normalised
+tent/train/tent_0026_normalised
+tent/train/tent_0121_normalised
+tent/train/tent_0144_normalised
+tent/train/tent_0101_normalised
+tent/train/tent_0018_normalised
+tent/train/tent_0054_normalised
+tent/train/tent_0113_normalised
+tent/train/tent_0131_normalised
+tent/train/tent_0075_normalised
+tent/train/tent_0058_normalised
+tent/train/tent_0053_normalised
+tent/train/tent_0052_normalised
+tent/train/tent_0129_normalised
+tent/train/tent_0117_normalised
+tent/train/tent_0057_normalised
+tent/train/tent_0010_normalised
+tent/train/tent_0017_normalised
+tent/train/tent_0064_normalised
+tent/train/tent_0087_normalised
+tent/train/tent_0056_normalised
+tent/train/tent_0134_normalised
+tent/train/tent_0013_normalised
+tent/train/tent_0098_normalised
+tent/train/tent_0015_normalised
+tent/train/tent_0051_normalised
+tent/train/tent_0014_normalised
+tent/train/tent_0140_normalised
+tent/train/tent_0148_normalised
+tent/train/tent_0022_normalised
+tent/train/tent_0102_normalised
+tent/train/tent_0158_normalised
+tent/train/tent_0125_normalised
+tent/train/tent_0030_normalised
+tent/train/tent_0033_normalised
+tent/train/tent_0074_normalised
+tent/train/tent_0083_normalised
+tent/train/tent_0104_normalised
+tent/train/tent_0078_normalised
+tent/train/tent_0037_normalised
+tent/train/tent_0065_normalised
+tent/train/tent_0044_normalised
+tent/train/tent_0150_normalised
+tent/train/tent_0080_normalised
+tent/train/tent_0115_normalised
+tent/train/tent_0141_normalised
+tent/train/tent_0055_normalised
+tent/train/tent_0027_normalised
+tent/train/tent_0029_normalised
+tent/train/tent_0124_normalised
+tent/train/tent_0035_normalised
+tent/train/tent_0163_normalised
+tent/train/tent_0128_normalised
+tent/train/tent_0120_normalised
+tent/train/tent_0042_normalised
+tent/train/tent_0041_normalised
+tent/train/tent_0047_normalised
+tent/train/tent_0116_normalised
+tent/train/tent_0095_normalised
+tent/train/tent_0077_normalised
+tent/train/tent_0133_normalised
+tent/train/tent_0046_normalised
+tent/train/tent_0146_normalised
+tent/train/tent_0032_normalised
+tent/train/tent_0040_normalised
+tent/train/tent_0157_normalised
+tent/train/tent_0036_normalised
+tent/train/tent_0107_normalised
+tent/train/tent_0118_normalised
+tent/train/tent_0138_normalised
+tent/train/tent_0081_normalised
+tent/train/tent_0130_normalised
+tent/train/tent_0023_normalised
+tent/train/tent_0153_normalised
+tent/train/tent_0159_normalised
+tent/train/tent_0137_normalised
+tent/train/tent_0112_normalised
+tent/train/tent_0039_normalised
+tent/train/tent_0088_normalised
+tent/train/tent_0106_normalised
+tent/train/tent_0031_normalised
+tent/train/tent_0143_normalised
+tent/train/tent_0004_normalised
+tent/train/tent_0001_normalised
+tent/train/tent_0136_normalised
+tent/train/tent_0079_normalised
+tent/train/tent_0161_normalised
+tent/train/tent_0005_normalised
+tent/train/tent_0089_normalised
+tent/train/tent_0061_normalised
+tent/train/tent_0149_normalised
+tent/train/tent_0006_normalised
+tent/train/tent_0050_normalised
+tent/train/tent_0084_normalised
+tent/train/tent_0068_normalised
+tent/train/tent_0100_normalised
+tent/train/tent_0043_normalised
+tent/train/tent_0160_normalised
+tent/train/tent_0085_normalised
+tent/train/tent_0139_normalised
+tent/train/tent_0122_normalised
+tent/train/tent_0145_normalised
+tent/train/tent_0020_normalised
+tent/train/tent_0003_normalised
+tent/train/tent_0028_normalised
+tent/train/tent_0082_normalised
+tent/train/tent_0127_normalised
+tent/train/tent_0067_normalised
+tent/train/tent_0162_normalised
+tent/train/tent_0066_normalised
+tent/train/tent_0049_normalised
+tent/train/tent_0090_normalised
+tent/train/tent_0072_normalised
+tent/train/tent_0091_normalised
+tent/train/tent_0119_normalised
+tent/train/tent_0073_normalised
+tent/train/tent_0048_normalised
+tent/train/tent_0147_normalised
+tent/train/tent_0096_normalised
+tent/train/tent_0038_normalised
+tent/train/tent_0025_normalised
+tent/train/tent_0108_normalised
+tent/train/tent_0019_normalised
+tent/train/tent_0076_normalised
+tent/train/tent_0092_normalised
+tent/train/tent_0093_normalised
+tent/train/tent_0069_normalised
+tent/train/tent_0016_normalised
+tent/train/tent_0034_normalised
+tent/train/tent_0012_normalised
+tent/train/tent_0135_normalised
+tent/train/tent_0151_normalised
+tent/train/tent_0110_normalised
+tent/train/tent_0105_normalised
+tent/train/tent_0071_normalised
+tent/train/tent_0156_normalised
+tent/train/tent_0142_normalised
+tent/train/tent_0086_normalised
+tent/train/tent_0114_normalised
+tent/train/tent_0103_normalised
+tent/train/tent_0059_normalised
+tent/train/tent_0011_normalised
+tent/test/tent_0169_normalised
+tent/test/tent_0166_normalised
+tent/test/tent_0182_normalised
+tent/test/tent_0177_normalised
+tent/test/tent_0178_normalised
+tent/test/tent_0165_normalised
+tent/test/tent_0183_normalised
+tent/test/tent_0179_normalised
+tent/test/tent_0170_normalised
+tent/test/tent_0168_normalised
+tent/test/tent_0173_normalised
+tent/test/tent_0181_normalised
+tent/test/tent_0171_normalised
+tent/test/tent_0174_normalised
+tent/test/tent_0175_normalised
+tent/test/tent_0164_normalised
+tent/test/tent_0172_normalised
+tent/test/tent_0167_normalised
+tent/test/tent_0180_normalised
+tent/test/tent_0176_normalised
+bookshelf/train/bookshelf_0446_normalised
+bookshelf/train/bookshelf_0072_normalised
+bookshelf/train/bookshelf_0241_normalised
+bookshelf/train/bookshelf_0300_normalised
+bookshelf/train/bookshelf_0341_normalised
+bookshelf/train/bookshelf_0209_normalised
+bookshelf/train/bookshelf_0045_normalised
+bookshelf/train/bookshelf_0425_normalised
+bookshelf/train/bookshelf_0009_normalised
+bookshelf/train/bookshelf_0263_normalised
+bookshelf/train/bookshelf_0567_normalised
+bookshelf/train/bookshelf_0489_normalised
+bookshelf/train/bookshelf_0444_normalised
+bookshelf/train/bookshelf_0462_normalised
+bookshelf/train/bookshelf_0554_normalised
+bookshelf/train/bookshelf_0505_normalised
+bookshelf/train/bookshelf_0560_normalised
+bookshelf/train/bookshelf_0283_normalised
+bookshelf/train/bookshelf_0561_normalised
+bookshelf/train/bookshelf_0512_normalised
+bookshelf/train/bookshelf_0212_normalised
+bookshelf/train/bookshelf_0474_normalised
+bookshelf/train/bookshelf_0543_normalised
+bookshelf/train/bookshelf_0163_normalised
+bookshelf/train/bookshelf_0360_normalised
+bookshelf/train/bookshelf_0104_normalised
+bookshelf/train/bookshelf_0049_normalised
+bookshelf/train/bookshelf_0493_normalised
+bookshelf/train/bookshelf_0021_normalised
+bookshelf/train/bookshelf_0368_normalised
+bookshelf/train/bookshelf_0207_normalised
+bookshelf/train/bookshelf_0061_normalised
+bookshelf/train/bookshelf_0020_normalised
+bookshelf/train/bookshelf_0524_normalised
+bookshelf/train/bookshelf_0168_normalised
+bookshelf/train/bookshelf_0496_normalised
+bookshelf/train/bookshelf_0396_normalised
+bookshelf/train/bookshelf_0266_normalised
+bookshelf/train/bookshelf_0059_normalised
+bookshelf/train/bookshelf_0087_normalised
+bookshelf/train/bookshelf_0467_normalised
+bookshelf/train/bookshelf_0274_normalised
+bookshelf/train/bookshelf_0264_normalised
+bookshelf/train/bookshelf_0335_normalised
+bookshelf/train/bookshelf_0528_normalised
+bookshelf/train/bookshelf_0485_normalised
+bookshelf/train/bookshelf_0055_normalised
+bookshelf/train/bookshelf_0550_normalised
+bookshelf/train/bookshelf_0453_normalised
+bookshelf/train/bookshelf_0201_normalised
+bookshelf/train/bookshelf_0269_normalised
+bookshelf/train/bookshelf_0482_normalised
+bookshelf/train/bookshelf_0112_normalised
+bookshelf/train/bookshelf_0243_normalised
+bookshelf/train/bookshelf_0292_normalised
+bookshelf/train/bookshelf_0423_normalised
+bookshelf/train/bookshelf_0242_normalised
+bookshelf/train/bookshelf_0082_normalised
+bookshelf/train/bookshelf_0458_normalised
+bookshelf/train/bookshelf_0101_normalised
+bookshelf/train/bookshelf_0476_normalised
+bookshelf/train/bookshelf_0333_normalised
+bookshelf/train/bookshelf_0365_normalised
+bookshelf/train/bookshelf_0096_normalised
+bookshelf/train/bookshelf_0253_normalised
+bookshelf/train/bookshelf_0265_normalised
+bookshelf/train/bookshelf_0286_normalised
+bookshelf/train/bookshelf_0134_normalised
+bookshelf/train/bookshelf_0487_normalised
+bookshelf/train/bookshelf_0234_normalised
+bookshelf/train/bookshelf_0390_normalised
+bookshelf/train/bookshelf_0302_normalised
+bookshelf/train/bookshelf_0172_normalised
+bookshelf/train/bookshelf_0098_normalised
+bookshelf/train/bookshelf_0138_normalised
+bookshelf/train/bookshelf_0053_normalised
+bookshelf/train/bookshelf_0221_normalised
+bookshelf/train/bookshelf_0136_normalised
+bookshelf/train/bookshelf_0141_normalised
+bookshelf/train/bookshelf_0073_normalised
+bookshelf/train/bookshelf_0229_normalised
+bookshelf/train/bookshelf_0030_normalised
+bookshelf/train/bookshelf_0132_normalised
+bookshelf/train/bookshelf_0314_normalised
+bookshelf/train/bookshelf_0247_normalised
+bookshelf/train/bookshelf_0256_normalised
+bookshelf/train/bookshelf_0455_normalised
+bookshelf/train/bookshelf_0413_normalised
+bookshelf/train/bookshelf_0491_normalised
+bookshelf/train/bookshelf_0410_normalised
+bookshelf/train/bookshelf_0133_normalised
+bookshelf/train/bookshelf_0532_normalised
+bookshelf/train/bookshelf_0025_normalised
+bookshelf/train/bookshelf_0051_normalised
+bookshelf/train/bookshelf_0004_normalised
+bookshelf/train/bookshelf_0116_normalised
+bookshelf/train/bookshelf_0279_normalised
+bookshelf/train/bookshelf_0366_normalised
+bookshelf/train/bookshelf_0220_normalised
+bookshelf/train/bookshelf_0572_normalised
+bookshelf/train/bookshelf_0161_normalised
+bookshelf/train/bookshelf_0432_normalised
+bookshelf/train/bookshelf_0420_normalised
+bookshelf/train/bookshelf_0215_normalised
+bookshelf/train/bookshelf_0094_normalised
+bookshelf/train/bookshelf_0529_normalised
+bookshelf/train/bookshelf_0507_normalised
+bookshelf/train/bookshelf_0131_normalised
+bookshelf/train/bookshelf_0541_normalised
+bookshelf/train/bookshelf_0454_normalised
+bookshelf/train/bookshelf_0478_normalised
+bookshelf/train/bookshelf_0411_normalised
+bookshelf/train/bookshelf_0427_normalised
+bookshelf/train/bookshelf_0565_normalised
+bookshelf/train/bookshelf_0005_normalised
+bookshelf/train/bookshelf_0296_normalised
+bookshelf/train/bookshelf_0277_normalised
+bookshelf/train/bookshelf_0237_normalised
+bookshelf/train/bookshelf_0437_normalised
+bookshelf/train/bookshelf_0210_normalised
+bookshelf/train/bookshelf_0349_normalised
+bookshelf/train/bookshelf_0352_normalised
+bookshelf/train/bookshelf_0504_normalised
+bookshelf/train/bookshelf_0515_normalised
+bookshelf/train/bookshelf_0378_normalised
+bookshelf/train/bookshelf_0447_normalised
+bookshelf/train/bookshelf_0003_normalised
+bookshelf/train/bookshelf_0522_normalised
+bookshelf/train/bookshelf_0475_normalised
+bookshelf/train/bookshelf_0316_normalised
+bookshelf/train/bookshelf_0170_normalised
+bookshelf/train/bookshelf_0223_normalised
+bookshelf/train/bookshelf_0367_normalised
+bookshelf/train/bookshelf_0436_normalised
+bookshelf/train/bookshelf_0081_normalised
+bookshelf/train/bookshelf_0569_normalised
+bookshelf/train/bookshelf_0236_normalised
+bookshelf/train/bookshelf_0033_normalised
+bookshelf/train/bookshelf_0488_normalised
+bookshelf/train/bookshelf_0211_normalised
+bookshelf/train/bookshelf_0176_normalised
+bookshelf/train/bookshelf_0304_normalised
+bookshelf/train/bookshelf_0115_normalised
+bookshelf/train/bookshelf_0065_normalised
+bookshelf/train/bookshelf_0208_normalised
+bookshelf/train/bookshelf_0329_normalised
+bookshelf/train/bookshelf_0042_normalised
+bookshelf/train/bookshelf_0240_normalised
+bookshelf/train/bookshelf_0318_normalised
+bookshelf/train/bookshelf_0060_normalised
+bookshelf/train/bookshelf_0439_normalised
+bookshelf/train/bookshelf_0026_normalised
+bookshelf/train/bookshelf_0175_normalised
+bookshelf/train/bookshelf_0158_normalised
+bookshelf/train/bookshelf_0521_normalised
+bookshelf/train/bookshelf_0202_normalised
+bookshelf/train/bookshelf_0250_normalised
+bookshelf/train/bookshelf_0492_normalised
+bookshelf/train/bookshelf_0315_normalised
+bookshelf/train/bookshelf_0469_normalised
+bookshelf/train/bookshelf_0200_normalised
+bookshelf/train/bookshelf_0525_normalised
+bookshelf/train/bookshelf_0232_normalised
+bookshelf/train/bookshelf_0058_normalised
+bookshelf/train/bookshelf_0151_normalised
+bookshelf/train/bookshelf_0537_normalised
+bookshelf/train/bookshelf_0443_normalised
+bookshelf/train/bookshelf_0120_normalised
+bookshelf/train/bookshelf_0260_normalised
+bookshelf/train/bookshelf_0520_normalised
+bookshelf/train/bookshelf_0433_normalised
+bookshelf/train/bookshelf_0480_normalised
+bookshelf/train/bookshelf_0412_normalised
+bookshelf/train/bookshelf_0459_normalised
+bookshelf/train/bookshelf_0281_normalised
+bookshelf/train/bookshelf_0108_normalised
+bookshelf/train/bookshelf_0249_normalised
+bookshelf/train/bookshelf_0010_normalised
+bookshelf/train/bookshelf_0177_normalised
+bookshelf/train/bookshelf_0409_normalised
+bookshelf/train/bookshelf_0006_normalised
+bookshelf/train/bookshelf_0092_normalised
+bookshelf/train/bookshelf_0320_normalised
+bookshelf/train/bookshelf_0146_normalised
+bookshelf/train/bookshelf_0203_normalised
+bookshelf/train/bookshelf_0192_normalised
+bookshelf/train/bookshelf_0022_normalised
+bookshelf/train/bookshelf_0039_normalised
+bookshelf/train/bookshelf_0019_normalised
+bookshelf/train/bookshelf_0287_normalised
+bookshelf/train/bookshelf_0080_normalised
+bookshelf/train/bookshelf_0519_normalised
+bookshelf/train/bookshelf_0150_normalised
+bookshelf/train/bookshelf_0500_normalised
+bookshelf/train/bookshelf_0222_normalised
+bookshelf/train/bookshelf_0416_normalised
+bookshelf/train/bookshelf_0165_normalised
+bookshelf/train/bookshelf_0514_normalised
+bookshelf/train/bookshelf_0012_normalised
+bookshelf/train/bookshelf_0307_normalised
+bookshelf/train/bookshelf_0063_normalised
+bookshelf/train/bookshelf_0355_normalised
+bookshelf/train/bookshelf_0350_normalised
+bookshelf/train/bookshelf_0181_normalised
+bookshelf/train/bookshelf_0156_normalised
+bookshelf/train/bookshelf_0245_normalised
+bookshelf/train/bookshelf_0028_normalised
+bookshelf/train/bookshelf_0278_normalised
+bookshelf/train/bookshelf_0336_normalised
+bookshelf/train/bookshelf_0509_normalised
+bookshelf/train/bookshelf_0374_normalised
+bookshelf/train/bookshelf_0531_normalised
+bookshelf/train/bookshelf_0547_normalised
+bookshelf/train/bookshelf_0331_normalised
+bookshelf/train/bookshelf_0017_normalised
+bookshelf/train/bookshelf_0357_normalised
+bookshelf/train/bookshelf_0312_normalised
+bookshelf/train/bookshelf_0205_normalised
+bookshelf/train/bookshelf_0516_normalised
+bookshelf/train/bookshelf_0145_normalised
+bookshelf/train/bookshelf_0075_normalised
+bookshelf/train/bookshelf_0308_normalised
+bookshelf/train/bookshelf_0546_normalised
+bookshelf/train/bookshelf_0299_normalised
+bookshelf/train/bookshelf_0503_normalised
+bookshelf/train/bookshelf_0470_normalised
+bookshelf/train/bookshelf_0456_normalised
+bookshelf/train/bookshelf_0190_normalised
+bookshelf/train/bookshelf_0381_normalised
+bookshelf/train/bookshelf_0291_normalised
+bookshelf/train/bookshelf_0479_normalised
+bookshelf/train/bookshelf_0068_normalised
+bookshelf/train/bookshelf_0421_normalised
+bookshelf/train/bookshelf_0323_normalised
+bookshelf/train/bookshelf_0517_normalised
+bookshelf/train/bookshelf_0035_normalised
+bookshelf/train/bookshelf_0139_normalised
+bookshelf/train/bookshelf_0113_normalised
+bookshelf/train/bookshelf_0347_normalised
+bookshelf/train/bookshelf_0261_normalised
+bookshelf/train/bookshelf_0235_normalised
+bookshelf/train/bookshelf_0346_normalised
+bookshelf/train/bookshelf_0549_normalised
+bookshelf/train/bookshelf_0442_normalised
+bookshelf/train/bookshelf_0557_normalised
+bookshelf/train/bookshelf_0555_normalised
+bookshelf/train/bookshelf_0385_normalised
+bookshelf/train/bookshelf_0067_normalised
+bookshelf/train/bookshelf_0380_normalised
+bookshelf/train/bookshelf_0166_normalised
+bookshelf/train/bookshelf_0252_normalised
+bookshelf/train/bookshelf_0193_normalised
+bookshelf/train/bookshelf_0334_normalised
+bookshelf/train/bookshelf_0226_normalised
+bookshelf/train/bookshelf_0169_normalised
+bookshelf/train/bookshelf_0371_normalised
+bookshelf/train/bookshelf_0384_normalised
+bookshelf/train/bookshelf_0182_normalised
+bookshelf/train/bookshelf_0415_normalised
+bookshelf/train/bookshelf_0428_normalised
+bookshelf/train/bookshelf_0216_normalised
+bookshelf/train/bookshelf_0123_normalised
+bookshelf/train/bookshelf_0159_normalised
+bookshelf/train/bookshelf_0481_normalised
+bookshelf/train/bookshelf_0194_normalised
+bookshelf/train/bookshelf_0535_normalised
+bookshelf/train/bookshelf_0257_normalised
+bookshelf/train/bookshelf_0276_normalised
+bookshelf/train/bookshelf_0394_normalised
+bookshelf/train/bookshelf_0348_normalised
+bookshelf/train/bookshelf_0391_normalised
+bookshelf/train/bookshelf_0506_normalised
+bookshelf/train/bookshelf_0056_normalised
+bookshelf/train/bookshelf_0140_normalised
+bookshelf/train/bookshelf_0050_normalised
+bookshelf/train/bookshelf_0363_normalised
+bookshelf/train/bookshelf_0126_normalised
+bookshelf/train/bookshelf_0027_normalised
+bookshelf/train/bookshelf_0107_normalised
+bookshelf/train/bookshelf_0127_normalised
+bookshelf/train/bookshelf_0461_normalised
+bookshelf/train/bookshelf_0536_normalised
+bookshelf/train/bookshelf_0219_normalised
+bookshelf/train/bookshelf_0187_normalised
+bookshelf/train/bookshelf_0301_normalised
+bookshelf/train/bookshelf_0038_normalised
+bookshelf/train/bookshelf_0183_normalised
+bookshelf/train/bookshelf_0457_normalised
+bookshelf/train/bookshelf_0111_normalised
+bookshelf/train/bookshelf_0217_normalised
+bookshelf/train/bookshelf_0280_normalised
+bookshelf/train/bookshelf_0085_normalised
+bookshelf/train/bookshelf_0001_normalised
+bookshelf/train/bookshelf_0343_normalised
+bookshelf/train/bookshelf_0290_normalised
+bookshelf/train/bookshelf_0157_normalised
+bookshelf/train/bookshelf_0339_normalised
+bookshelf/train/bookshelf_0508_normalised
+bookshelf/train/bookshelf_0062_normalised
+bookshelf/train/bookshelf_0018_normalised
+bookshelf/train/bookshelf_0321_normalised
+bookshelf/train/bookshelf_0389_normalised
+bookshelf/train/bookshelf_0426_normalised
+bookshelf/train/bookshelf_0070_normalised
+bookshelf/train/bookshelf_0289_normalised
+bookshelf/train/bookshelf_0074_normalised
+bookshelf/train/bookshelf_0400_normalised
+bookshelf/train/bookshelf_0069_normalised
+bookshelf/train/bookshelf_0117_normalised
+bookshelf/train/bookshelf_0089_normalised
+bookshelf/train/bookshelf_0495_normalised
+bookshelf/train/bookshelf_0399_normalised
+bookshelf/train/bookshelf_0306_normalised
+bookshelf/train/bookshelf_0254_normalised
+bookshelf/train/bookshelf_0570_normalised
+bookshelf/train/bookshelf_0137_normalised
+bookshelf/train/bookshelf_0023_normalised
+bookshelf/train/bookshelf_0356_normalised
+bookshelf/train/bookshelf_0171_normalised
+bookshelf/train/bookshelf_0548_normalised
+bookshelf/train/bookshelf_0066_normalised
+bookshelf/train/bookshelf_0084_normalised
+bookshelf/train/bookshelf_0558_normalised
+bookshelf/train/bookshelf_0393_normalised
+bookshelf/train/bookshelf_0523_normalised
+bookshelf/train/bookshelf_0272_normalised
+bookshelf/train/bookshelf_0011_normalised
+bookshelf/train/bookshelf_0419_normalised
+bookshelf/train/bookshelf_0077_normalised
+bookshelf/train/bookshelf_0147_normalised
+bookshelf/train/bookshelf_0527_normalised
+bookshelf/train/bookshelf_0408_normalised
+bookshelf/train/bookshelf_0486_normalised
+bookshelf/train/bookshelf_0319_normalised
+bookshelf/train/bookshelf_0228_normalised
+bookshelf/train/bookshelf_0303_normalised
+bookshelf/train/bookshelf_0484_normalised
+bookshelf/train/bookshelf_0556_normalised
+bookshelf/train/bookshelf_0358_normalised
+bookshelf/train/bookshelf_0501_normalised
+bookshelf/train/bookshelf_0465_normalised
+bookshelf/train/bookshelf_0297_normalised
+bookshelf/train/bookshelf_0040_normalised
+bookshelf/train/bookshelf_0354_normalised
+bookshelf/train/bookshelf_0559_normalised
+bookshelf/train/bookshelf_0013_normalised
+bookshelf/train/bookshelf_0430_normalised
+bookshelf/train/bookshelf_0148_normalised
+bookshelf/train/bookshelf_0054_normalised
+bookshelf/train/bookshelf_0293_normalised
+bookshelf/train/bookshelf_0414_normalised
+bookshelf/train/bookshelf_0441_normalised
+bookshelf/train/bookshelf_0083_normalised
+bookshelf/train/bookshelf_0392_normalised
+bookshelf/train/bookshelf_0324_normalised
+bookshelf/train/bookshelf_0328_normalised
+bookshelf/train/bookshelf_0483_normalised
+bookshelf/train/bookshelf_0539_normalised
+bookshelf/train/bookshelf_0093_normalised
+bookshelf/train/bookshelf_0449_normalised
+bookshelf/train/bookshelf_0552_normalised
+bookshelf/train/bookshelf_0450_normalised
+bookshelf/train/bookshelf_0032_normalised
+bookshelf/train/bookshelf_0251_normalised
+bookshelf/train/bookshelf_0553_normalised
+bookshelf/train/bookshelf_0499_normalised
+bookshelf/train/bookshelf_0227_normalised
+bookshelf/train/bookshelf_0340_normalised
+bookshelf/train/bookshelf_0233_normalised
+bookshelf/train/bookshelf_0370_normalised
+bookshelf/train/bookshelf_0511_normalised
+bookshelf/train/bookshelf_0402_normalised
+bookshelf/train/bookshelf_0518_normalised
+bookshelf/train/bookshelf_0199_normalised
+bookshelf/train/bookshelf_0332_normalised
+bookshelf/train/bookshelf_0317_normalised
+bookshelf/train/bookshelf_0007_normalised
+bookshelf/train/bookshelf_0102_normalised
+bookshelf/train/bookshelf_0196_normalised
+bookshelf/train/bookshelf_0387_normalised
+bookshelf/train/bookshelf_0173_normalised
+bookshelf/train/bookshelf_0064_normalised
+bookshelf/train/bookshelf_0472_normalised
+bookshelf/train/bookshelf_0397_normalised
+bookshelf/train/bookshelf_0105_normalised
+bookshelf/train/bookshelf_0036_normalised
+bookshelf/train/bookshelf_0121_normalised
+bookshelf/train/bookshelf_0154_normalised
+bookshelf/train/bookshelf_0502_normalised
+bookshelf/train/bookshelf_0031_normalised
+bookshelf/train/bookshelf_0188_normalised
+bookshelf/train/bookshelf_0533_normalised
+bookshelf/train/bookshelf_0473_normalised
+bookshelf/train/bookshelf_0498_normalised
+bookshelf/train/bookshelf_0401_normalised
+bookshelf/train/bookshelf_0534_normalised
+bookshelf/train/bookshelf_0195_normalised
+bookshelf/train/bookshelf_0398_normalised
+bookshelf/train/bookshelf_0110_normalised
+bookshelf/train/bookshelf_0130_normalised
+bookshelf/train/bookshelf_0015_normalised
+bookshelf/train/bookshelf_0268_normalised
+bookshelf/train/bookshelf_0191_normalised
+bookshelf/train/bookshelf_0135_normalised
+bookshelf/train/bookshelf_0149_normalised
+bookshelf/train/bookshelf_0305_normalised
+bookshelf/train/bookshelf_0353_normalised
+bookshelf/train/bookshelf_0142_normalised
+bookshelf/train/bookshelf_0174_normalised
+bookshelf/train/bookshelf_0383_normalised
+bookshelf/train/bookshelf_0034_normalised
+bookshelf/train/bookshelf_0345_normalised
+bookshelf/train/bookshelf_0424_normalised
+bookshelf/train/bookshelf_0422_normalised
+bookshelf/train/bookshelf_0282_normalised
+bookshelf/train/bookshelf_0271_normalised
+bookshelf/train/bookshelf_0494_normalised
+bookshelf/train/bookshelf_0373_normalised
+bookshelf/train/bookshelf_0184_normalised
+bookshelf/train/bookshelf_0218_normalised
+bookshelf/train/bookshelf_0571_normalised
+bookshelf/train/bookshelf_0125_normalised
+bookshelf/train/bookshelf_0285_normalised
+bookshelf/train/bookshelf_0122_normalised
+bookshelf/train/bookshelf_0545_normalised
+bookshelf/train/bookshelf_0267_normalised
+bookshelf/train/bookshelf_0406_normalised
+bookshelf/train/bookshelf_0008_normalised
+bookshelf/train/bookshelf_0259_normalised
+bookshelf/train/bookshelf_0189_normalised
+bookshelf/train/bookshelf_0284_normalised
+bookshelf/train/bookshelf_0311_normalised
+bookshelf/train/bookshelf_0014_normalised
+bookshelf/train/bookshelf_0162_normalised
+bookshelf/train/bookshelf_0468_normalised
+bookshelf/train/bookshelf_0542_normalised
+bookshelf/train/bookshelf_0448_normalised
+bookshelf/train/bookshelf_0497_normalised
+bookshelf/train/bookshelf_0326_normalised
+bookshelf/train/bookshelf_0099_normalised
+bookshelf/train/bookshelf_0566_normalised
+bookshelf/train/bookshelf_0288_normalised
+bookshelf/train/bookshelf_0100_normalised
+bookshelf/train/bookshelf_0016_normalised
+bookshelf/train/bookshelf_0563_normalised
+bookshelf/train/bookshelf_0167_normalised
+bookshelf/train/bookshelf_0185_normalised
+bookshelf/train/bookshelf_0452_normalised
+bookshelf/train/bookshelf_0361_normalised
+bookshelf/train/bookshelf_0344_normalised
+bookshelf/train/bookshelf_0225_normalised
+bookshelf/train/bookshelf_0197_normalised
+bookshelf/train/bookshelf_0568_normalised
+bookshelf/train/bookshelf_0434_normalised
+bookshelf/train/bookshelf_0178_normalised
+bookshelf/train/bookshelf_0337_normalised
+bookshelf/train/bookshelf_0403_normalised
+bookshelf/train/bookshelf_0180_normalised
+bookshelf/train/bookshelf_0438_normalised
+bookshelf/train/bookshelf_0128_normalised
+bookshelf/train/bookshelf_0445_normalised
+bookshelf/train/bookshelf_0231_normalised
+bookshelf/train/bookshelf_0198_normalised
+bookshelf/train/bookshelf_0179_normalised
+bookshelf/train/bookshelf_0464_normalised
+bookshelf/train/bookshelf_0510_normalised
+bookshelf/train/bookshelf_0460_normalised
+bookshelf/train/bookshelf_0230_normalised
+bookshelf/train/bookshelf_0429_normalised
+bookshelf/train/bookshelf_0244_normalised
+bookshelf/train/bookshelf_0386_normalised
+bookshelf/train/bookshelf_0466_normalised
+bookshelf/train/bookshelf_0048_normalised
+bookshelf/train/bookshelf_0041_normalised
+bookshelf/train/bookshelf_0052_normalised
+bookshelf/train/bookshelf_0129_normalised
+bookshelf/train/bookshelf_0530_normalised
+bookshelf/train/bookshelf_0024_normalised
+bookshelf/train/bookshelf_0372_normalised
+bookshelf/train/bookshelf_0239_normalised
+bookshelf/train/bookshelf_0418_normalised
+bookshelf/train/bookshelf_0351_normalised
+bookshelf/train/bookshelf_0213_normalised
+bookshelf/train/bookshelf_0327_normalised
+bookshelf/train/bookshelf_0206_normalised
+bookshelf/train/bookshelf_0310_normalised
+bookshelf/train/bookshelf_0124_normalised
+bookshelf/train/bookshelf_0238_normalised
+bookshelf/train/bookshelf_0275_normalised
+bookshelf/train/bookshelf_0204_normalised
+bookshelf/train/bookshelf_0118_normalised
+bookshelf/train/bookshelf_0143_normalised
+bookshelf/train/bookshelf_0295_normalised
+bookshelf/train/bookshelf_0029_normalised
+bookshelf/train/bookshelf_0309_normalised
+bookshelf/train/bookshelf_0044_normalised
+bookshelf/train/bookshelf_0103_normalised
+bookshelf/train/bookshelf_0152_normalised
+bookshelf/train/bookshelf_0369_normalised
+bookshelf/train/bookshelf_0262_normalised
+bookshelf/train/bookshelf_0057_normalised
+bookshelf/train/bookshelf_0088_normalised
+bookshelf/train/bookshelf_0155_normalised
+bookshelf/train/bookshelf_0160_normalised
+bookshelf/train/bookshelf_0379_normalised
+bookshelf/train/bookshelf_0431_normalised
+bookshelf/train/bookshelf_0224_normalised
+bookshelf/train/bookshelf_0544_normalised
+bookshelf/train/bookshelf_0076_normalised
+bookshelf/train/bookshelf_0540_normalised
+bookshelf/train/bookshelf_0071_normalised
+bookshelf/train/bookshelf_0562_normalised
+bookshelf/train/bookshelf_0417_normalised
+bookshelf/train/bookshelf_0338_normalised
+bookshelf/train/bookshelf_0298_normalised
+bookshelf/train/bookshelf_0047_normalised
+bookshelf/train/bookshelf_0086_normalised
+bookshelf/train/bookshelf_0359_normalised
+bookshelf/train/bookshelf_0325_normalised
+bookshelf/train/bookshelf_0270_normalised
+bookshelf/train/bookshelf_0046_normalised
+bookshelf/train/bookshelf_0564_normalised
+bookshelf/train/bookshelf_0377_normalised
+bookshelf/train/bookshelf_0364_normalised
+bookshelf/train/bookshelf_0342_normalised
+bookshelf/train/bookshelf_0435_normalised
+bookshelf/train/bookshelf_0246_normalised
+bookshelf/train/bookshelf_0407_normalised
+bookshelf/train/bookshelf_0091_normalised
+bookshelf/train/bookshelf_0404_normalised
+bookshelf/train/bookshelf_0463_normalised
+bookshelf/train/bookshelf_0037_normalised
+bookshelf/train/bookshelf_0490_normalised
+bookshelf/train/bookshelf_0382_normalised
+bookshelf/train/bookshelf_0322_normalised
+bookshelf/train/bookshelf_0258_normalised
+bookshelf/train/bookshelf_0144_normalised
+bookshelf/train/bookshelf_0440_normalised
+bookshelf/train/bookshelf_0294_normalised
+bookshelf/train/bookshelf_0538_normalised
+bookshelf/train/bookshelf_0395_normalised
+bookshelf/train/bookshelf_0214_normalised
+bookshelf/train/bookshelf_0376_normalised
+bookshelf/train/bookshelf_0090_normalised
+bookshelf/train/bookshelf_0097_normalised
+bookshelf/train/bookshelf_0388_normalised
+bookshelf/train/bookshelf_0526_normalised
+bookshelf/train/bookshelf_0119_normalised
+bookshelf/train/bookshelf_0375_normalised
+bookshelf/train/bookshelf_0002_normalised
+bookshelf/train/bookshelf_0153_normalised
+bookshelf/train/bookshelf_0451_normalised
+bookshelf/train/bookshelf_0255_normalised
+bookshelf/train/bookshelf_0471_normalised
+bookshelf/train/bookshelf_0164_normalised
+bookshelf/train/bookshelf_0551_normalised
+bookshelf/train/bookshelf_0248_normalised
+bookshelf/train/bookshelf_0043_normalised
+bookshelf/train/bookshelf_0405_normalised
+bookshelf/train/bookshelf_0330_normalised
+bookshelf/train/bookshelf_0095_normalised
+bookshelf/train/bookshelf_0477_normalised
+bookshelf/train/bookshelf_0186_normalised
+bookshelf/train/bookshelf_0362_normalised
+bookshelf/train/bookshelf_0313_normalised
+bookshelf/train/bookshelf_0513_normalised
+bookshelf/train/bookshelf_0109_normalised
+bookshelf/train/bookshelf_0106_normalised
+bookshelf/train/bookshelf_0078_normalised
+bookshelf/train/bookshelf_0079_normalised
+bookshelf/train/bookshelf_0114_normalised
+bookshelf/train/bookshelf_0273_normalised
+bookshelf/test/bookshelf_0593_normalised
+bookshelf/test/bookshelf_0641_normalised
+bookshelf/test/bookshelf_0661_normalised
+bookshelf/test/bookshelf_0586_normalised
+bookshelf/test/bookshelf_0576_normalised
+bookshelf/test/bookshelf_0636_normalised
+bookshelf/test/bookshelf_0580_normalised
+bookshelf/test/bookshelf_0655_normalised
+bookshelf/test/bookshelf_0606_normalised
+bookshelf/test/bookshelf_0582_normalised
+bookshelf/test/bookshelf_0634_normalised
+bookshelf/test/bookshelf_0583_normalised
+bookshelf/test/bookshelf_0660_normalised
+bookshelf/test/bookshelf_0578_normalised
+bookshelf/test/bookshelf_0653_normalised
+bookshelf/test/bookshelf_0605_normalised
+bookshelf/test/bookshelf_0625_normalised
+bookshelf/test/bookshelf_0642_normalised
+bookshelf/test/bookshelf_0631_normalised
+bookshelf/test/bookshelf_0652_normalised
+bookshelf/test/bookshelf_0672_normalised
+bookshelf/test/bookshelf_0633_normalised
+bookshelf/test/bookshelf_0585_normalised
+bookshelf/test/bookshelf_0665_normalised
+bookshelf/test/bookshelf_0626_normalised
+bookshelf/test/bookshelf_0623_normalised
+bookshelf/test/bookshelf_0591_normalised
+bookshelf/test/bookshelf_0615_normalised
+bookshelf/test/bookshelf_0656_normalised
+bookshelf/test/bookshelf_0599_normalised
+bookshelf/test/bookshelf_0670_normalised
+bookshelf/test/bookshelf_0596_normalised
+bookshelf/test/bookshelf_0619_normalised
+bookshelf/test/bookshelf_0592_normalised
+bookshelf/test/bookshelf_0646_normalised
+bookshelf/test/bookshelf_0657_normalised
+bookshelf/test/bookshelf_0640_normalised
+bookshelf/test/bookshelf_0662_normalised
+bookshelf/test/bookshelf_0643_normalised
+bookshelf/test/bookshelf_0666_normalised
+bookshelf/test/bookshelf_0603_normalised
+bookshelf/test/bookshelf_0618_normalised
+bookshelf/test/bookshelf_0645_normalised
+bookshelf/test/bookshelf_0663_normalised
+bookshelf/test/bookshelf_0573_normalised
+bookshelf/test/bookshelf_0613_normalised
+bookshelf/test/bookshelf_0664_normalised
+bookshelf/test/bookshelf_0581_normalised
+bookshelf/test/bookshelf_0669_normalised
+bookshelf/test/bookshelf_0629_normalised
+bookshelf/test/bookshelf_0609_normalised
+bookshelf/test/bookshelf_0590_normalised
+bookshelf/test/bookshelf_0651_normalised
+bookshelf/test/bookshelf_0659_normalised
+bookshelf/test/bookshelf_0635_normalised
+bookshelf/test/bookshelf_0639_normalised
+bookshelf/test/bookshelf_0579_normalised
+bookshelf/test/bookshelf_0647_normalised
+bookshelf/test/bookshelf_0602_normalised
+bookshelf/test/bookshelf_0671_normalised
+bookshelf/test/bookshelf_0617_normalised
+bookshelf/test/bookshelf_0598_normalised
+bookshelf/test/bookshelf_0614_normalised
+bookshelf/test/bookshelf_0611_normalised
+bookshelf/test/bookshelf_0607_normalised
+bookshelf/test/bookshelf_0638_normalised
+bookshelf/test/bookshelf_0616_normalised
+bookshelf/test/bookshelf_0587_normalised
+bookshelf/test/bookshelf_0589_normalised
+bookshelf/test/bookshelf_0627_normalised
+bookshelf/test/bookshelf_0600_normalised
+bookshelf/test/bookshelf_0648_normalised
+bookshelf/test/bookshelf_0575_normalised
+bookshelf/test/bookshelf_0610_normalised
+bookshelf/test/bookshelf_0637_normalised
+bookshelf/test/bookshelf_0654_normalised
+bookshelf/test/bookshelf_0630_normalised
+bookshelf/test/bookshelf_0597_normalised
+bookshelf/test/bookshelf_0612_normalised
+bookshelf/test/bookshelf_0628_normalised
+bookshelf/test/bookshelf_0667_normalised
+bookshelf/test/bookshelf_0588_normalised
+bookshelf/test/bookshelf_0604_normalised
+bookshelf/test/bookshelf_0601_normalised
+bookshelf/test/bookshelf_0649_normalised
+bookshelf/test/bookshelf_0584_normalised
+bookshelf/test/bookshelf_0658_normalised
+bookshelf/test/bookshelf_0650_normalised
+bookshelf/test/bookshelf_0632_normalised
+bookshelf/test/bookshelf_0668_normalised
+bookshelf/test/bookshelf_0621_normalised
+bookshelf/test/bookshelf_0574_normalised
+bookshelf/test/bookshelf_0622_normalised
+bookshelf/test/bookshelf_0624_normalised
+bookshelf/test/bookshelf_0608_normalised
+bookshelf/test/bookshelf_0595_normalised
+bookshelf/test/bookshelf_0644_normalised
+bookshelf/test/bookshelf_0577_normalised
+bookshelf/test/bookshelf_0620_normalised
+bookshelf/test/bookshelf_0594_normalised
+range_hood/train/range_hood_0007_normalised
+range_hood/train/range_hood_0004_normalised
+range_hood/train/range_hood_0071_normalised
+range_hood/train/range_hood_0085_normalised
+range_hood/train/range_hood_0082_normalised
+range_hood/train/range_hood_0062_normalised
+range_hood/train/range_hood_0094_normalised
+range_hood/train/range_hood_0070_normalised
+range_hood/train/range_hood_0086_normalised
+range_hood/train/range_hood_0100_normalised
+range_hood/train/range_hood_0078_normalised
+range_hood/train/range_hood_0073_normalised
+range_hood/train/range_hood_0002_normalised
+range_hood/train/range_hood_0010_normalised
+range_hood/train/range_hood_0059_normalised
+range_hood/train/range_hood_0111_normalised
+range_hood/train/range_hood_0009_normalised
+range_hood/train/range_hood_0081_normalised
+range_hood/train/range_hood_0058_normalised
+range_hood/train/range_hood_0074_normalised
+range_hood/train/range_hood_0075_normalised
+range_hood/train/range_hood_0099_normalised
+range_hood/train/range_hood_0034_normalised
+range_hood/train/range_hood_0108_normalised
+range_hood/train/range_hood_0106_normalised
+range_hood/train/range_hood_0112_normalised
+range_hood/train/range_hood_0026_normalised
+range_hood/train/range_hood_0023_normalised
+range_hood/train/range_hood_0088_normalised
+range_hood/train/range_hood_0030_normalised
+range_hood/train/range_hood_0057_normalised
+range_hood/train/range_hood_0008_normalised
+range_hood/train/range_hood_0063_normalised
+range_hood/train/range_hood_0068_normalised
+range_hood/train/range_hood_0056_normalised
+range_hood/train/range_hood_0115_normalised
+range_hood/train/range_hood_0084_normalised
+range_hood/train/range_hood_0113_normalised
+range_hood/train/range_hood_0098_normalised
+range_hood/train/range_hood_0067_normalised
+range_hood/train/range_hood_0079_normalised
+range_hood/train/range_hood_0095_normalised
+range_hood/train/range_hood_0047_normalised
+range_hood/train/range_hood_0038_normalised
+range_hood/train/range_hood_0083_normalised
+range_hood/train/range_hood_0045_normalised
+range_hood/train/range_hood_0065_normalised
+range_hood/train/range_hood_0014_normalised
+range_hood/train/range_hood_0043_normalised
+range_hood/train/range_hood_0041_normalised
+range_hood/train/range_hood_0022_normalised
+range_hood/train/range_hood_0064_normalised
+range_hood/train/range_hood_0114_normalised
+range_hood/train/range_hood_0091_normalised
+range_hood/train/range_hood_0087_normalised
+range_hood/train/range_hood_0016_normalised
+range_hood/train/range_hood_0049_normalised
+range_hood/train/range_hood_0072_normalised
+range_hood/train/range_hood_0066_normalised
+range_hood/train/range_hood_0097_normalised
+range_hood/train/range_hood_0051_normalised
+range_hood/train/range_hood_0046_normalised
+range_hood/train/range_hood_0040_normalised
+range_hood/train/range_hood_0054_normalised
+range_hood/train/range_hood_0017_normalised
+range_hood/train/range_hood_0025_normalised
+range_hood/train/range_hood_0048_normalised
+range_hood/train/range_hood_0027_normalised
+range_hood/train/range_hood_0101_normalised
+range_hood/train/range_hood_0052_normalised
+range_hood/train/range_hood_0102_normalised
+range_hood/train/range_hood_0018_normalised
+range_hood/train/range_hood_0060_normalised
+range_hood/train/range_hood_0042_normalised
+range_hood/train/range_hood_0105_normalised
+range_hood/train/range_hood_0080_normalised
+range_hood/train/range_hood_0001_normalised
+range_hood/train/range_hood_0076_normalised
+range_hood/train/range_hood_0107_normalised
+range_hood/train/range_hood_0019_normalised
+range_hood/train/range_hood_0003_normalised
+range_hood/train/range_hood_0035_normalised
+range_hood/train/range_hood_0044_normalised
+range_hood/train/range_hood_0103_normalised
+range_hood/train/range_hood_0104_normalised
+range_hood/train/range_hood_0110_normalised
+range_hood/train/range_hood_0055_normalised
+range_hood/train/range_hood_0031_normalised
+range_hood/train/range_hood_0021_normalised
+range_hood/train/range_hood_0077_normalised
+range_hood/train/range_hood_0037_normalised
+range_hood/train/range_hood_0024_normalised
+range_hood/train/range_hood_0061_normalised
+range_hood/train/range_hood_0013_normalised
+range_hood/train/range_hood_0036_normalised
+range_hood/train/range_hood_0089_normalised
+range_hood/train/range_hood_0053_normalised
+range_hood/train/range_hood_0006_normalised
+range_hood/train/range_hood_0096_normalised
+range_hood/train/range_hood_0020_normalised
+range_hood/train/range_hood_0069_normalised
+range_hood/train/range_hood_0050_normalised
+range_hood/train/range_hood_0039_normalised
+range_hood/train/range_hood_0015_normalised
+range_hood/train/range_hood_0028_normalised
+range_hood/train/range_hood_0011_normalised
+range_hood/train/range_hood_0032_normalised
+range_hood/train/range_hood_0012_normalised
+range_hood/train/range_hood_0090_normalised
+range_hood/train/range_hood_0092_normalised
+range_hood/train/range_hood_0029_normalised
+range_hood/train/range_hood_0033_normalised
+range_hood/train/range_hood_0005_normalised
+range_hood/train/range_hood_0109_normalised
+range_hood/train/range_hood_0093_normalised
+range_hood/test/range_hood_0147_normalised
+range_hood/test/range_hood_0191_normalised
+range_hood/test/range_hood_0177_normalised
+range_hood/test/range_hood_0175_normalised
+range_hood/test/range_hood_0158_normalised
+range_hood/test/range_hood_0199_normalised
+range_hood/test/range_hood_0197_normalised
+range_hood/test/range_hood_0132_normalised
+range_hood/test/range_hood_0195_normalised
+range_hood/test/range_hood_0213_normalised
+range_hood/test/range_hood_0176_normalised
+range_hood/test/range_hood_0211_normalised
+range_hood/test/range_hood_0172_normalised
+range_hood/test/range_hood_0193_normalised
+range_hood/test/range_hood_0187_normalised
+range_hood/test/range_hood_0135_normalised
+range_hood/test/range_hood_0201_normalised
+range_hood/test/range_hood_0128_normalised
+range_hood/test/range_hood_0168_normalised
+range_hood/test/range_hood_0136_normalised
+range_hood/test/range_hood_0162_normalised
+range_hood/test/range_hood_0131_normalised
+range_hood/test/range_hood_0138_normalised
+range_hood/test/range_hood_0208_normalised
+range_hood/test/range_hood_0146_normalised
+range_hood/test/range_hood_0188_normalised
+range_hood/test/range_hood_0167_normalised
+range_hood/test/range_hood_0182_normalised
+range_hood/test/range_hood_0129_normalised
+range_hood/test/range_hood_0181_normalised
+range_hood/test/range_hood_0170_normalised
+range_hood/test/range_hood_0203_normalised
+range_hood/test/range_hood_0123_normalised
+range_hood/test/range_hood_0127_normalised
+range_hood/test/range_hood_0121_normalised
+range_hood/test/range_hood_0154_normalised
+range_hood/test/range_hood_0134_normalised
+range_hood/test/range_hood_0156_normalised
+range_hood/test/range_hood_0185_normalised
+range_hood/test/range_hood_0133_normalised
+range_hood/test/range_hood_0215_normalised
+range_hood/test/range_hood_0173_normalised
+range_hood/test/range_hood_0120_normalised
+range_hood/test/range_hood_0184_normalised
+range_hood/test/range_hood_0148_normalised
+range_hood/test/range_hood_0165_normalised
+range_hood/test/range_hood_0119_normalised
+range_hood/test/range_hood_0166_normalised
+range_hood/test/range_hood_0143_normalised
+range_hood/test/range_hood_0153_normalised
+range_hood/test/range_hood_0152_normalised
+range_hood/test/range_hood_0212_normalised
+range_hood/test/range_hood_0186_normalised
+range_hood/test/range_hood_0137_normalised
+range_hood/test/range_hood_0116_normalised
+range_hood/test/range_hood_0125_normalised
+range_hood/test/range_hood_0141_normalised
+range_hood/test/range_hood_0117_normalised
+range_hood/test/range_hood_0210_normalised
+range_hood/test/range_hood_0163_normalised
+range_hood/test/range_hood_0206_normalised
+range_hood/test/range_hood_0196_normalised
+range_hood/test/range_hood_0161_normalised
+range_hood/test/range_hood_0160_normalised
+range_hood/test/range_hood_0118_normalised
+range_hood/test/range_hood_0150_normalised
+range_hood/test/range_hood_0178_normalised
+range_hood/test/range_hood_0207_normalised
+range_hood/test/range_hood_0122_normalised
+range_hood/test/range_hood_0174_normalised
+range_hood/test/range_hood_0169_normalised
+range_hood/test/range_hood_0204_normalised
+range_hood/test/range_hood_0192_normalised
+range_hood/test/range_hood_0205_normalised
+range_hood/test/range_hood_0202_normalised
+range_hood/test/range_hood_0214_normalised
+range_hood/test/range_hood_0126_normalised
+range_hood/test/range_hood_0183_normalised
+range_hood/test/range_hood_0171_normalised
+range_hood/test/range_hood_0189_normalised
+range_hood/test/range_hood_0124_normalised
+range_hood/test/range_hood_0130_normalised
+range_hood/test/range_hood_0190_normalised
+range_hood/test/range_hood_0140_normalised
+range_hood/test/range_hood_0149_normalised
+range_hood/test/range_hood_0144_normalised
+range_hood/test/range_hood_0200_normalised
+range_hood/test/range_hood_0159_normalised
+range_hood/test/range_hood_0151_normalised
+range_hood/test/range_hood_0198_normalised
+range_hood/test/range_hood_0164_normalised
+range_hood/test/range_hood_0157_normalised
+range_hood/test/range_hood_0142_normalised
+range_hood/test/range_hood_0139_normalised
+range_hood/test/range_hood_0145_normalised
+range_hood/test/range_hood_0180_normalised
+range_hood/test/range_hood_0209_normalised
+range_hood/test/range_hood_0194_normalised
+range_hood/test/range_hood_0179_normalised
+range_hood/test/range_hood_0155_normalised
+curtain/train/curtain_0112_normalised
+curtain/train/curtain_0120_normalised
+curtain/train/curtain_0017_normalised
+curtain/train/curtain_0085_normalised
+curtain/train/curtain_0040_normalised
+curtain/train/curtain_0070_normalised
+curtain/train/curtain_0035_normalised
+curtain/train/curtain_0045_normalised
+curtain/train/curtain_0057_normalised
+curtain/train/curtain_0029_normalised
+curtain/train/curtain_0062_normalised
+curtain/train/curtain_0049_normalised
+curtain/train/curtain_0064_normalised
+curtain/train/curtain_0109_normalised
+curtain/train/curtain_0126_normalised
+curtain/train/curtain_0113_normalised
+curtain/train/curtain_0059_normalised
+curtain/train/curtain_0013_normalised
+curtain/train/curtain_0079_normalised
+curtain/train/curtain_0006_normalised
+curtain/train/curtain_0076_normalised
+curtain/train/curtain_0004_normalised
+curtain/train/curtain_0005_normalised
+curtain/train/curtain_0131_normalised
+curtain/train/curtain_0106_normalised
+curtain/train/curtain_0023_normalised
+curtain/train/curtain_0127_normalised
+curtain/train/curtain_0134_normalised
+curtain/train/curtain_0010_normalised
+curtain/train/curtain_0003_normalised
+curtain/train/curtain_0025_normalised
+curtain/train/curtain_0055_normalised
+curtain/train/curtain_0038_normalised
+curtain/train/curtain_0100_normalised
+curtain/train/curtain_0110_normalised
+curtain/train/curtain_0051_normalised
+curtain/train/curtain_0119_normalised
+curtain/train/curtain_0081_normalised
+curtain/train/curtain_0090_normalised
+curtain/train/curtain_0101_normalised
+curtain/train/curtain_0033_normalised
+curtain/train/curtain_0103_normalised
+curtain/train/curtain_0111_normalised
+curtain/train/curtain_0125_normalised
+curtain/train/curtain_0044_normalised
+curtain/train/curtain_0014_normalised
+curtain/train/curtain_0077_normalised
+curtain/train/curtain_0097_normalised
+curtain/train/curtain_0030_normalised
+curtain/train/curtain_0034_normalised
+curtain/train/curtain_0105_normalised
+curtain/train/curtain_0063_normalised
+curtain/train/curtain_0130_normalised
+curtain/train/curtain_0115_normalised
+curtain/train/curtain_0020_normalised
+curtain/train/curtain_0102_normalised
+curtain/train/curtain_0080_normalised
+curtain/train/curtain_0123_normalised
+curtain/train/curtain_0069_normalised
+curtain/train/curtain_0118_normalised
+curtain/train/curtain_0091_normalised
+curtain/train/curtain_0031_normalised
+curtain/train/curtain_0015_normalised
+curtain/train/curtain_0022_normalised
+curtain/train/curtain_0032_normalised
+curtain/train/curtain_0009_normalised
+curtain/train/curtain_0104_normalised
+curtain/train/curtain_0007_normalised
+curtain/train/curtain_0067_normalised
+curtain/train/curtain_0065_normalised
+curtain/train/curtain_0018_normalised
+curtain/train/curtain_0053_normalised
+curtain/train/curtain_0066_normalised
+curtain/train/curtain_0050_normalised
+curtain/train/curtain_0072_normalised
+curtain/train/curtain_0060_normalised
+curtain/train/curtain_0078_normalised
+curtain/train/curtain_0089_normalised
+curtain/train/curtain_0046_normalised
+curtain/train/curtain_0129_normalised
+curtain/train/curtain_0021_normalised
+curtain/train/curtain_0073_normalised
+curtain/train/curtain_0107_normalised
+curtain/train/curtain_0099_normalised
+curtain/train/curtain_0132_normalised
+curtain/train/curtain_0094_normalised
+curtain/train/curtain_0002_normalised
+curtain/train/curtain_0012_normalised
+curtain/train/curtain_0117_normalised
+curtain/train/curtain_0086_normalised
+curtain/train/curtain_0121_normalised
+curtain/train/curtain_0019_normalised
+curtain/train/curtain_0052_normalised
+curtain/train/curtain_0028_normalised
+curtain/train/curtain_0037_normalised
+curtain/train/curtain_0092_normalised
+curtain/train/curtain_0088_normalised
+curtain/train/curtain_0137_normalised
+curtain/train/curtain_0133_normalised
+curtain/train/curtain_0096_normalised
+curtain/train/curtain_0054_normalised
+curtain/train/curtain_0047_normalised
+curtain/train/curtain_0136_normalised
+curtain/train/curtain_0016_normalised
+curtain/train/curtain_0128_normalised
+curtain/train/curtain_0042_normalised
+curtain/train/curtain_0056_normalised
+curtain/train/curtain_0108_normalised
+curtain/train/curtain_0093_normalised
+curtain/train/curtain_0026_normalised
+curtain/train/curtain_0043_normalised
+curtain/train/curtain_0074_normalised
+curtain/train/curtain_0082_normalised
+curtain/train/curtain_0061_normalised
+curtain/train/curtain_0122_normalised
+curtain/train/curtain_0116_normalised
+curtain/train/curtain_0027_normalised
+curtain/train/curtain_0084_normalised
+curtain/train/curtain_0068_normalised
+curtain/train/curtain_0024_normalised
+curtain/train/curtain_0001_normalised
+curtain/train/curtain_0058_normalised
+curtain/train/curtain_0087_normalised
+curtain/train/curtain_0039_normalised
+curtain/train/curtain_0008_normalised
+curtain/train/curtain_0124_normalised
+curtain/train/curtain_0071_normalised
+curtain/train/curtain_0048_normalised
+curtain/train/curtain_0011_normalised
+curtain/train/curtain_0098_normalised
+curtain/train/curtain_0135_normalised
+curtain/train/curtain_0075_normalised
+curtain/train/curtain_0041_normalised
+curtain/train/curtain_0114_normalised
+curtain/train/curtain_0083_normalised
+curtain/train/curtain_0138_normalised
+curtain/train/curtain_0095_normalised
+curtain/train/curtain_0036_normalised
+curtain/test/curtain_0155_normalised
+curtain/test/curtain_0148_normalised
+curtain/test/curtain_0147_normalised
+curtain/test/curtain_0156_normalised
+curtain/test/curtain_0151_normalised
+curtain/test/curtain_0145_normalised
+curtain/test/curtain_0140_normalised
+curtain/test/curtain_0139_normalised
+curtain/test/curtain_0146_normalised
+curtain/test/curtain_0157_normalised
+curtain/test/curtain_0149_normalised
+curtain/test/curtain_0150_normalised
+curtain/test/curtain_0158_normalised
+curtain/test/curtain_0142_normalised
+curtain/test/curtain_0143_normalised
+curtain/test/curtain_0152_normalised
+curtain/test/curtain_0144_normalised
+curtain/test/curtain_0141_normalised
+curtain/test/curtain_0153_normalised
+curtain/test/curtain_0154_normalised
+lamp/train/lamp_0080_normalised
+lamp/train/lamp_0097_normalised
+lamp/train/lamp_0003_normalised
+lamp/train/lamp_0067_normalised
+lamp/train/lamp_0064_normalised
+lamp/train/lamp_0096_normalised
+lamp/train/lamp_0086_normalised
+lamp/train/lamp_0025_normalised
+lamp/train/lamp_0010_normalised
+lamp/train/lamp_0069_normalised
+lamp/train/lamp_0021_normalised
+lamp/train/lamp_0081_normalised
+lamp/train/lamp_0065_normalised
+lamp/train/lamp_0014_normalised
+lamp/train/lamp_0050_normalised
+lamp/train/lamp_0005_normalised
+lamp/train/lamp_0049_normalised
+lamp/train/lamp_0104_normalised
+lamp/train/lamp_0115_normalised
+lamp/train/lamp_0071_normalised
+lamp/train/lamp_0002_normalised
+lamp/train/lamp_0092_normalised
+lamp/train/lamp_0118_normalised
+lamp/train/lamp_0026_normalised
+lamp/train/lamp_0033_normalised
+lamp/train/lamp_0121_normalised
+lamp/train/lamp_0023_normalised
+lamp/train/lamp_0112_normalised
+lamp/train/lamp_0113_normalised
+lamp/train/lamp_0108_normalised
+lamp/train/lamp_0011_normalised
+lamp/train/lamp_0109_normalised
+lamp/train/lamp_0004_normalised
+lamp/train/lamp_0106_normalised
+lamp/train/lamp_0060_normalised
+lamp/train/lamp_0123_normalised
+lamp/train/lamp_0043_normalised
+lamp/train/lamp_0099_normalised
+lamp/train/lamp_0034_normalised
+lamp/train/lamp_0012_normalised
+lamp/train/lamp_0070_normalised
+lamp/train/lamp_0039_normalised
+lamp/train/lamp_0101_normalised
+lamp/train/lamp_0015_normalised
+lamp/train/lamp_0045_normalised
+lamp/train/lamp_0020_normalised
+lamp/train/lamp_0105_normalised
+lamp/train/lamp_0051_normalised
+lamp/train/lamp_0055_normalised
+lamp/train/lamp_0124_normalised
+lamp/train/lamp_0075_normalised
+lamp/train/lamp_0040_normalised
+lamp/train/lamp_0046_normalised
+lamp/train/lamp_0114_normalised
+lamp/train/lamp_0116_normalised
+lamp/train/lamp_0052_normalised
+lamp/train/lamp_0035_normalised
+lamp/train/lamp_0077_normalised
+lamp/train/lamp_0062_normalised
+lamp/train/lamp_0042_normalised
+lamp/train/lamp_0009_normalised
+lamp/train/lamp_0074_normalised
+lamp/train/lamp_0028_normalised
+lamp/train/lamp_0054_normalised
+lamp/train/lamp_0122_normalised
+lamp/train/lamp_0044_normalised
+lamp/train/lamp_0036_normalised
+lamp/train/lamp_0102_normalised
+lamp/train/lamp_0001_normalised
+lamp/train/lamp_0037_normalised
+lamp/train/lamp_0117_normalised
+lamp/train/lamp_0018_normalised
+lamp/train/lamp_0022_normalised
+lamp/train/lamp_0017_normalised
+lamp/train/lamp_0058_normalised
+lamp/train/lamp_0119_normalised
+lamp/train/lamp_0076_normalised
+lamp/train/lamp_0082_normalised
+lamp/train/lamp_0007_normalised
+lamp/train/lamp_0029_normalised
+lamp/train/lamp_0041_normalised
+lamp/train/lamp_0024_normalised
+lamp/train/lamp_0089_normalised
+lamp/train/lamp_0061_normalised
+lamp/train/lamp_0031_normalised
+lamp/train/lamp_0059_normalised
+lamp/train/lamp_0088_normalised
+lamp/train/lamp_0006_normalised
+lamp/train/lamp_0120_normalised
+lamp/train/lamp_0072_normalised
+lamp/train/lamp_0016_normalised
+lamp/train/lamp_0053_normalised
+lamp/train/lamp_0079_normalised
+lamp/train/lamp_0093_normalised
+lamp/train/lamp_0063_normalised
+lamp/train/lamp_0103_normalised
+lamp/train/lamp_0056_normalised
+lamp/train/lamp_0094_normalised
+lamp/train/lamp_0090_normalised
+lamp/train/lamp_0048_normalised
+lamp/train/lamp_0066_normalised
+lamp/train/lamp_0057_normalised
+lamp/train/lamp_0068_normalised
+lamp/train/lamp_0084_normalised
+lamp/train/lamp_0110_normalised
+lamp/train/lamp_0100_normalised
+lamp/train/lamp_0013_normalised
+lamp/train/lamp_0008_normalised
+lamp/train/lamp_0030_normalised
+lamp/train/lamp_0107_normalised
+lamp/train/lamp_0047_normalised
+lamp/train/lamp_0073_normalised
+lamp/train/lamp_0038_normalised
+lamp/train/lamp_0095_normalised
+lamp/train/lamp_0019_normalised
+lamp/train/lamp_0087_normalised
+lamp/train/lamp_0027_normalised
+lamp/train/lamp_0098_normalised
+lamp/train/lamp_0078_normalised
+lamp/train/lamp_0111_normalised
+lamp/train/lamp_0085_normalised
+lamp/train/lamp_0032_normalised
+lamp/train/lamp_0083_normalised
+lamp/train/lamp_0091_normalised
+lamp/test/lamp_0139_normalised
+lamp/test/lamp_0137_normalised
+lamp/test/lamp_0141_normalised
+lamp/test/lamp_0128_normalised
+lamp/test/lamp_0135_normalised
+lamp/test/lamp_0136_normalised
+lamp/test/lamp_0129_normalised
+lamp/test/lamp_0130_normalised
+lamp/test/lamp_0134_normalised
+lamp/test/lamp_0125_normalised
+lamp/test/lamp_0133_normalised
+lamp/test/lamp_0143_normalised
+lamp/test/lamp_0132_normalised
+lamp/test/lamp_0140_normalised
+lamp/test/lamp_0127_normalised
+lamp/test/lamp_0138_normalised
+lamp/test/lamp_0131_normalised
+lamp/test/lamp_0126_normalised
+lamp/test/lamp_0142_normalised
+lamp/test/lamp_0144_normalised
+sofa/train/sofa_0524_normalised
+sofa/train/sofa_0266_normalised
+sofa/train/sofa_0231_normalised
+sofa/train/sofa_0603_normalised
+sofa/train/sofa_0213_normalised
+sofa/train/sofa_0302_normalised
+sofa/train/sofa_0363_normalised
+sofa/train/sofa_0321_normalised
+sofa/train/sofa_0250_normalised
+sofa/train/sofa_0580_normalised
+sofa/train/sofa_0500_normalised
+sofa/train/sofa_0598_normalised
+sofa/train/sofa_0254_normalised
+sofa/train/sofa_0138_normalised
+sofa/train/sofa_0563_normalised
+sofa/train/sofa_0523_normalised
+sofa/train/sofa_0463_normalised
+sofa/train/sofa_0480_normalised
+sofa/train/sofa_0495_normalised
+sofa/train/sofa_0600_normalised
+sofa/train/sofa_0605_normalised
+sofa/train/sofa_0537_normalised
+sofa/train/sofa_0064_normalised
+sofa/train/sofa_0437_normalised
+sofa/train/sofa_0140_normalised
+sofa/train/sofa_0207_normalised
+sofa/train/sofa_0271_normalised
+sofa/train/sofa_0420_normalised
+sofa/train/sofa_0583_normalised
+sofa/train/sofa_0101_normalised
+sofa/train/sofa_0335_normalised
+sofa/train/sofa_0072_normalised
+sofa/train/sofa_0385_normalised
+sofa/train/sofa_0134_normalised
+sofa/train/sofa_0499_normalised
+sofa/train/sofa_0431_normalised
+sofa/train/sofa_0505_normalised
+sofa/train/sofa_0105_normalised
+sofa/train/sofa_0085_normalised
+sofa/train/sofa_0533_normalised
+sofa/train/sofa_0285_normalised
+sofa/train/sofa_0208_normalised
+sofa/train/sofa_0453_normalised
+sofa/train/sofa_0538_normalised
+sofa/train/sofa_0375_normalised
+sofa/train/sofa_0651_normalised
+sofa/train/sofa_0123_normalised
+sofa/train/sofa_0568_normalised
+sofa/train/sofa_0345_normalised
+sofa/train/sofa_0159_normalised
+sofa/train/sofa_0104_normalised
+sofa/train/sofa_0057_normalised
+sofa/train/sofa_0676_normalised
+sofa/train/sofa_0026_normalised
+sofa/train/sofa_0680_normalised
+sofa/train/sofa_0476_normalised
+sofa/train/sofa_0395_normalised
+sofa/train/sofa_0181_normalised
+sofa/train/sofa_0392_normalised
+sofa/train/sofa_0263_normalised
+sofa/train/sofa_0403_normalised
+sofa/train/sofa_0016_normalised
+sofa/train/sofa_0434_normalised
+sofa/train/sofa_0402_normalised
+sofa/train/sofa_0135_normalised
+sofa/train/sofa_0358_normalised
+sofa/train/sofa_0655_normalised
+sofa/train/sofa_0005_normalised
+sofa/train/sofa_0577_normalised
+sofa/train/sofa_0474_normalised
+sofa/train/sofa_0338_normalised
+sofa/train/sofa_0118_normalised
+sofa/train/sofa_0667_normalised
+sofa/train/sofa_0212_normalised
+sofa/train/sofa_0449_normalised
+sofa/train/sofa_0226_normalised
+sofa/train/sofa_0107_normalised
+sofa/train/sofa_0171_normalised
+sofa/train/sofa_0289_normalised
+sofa/train/sofa_0306_normalised
+sofa/train/sofa_0531_normalised
+sofa/train/sofa_0184_normalised
+sofa/train/sofa_0498_normalised
+sofa/train/sofa_0071_normalised
+sofa/train/sofa_0004_normalised
+sofa/train/sofa_0478_normalised
+sofa/train/sofa_0633_normalised
+sofa/train/sofa_0574_normalised
+sofa/train/sofa_0415_normalised
+sofa/train/sofa_0643_normalised
+sofa/train/sofa_0006_normalised
+sofa/train/sofa_0047_normalised
+sofa/train/sofa_0336_normalised
+sofa/train/sofa_0330_normalised
+sofa/train/sofa_0548_normalised
+sofa/train/sofa_0187_normalised
+sofa/train/sofa_0354_normalised
+sofa/train/sofa_0236_normalised
+sofa/train/sofa_0353_normalised
+sofa/train/sofa_0562_normalised
+sofa/train/sofa_0086_normalised
+sofa/train/sofa_0364_normalised
+sofa/train/sofa_0074_normalised
+sofa/train/sofa_0111_normalised
+sofa/train/sofa_0219_normalised
+sofa/train/sofa_0002_normalised
+sofa/train/sofa_0240_normalised
+sofa/train/sofa_0235_normalised
+sofa/train/sofa_0220_normalised
+sofa/train/sofa_0146_normalised
+sofa/train/sofa_0648_normalised
+sofa/train/sofa_0114_normalised
+sofa/train/sofa_0261_normalised
+sofa/train/sofa_0397_normalised
+sofa/train/sofa_0625_normalised
+sofa/train/sofa_0435_normalised
+sofa/train/sofa_0063_normalised
+sofa/train/sofa_0637_normalised
+sofa/train/sofa_0339_normalised
+sofa/train/sofa_0060_normalised
+sofa/train/sofa_0329_normalised
+sofa/train/sofa_0148_normalised
+sofa/train/sofa_0630_normalised
+sofa/train/sofa_0645_normalised
+sofa/train/sofa_0209_normalised
+sofa/train/sofa_0416_normalised
+sofa/train/sofa_0546_normalised
+sofa/train/sofa_0445_normalised
+sofa/train/sofa_0594_normalised
+sofa/train/sofa_0305_normalised
+sofa/train/sofa_0639_normalised
+sofa/train/sofa_0507_normalised
+sofa/train/sofa_0555_normalised
+sofa/train/sofa_0422_normalised
+sofa/train/sofa_0620_normalised
+sofa/train/sofa_0539_normalised
+sofa/train/sofa_0659_normalised
+sofa/train/sofa_0334_normalised
+sofa/train/sofa_0485_normalised
+sofa/train/sofa_0188_normalised
+sofa/train/sofa_0356_normalised
+sofa/train/sofa_0095_normalised
+sofa/train/sofa_0242_normalised
+sofa/train/sofa_0526_normalised
+sofa/train/sofa_0227_normalised
+sofa/train/sofa_0357_normalised
+sofa/train/sofa_0052_normalised
+sofa/train/sofa_0039_normalised
+sofa/train/sofa_0493_normalised
+sofa/train/sofa_0458_normalised
+sofa/train/sofa_0679_normalised
+sofa/train/sofa_0650_normalised
+sofa/train/sofa_0253_normalised
+sofa/train/sofa_0588_normalised
+sofa/train/sofa_0021_normalised
+sofa/train/sofa_0670_normalised
+sofa/train/sofa_0618_normalised
+sofa/train/sofa_0328_normalised
+sofa/train/sofa_0280_normalised
+sofa/train/sofa_0319_normalised
+sofa/train/sofa_0121_normalised
+sofa/train/sofa_0178_normalised
+sofa/train/sofa_0582_normalised
+sofa/train/sofa_0668_normalised
+sofa/train/sofa_0264_normalised
+sofa/train/sofa_0126_normalised
+sofa/train/sofa_0469_normalised
+sofa/train/sofa_0077_normalised
+sofa/train/sofa_0491_normalised
+sofa/train/sofa_0003_normalised
+sofa/train/sofa_0542_normalised
+sofa/train/sofa_0438_normalised
+sofa/train/sofa_0108_normalised
+sofa/train/sofa_0520_normalised
+sofa/train/sofa_0015_normalised
+sofa/train/sofa_0406_normalised
+sofa/train/sofa_0619_normalised
+sofa/train/sofa_0366_normalised
+sofa/train/sofa_0087_normalised
+sofa/train/sofa_0565_normalised
+sofa/train/sofa_0622_normalised
+sofa/train/sofa_0534_normalised
+sofa/train/sofa_0599_normalised
+sofa/train/sofa_0048_normalised
+sofa/train/sofa_0669_normalised
+sofa/train/sofa_0545_normalised
+sofa/train/sofa_0607_normalised
+sofa/train/sofa_0117_normalised
+sofa/train/sofa_0233_normalised
+sofa/train/sofa_0200_normalised
+sofa/train/sofa_0251_normalised
+sofa/train/sofa_0125_normalised
+sofa/train/sofa_0404_normalised
+sofa/train/sofa_0094_normalised
+sofa/train/sofa_0008_normalised
+sofa/train/sofa_0410_normalised
+sofa/train/sofa_0165_normalised
+sofa/train/sofa_0279_normalised
+sofa/train/sofa_0372_normalised
+sofa/train/sofa_0059_normalised
+sofa/train/sofa_0230_normalised
+sofa/train/sofa_0528_normalised
+sofa/train/sofa_0036_normalised
+sofa/train/sofa_0567_normalised
+sofa/train/sofa_0274_normalised
+sofa/train/sofa_0082_normalised
+sofa/train/sofa_0061_normalised
+sofa/train/sofa_0044_normalised
+sofa/train/sofa_0023_normalised
+sofa/train/sofa_0423_normalised
+sofa/train/sofa_0647_normalised
+sofa/train/sofa_0483_normalised
+sofa/train/sofa_0326_normalised
+sofa/train/sofa_0624_normalised
+sofa/train/sofa_0193_normalised
+sofa/train/sofa_0374_normalised
+sofa/train/sofa_0183_normalised
+sofa/train/sofa_0443_normalised
+sofa/train/sofa_0065_normalised
+sofa/train/sofa_0079_normalised
+sofa/train/sofa_0459_normalised
+sofa/train/sofa_0020_normalised
+sofa/train/sofa_0387_normalised
+sofa/train/sofa_0382_normalised
+sofa/train/sofa_0653_normalised
+sofa/train/sofa_0166_normalised
+sofa/train/sofa_0649_normalised
+sofa/train/sofa_0391_normalised
+sofa/train/sofa_0228_normalised
+sofa/train/sofa_0269_normalised
+sofa/train/sofa_0216_normalised
+sofa/train/sofa_0475_normalised
+sofa/train/sofa_0652_normalised
+sofa/train/sofa_0572_normalised
+sofa/train/sofa_0056_normalised
+sofa/train/sofa_0656_normalised
+sofa/train/sofa_0465_normalised
+sofa/train/sofa_0013_normalised
+sofa/train/sofa_0284_normalised
+sofa/train/sofa_0073_normalised
+sofa/train/sofa_0189_normalised
+sofa/train/sofa_0031_normalised
+sofa/train/sofa_0610_normalised
+sofa/train/sofa_0303_normalised
+sofa/train/sofa_0540_normalised
+sofa/train/sofa_0185_normalised
+sofa/train/sofa_0393_normalised
+sofa/train/sofa_0448_normalised
+sofa/train/sofa_0578_normalised
+sofa/train/sofa_0130_normalised
+sofa/train/sofa_0611_normalised
+sofa/train/sofa_0143_normalised
+sofa/train/sofa_0541_normalised
+sofa/train/sofa_0218_normalised
+sofa/train/sofa_0313_normalised
+sofa/train/sofa_0509_normalised
+sofa/train/sofa_0199_normalised
+sofa/train/sofa_0139_normalised
+sofa/train/sofa_0232_normalised
+sofa/train/sofa_0112_normalised
+sofa/train/sofa_0055_normalised
+sofa/train/sofa_0262_normalised
+sofa/train/sofa_0592_normalised
+sofa/train/sofa_0311_normalised
+sofa/train/sofa_0037_normalised
+sofa/train/sofa_0497_normalised
+sofa/train/sofa_0151_normalised
+sofa/train/sofa_0535_normalised
+sofa/train/sofa_0191_normalised
+sofa/train/sofa_0051_normalised
+sofa/train/sofa_0482_normalised
+sofa/train/sofa_0045_normalised
+sofa/train/sofa_0040_normalised
+sofa/train/sofa_0247_normalised
+sofa/train/sofa_0342_normalised
+sofa/train/sofa_0341_normalised
+sofa/train/sofa_0672_normalised
+sofa/train/sofa_0384_normalised
+sofa/train/sofa_0564_normalised
+sofa/train/sofa_0323_normalised
+sofa/train/sofa_0286_normalised
+sofa/train/sofa_0029_normalised
+sofa/train/sofa_0355_normalised
+sofa/train/sofa_0514_normalised
+sofa/train/sofa_0456_normalised
+sofa/train/sofa_0506_normalised
+sofa/train/sofa_0025_normalised
+sofa/train/sofa_0246_normalised
+sofa/train/sofa_0634_normalised
+sofa/train/sofa_0440_normalised
+sofa/train/sofa_0383_normalised
+sofa/train/sofa_0359_normalised
+sofa/train/sofa_0141_normalised
+sofa/train/sofa_0642_normalised
+sofa/train/sofa_0549_normalised
+sofa/train/sofa_0615_normalised
+sofa/train/sofa_0129_normalised
+sofa/train/sofa_0237_normalised
+sofa/train/sofa_0333_normalised
+sofa/train/sofa_0593_normalised
+sofa/train/sofa_0462_normalised
+sofa/train/sofa_0373_normalised
+sofa/train/sofa_0490_normalised
+sofa/train/sofa_0277_normalised
+sofa/train/sofa_0194_normalised
+sofa/train/sofa_0602_normalised
+sofa/train/sofa_0290_normalised
+sofa/train/sofa_0217_normalised
+sofa/train/sofa_0124_normalised
+sofa/train/sofa_0042_normalised
+sofa/train/sofa_0252_normalised
+sofa/train/sofa_0612_normalised
+sofa/train/sofa_0557_normalised
+sofa/train/sofa_0584_normalised
+sofa/train/sofa_0314_normalised
+sofa/train/sofa_0152_normalised
+sofa/train/sofa_0024_normalised
+sofa/train/sofa_0128_normalised
+sofa/train/sofa_0674_normalised
+sofa/train/sofa_0346_normalised
+sofa/train/sofa_0399_normalised
+sofa/train/sofa_0489_normalised
+sofa/train/sofa_0267_normalised
+sofa/train/sofa_0521_normalised
+sofa/train/sofa_0309_normalised
+sofa/train/sofa_0405_normalised
+sofa/train/sofa_0283_normalised
+sofa/train/sofa_0433_normalised
+sofa/train/sofa_0481_normalised
+sofa/train/sofa_0089_normalised
+sofa/train/sofa_0041_normalised
+sofa/train/sofa_0110_normalised
+sofa/train/sofa_0627_normalised
+sofa/train/sofa_0424_normalised
+sofa/train/sofa_0102_normalised
+sofa/train/sofa_0075_normalised
+sofa/train/sofa_0398_normalised
+sofa/train/sofa_0512_normalised
+sofa/train/sofa_0278_normalised
+sofa/train/sofa_0367_normalised
+sofa/train/sofa_0062_normalised
+sofa/train/sofa_0461_normalised
+sofa/train/sofa_0205_normalised
+sofa/train/sofa_0394_normalised
+sofa/train/sofa_0202_normalised
+sofa/train/sofa_0249_normalised
+sofa/train/sofa_0517_normalised
+sofa/train/sofa_0597_normalised
+sofa/train/sofa_0487_normalised
+sofa/train/sofa_0457_normalised
+sofa/train/sofa_0030_normalised
+sofa/train/sofa_0093_normalised
+sofa/train/sofa_0631_normalised
+sofa/train/sofa_0477_normalised
+sofa/train/sofa_0301_normalised
+sofa/train/sofa_0516_normalised
+sofa/train/sofa_0617_normalised
+sofa/train/sofa_0378_normalised
+sofa/train/sofa_0273_normalised
+sofa/train/sofa_0221_normalised
+sofa/train/sofa_0147_normalised
+sofa/train/sofa_0558_normalised
+sofa/train/sofa_0629_normalised
+sofa/train/sofa_0070_normalised
+sofa/train/sofa_0590_normalised
+sofa/train/sofa_0100_normalised
+sofa/train/sofa_0408_normalised
+sofa/train/sofa_0352_normalised
+sofa/train/sofa_0197_normalised
+sofa/train/sofa_0662_normalised
+sofa/train/sofa_0310_normalised
+sofa/train/sofa_0164_normalised
+sofa/train/sofa_0362_normalised
+sofa/train/sofa_0360_normalised
+sofa/train/sofa_0451_normalised
+sofa/train/sofa_0131_normalised
+sofa/train/sofa_0376_normalised
+sofa/train/sofa_0556_normalised
+sofa/train/sofa_0587_normalised
+sofa/train/sofa_0413_normalised
+sofa/train/sofa_0348_normalised
+sofa/train/sofa_0054_normalised
+sofa/train/sofa_0017_normalised
+sofa/train/sofa_0479_normalised
+sofa/train/sofa_0460_normalised
+sofa/train/sofa_0494_normalised
+sofa/train/sofa_0179_normalised
+sofa/train/sofa_0613_normalised
+sofa/train/sofa_0419_normalised
+sofa/train/sofa_0503_normalised
+sofa/train/sofa_0007_normalised
+sofa/train/sofa_0661_normalised
+sofa/train/sofa_0340_normalised
+sofa/train/sofa_0081_normalised
+sofa/train/sofa_0349_normalised
+sofa/train/sofa_0604_normalised
+sofa/train/sofa_0043_normalised
+sofa/train/sofa_0665_normalised
+sofa/train/sofa_0069_normalised
+sofa/train/sofa_0088_normalised
+sofa/train/sofa_0400_normalised
+sofa/train/sofa_0484_normalised
+sofa/train/sofa_0522_normalised
+sofa/train/sofa_0170_normalised
+sofa/train/sofa_0255_normalised
+sofa/train/sofa_0673_normalised
+sofa/train/sofa_0272_normalised
+sofa/train/sofa_0421_normalised
+sofa/train/sofa_0259_normalised
+sofa/train/sofa_0174_normalised
+sofa/train/sofa_0244_normalised
+sofa/train/sofa_0436_normalised
+sofa/train/sofa_0426_normalised
+sofa/train/sofa_0473_normalised
+sofa/train/sofa_0163_normalised
+sofa/train/sofa_0215_normalised
+sofa/train/sofa_0245_normalised
+sofa/train/sofa_0192_normalised
+sofa/train/sofa_0257_normalised
+sofa/train/sofa_0173_normalised
+sofa/train/sofa_0229_normalised
+sofa/train/sofa_0115_normalised
+sofa/train/sofa_0103_normalised
+sofa/train/sofa_0586_normalised
+sofa/train/sofa_0097_normalised
+sofa/train/sofa_0204_normalised
+sofa/train/sofa_0132_normalised
+sofa/train/sofa_0621_normalised
+sofa/train/sofa_0136_normalised
+sofa/train/sofa_0109_normalised
+sofa/train/sofa_0053_normalised
+sofa/train/sofa_0282_normalised
+sofa/train/sofa_0569_normalised
+sofa/train/sofa_0502_normalised
+sofa/train/sofa_0452_normalised
+sofa/train/sofa_0033_normalised
+sofa/train/sofa_0084_normalised
+sofa/train/sofa_0401_normalised
+sofa/train/sofa_0609_normalised
+sofa/train/sofa_0046_normalised
+sofa/train/sofa_0090_normalised
+sofa/train/sofa_0315_normalised
+sofa/train/sofa_0122_normalised
+sofa/train/sofa_0468_normalised
+sofa/train/sofa_0447_normalised
+sofa/train/sofa_0022_normalised
+sofa/train/sofa_0012_normalised
+sofa/train/sofa_0585_normalised
+sofa/train/sofa_0488_normalised
+sofa/train/sofa_0646_normalised
+sofa/train/sofa_0552_normalised
+sofa/train/sofa_0028_normalised
+sofa/train/sofa_0496_normalised
+sofa/train/sofa_0096_normalised
+sofa/train/sofa_0144_normalised
+sofa/train/sofa_0471_normalised
+sofa/train/sofa_0553_normalised
+sofa/train/sofa_0579_normalised
+sofa/train/sofa_0331_normalised
+sofa/train/sofa_0145_normalised
+sofa/train/sofa_0492_normalised
+sofa/train/sofa_0455_normalised
+sofa/train/sofa_0176_normalised
+sofa/train/sofa_0570_normalised
+sofa/train/sofa_0172_normalised
+sofa/train/sofa_0317_normalised
+sofa/train/sofa_0550_normalised
+sofa/train/sofa_0304_normalised
+sofa/train/sofa_0291_normalised
+sofa/train/sofa_0377_normalised
+sofa/train/sofa_0196_normalised
+sofa/train/sofa_0238_normalised
+sofa/train/sofa_0616_normalised
+sofa/train/sofa_0657_normalised
+sofa/train/sofa_0265_normalised
+sofa/train/sofa_0091_normalised
+sofa/train/sofa_0160_normalised
+sofa/train/sofa_0596_normalised
+sofa/train/sofa_0295_normalised
+sofa/train/sofa_0268_normalised
+sofa/train/sofa_0581_normalised
+sofa/train/sofa_0554_normalised
+sofa/train/sofa_0677_normalised
+sofa/train/sofa_0560_normalised
+sofa/train/sofa_0472_normalised
+sofa/train/sofa_0142_normalised
+sofa/train/sofa_0664_normalised
+sofa/train/sofa_0010_normalised
+sofa/train/sofa_0425_normalised
+sofa/train/sofa_0258_normalised
+sofa/train/sofa_0429_normalised
+sofa/train/sofa_0396_normalised
+sofa/train/sofa_0066_normalised
+sofa/train/sofa_0276_normalised
+sofa/train/sofa_0660_normalised
+sofa/train/sofa_0351_normalised
+sofa/train/sofa_0466_normalised
+sofa/train/sofa_0098_normalised
+sofa/train/sofa_0544_normalised
+sofa/train/sofa_0663_normalised
+sofa/train/sofa_0626_normalised
+sofa/train/sofa_0380_normalised
+sofa/train/sofa_0379_normalised
+sofa/train/sofa_0292_normalised
+sofa/train/sofa_0195_normalised
+sofa/train/sofa_0641_normalised
+sofa/train/sofa_0049_normalised
+sofa/train/sofa_0606_normalised
+sofa/train/sofa_0644_normalised
+sofa/train/sofa_0133_normalised
+sofa/train/sofa_0322_normalised
+sofa/train/sofa_0636_normalised
+sofa/train/sofa_0337_normalised
+sofa/train/sofa_0409_normalised
+sofa/train/sofa_0654_normalised
+sofa/train/sofa_0324_normalised
+sofa/train/sofa_0318_normalised
+sofa/train/sofa_0518_normalised
+sofa/train/sofa_0470_normalised
+sofa/train/sofa_0510_normalised
+sofa/train/sofa_0666_normalised
+sofa/train/sofa_0344_normalised
+sofa/train/sofa_0224_normalised
+sofa/train/sofa_0325_normalised
+sofa/train/sofa_0464_normalised
+sofa/train/sofa_0083_normalised
+sofa/train/sofa_0559_normalised
+sofa/train/sofa_0038_normalised
+sofa/train/sofa_0206_normalised
+sofa/train/sofa_0106_normalised
+sofa/train/sofa_0001_normalised
+sofa/train/sofa_0671_normalised
+sofa/train/sofa_0361_normalised
+sofa/train/sofa_0035_normalised
+sofa/train/sofa_0256_normalised
+sofa/train/sofa_0407_normalised
+sofa/train/sofa_0180_normalised
+sofa/train/sofa_0234_normalised
+sofa/train/sofa_0628_normalised
+sofa/train/sofa_0561_normalised
+sofa/train/sofa_0508_normalised
+sofa/train/sofa_0211_normalised
+sofa/train/sofa_0511_normalised
+sofa/train/sofa_0248_normalised
+sofa/train/sofa_0299_normalised
+sofa/train/sofa_0504_normalised
+sofa/train/sofa_0182_normalised
+sofa/train/sofa_0446_normalised
+sofa/train/sofa_0149_normalised
+sofa/train/sofa_0225_normalised
+sofa/train/sofa_0638_normalised
+sofa/train/sofa_0412_normalised
+sofa/train/sofa_0536_normalised
+sofa/train/sofa_0177_normalised
+sofa/train/sofa_0241_normalised
+sofa/train/sofa_0127_normalised
+sofa/train/sofa_0203_normalised
+sofa/train/sofa_0614_normalised
+sofa/train/sofa_0078_normalised
+sofa/train/sofa_0427_normalised
+sofa/train/sofa_0167_normalised
+sofa/train/sofa_0566_normalised
+sofa/train/sofa_0525_normalised
+sofa/train/sofa_0486_normalised
+sofa/train/sofa_0009_normalised
+sofa/train/sofa_0386_normalised
+sofa/train/sofa_0595_normalised
+sofa/train/sofa_0067_normalised
+sofa/train/sofa_0608_normalised
+sofa/train/sofa_0370_normalised
+sofa/train/sofa_0576_normalised
+sofa/train/sofa_0307_normalised
+sofa/train/sofa_0034_normalised
+sofa/train/sofa_0371_normalised
+sofa/train/sofa_0388_normalised
+sofa/train/sofa_0210_normalised
+sofa/train/sofa_0418_normalised
+sofa/train/sofa_0092_normalised
+sofa/train/sofa_0300_normalised
+sofa/train/sofa_0260_normalised
+sofa/train/sofa_0454_normalised
+sofa/train/sofa_0365_normalised
+sofa/train/sofa_0154_normalised
+sofa/train/sofa_0529_normalised
+sofa/train/sofa_0158_normalised
+sofa/train/sofa_0501_normalised
+sofa/train/sofa_0519_normalised
+sofa/train/sofa_0411_normalised
+sofa/train/sofa_0543_normalised
+sofa/train/sofa_0186_normalised
+sofa/train/sofa_0113_normalised
+sofa/train/sofa_0287_normalised
+sofa/train/sofa_0678_normalised
+sofa/train/sofa_0120_normalised
+sofa/train/sofa_0050_normalised
+sofa/train/sofa_0417_normalised
+sofa/train/sofa_0532_normalised
+sofa/train/sofa_0281_normalised
+sofa/train/sofa_0018_normalised
+sofa/train/sofa_0161_normalised
+sofa/train/sofa_0601_normalised
+sofa/train/sofa_0442_normalised
+sofa/train/sofa_0347_normalised
+sofa/train/sofa_0156_normalised
+sofa/train/sofa_0175_normalised
+sofa/train/sofa_0573_normalised
+sofa/train/sofa_0116_normalised
+sofa/train/sofa_0513_normalised
+sofa/train/sofa_0428_normalised
+sofa/train/sofa_0439_normalised
+sofa/train/sofa_0623_normalised
+sofa/train/sofa_0011_normalised
+sofa/train/sofa_0239_normalised
+sofa/train/sofa_0275_normalised
+sofa/train/sofa_0288_normalised
+sofa/train/sofa_0589_normalised
+sofa/train/sofa_0293_normalised
+sofa/train/sofa_0308_normalised
+sofa/train/sofa_0223_normalised
+sofa/train/sofa_0270_normalised
+sofa/train/sofa_0389_normalised
+sofa/train/sofa_0316_normalised
+sofa/train/sofa_0153_normalised
+sofa/train/sofa_0530_normalised
+sofa/train/sofa_0201_normalised
+sofa/train/sofa_0327_normalised
+sofa/train/sofa_0169_normalised
+sofa/train/sofa_0591_normalised
+sofa/train/sofa_0441_normalised
+sofa/train/sofa_0320_normalised
+sofa/train/sofa_0168_normalised
+sofa/train/sofa_0551_normalised
+sofa/train/sofa_0155_normalised
+sofa/train/sofa_0332_normalised
+sofa/train/sofa_0150_normalised
+sofa/train/sofa_0369_normalised
+sofa/train/sofa_0467_normalised
+sofa/train/sofa_0019_normalised
+sofa/train/sofa_0032_normalised
+sofa/train/sofa_0222_normalised
+sofa/train/sofa_0527_normalised
+sofa/train/sofa_0635_normalised
+sofa/train/sofa_0430_normalised
+sofa/train/sofa_0444_normalised
+sofa/train/sofa_0190_normalised
+sofa/train/sofa_0099_normalised
+sofa/train/sofa_0632_normalised
+sofa/train/sofa_0571_normalised
+sofa/train/sofa_0137_normalised
+sofa/train/sofa_0298_normalised
+sofa/train/sofa_0214_normalised
+sofa/train/sofa_0068_normalised
+sofa/train/sofa_0368_normalised
+sofa/train/sofa_0658_normalised
+sofa/train/sofa_0414_normalised
+sofa/train/sofa_0381_normalised
+sofa/train/sofa_0547_normalised
+sofa/train/sofa_0390_normalised
+sofa/train/sofa_0432_normalised
+sofa/train/sofa_0675_normalised
+sofa/train/sofa_0312_normalised
+sofa/train/sofa_0162_normalised
+sofa/train/sofa_0076_normalised
+sofa/train/sofa_0294_normalised
+sofa/train/sofa_0297_normalised
+sofa/train/sofa_0350_normalised
+sofa/train/sofa_0243_normalised
+sofa/train/sofa_0014_normalised
+sofa/train/sofa_0080_normalised
+sofa/train/sofa_0450_normalised
+sofa/train/sofa_0575_normalised
+sofa/train/sofa_0157_normalised
+sofa/train/sofa_0027_normalised
+sofa/train/sofa_0119_normalised
+sofa/train/sofa_0296_normalised
+sofa/train/sofa_0343_normalised
+sofa/train/sofa_0198_normalised
+sofa/train/sofa_0058_normalised
+sofa/train/sofa_0515_normalised
+sofa/train/sofa_0640_normalised
+sofa/test/sofa_0772_normalised
+sofa/test/sofa_0776_normalised
+sofa/test/sofa_0716_normalised
+sofa/test/sofa_0768_normalised
+sofa/test/sofa_0748_normalised
+sofa/test/sofa_0758_normalised
+sofa/test/sofa_0727_normalised
+sofa/test/sofa_0732_normalised
+sofa/test/sofa_0715_normalised
+sofa/test/sofa_0756_normalised
+sofa/test/sofa_0746_normalised
+sofa/test/sofa_0742_normalised
+sofa/test/sofa_0702_normalised
+sofa/test/sofa_0688_normalised
+sofa/test/sofa_0769_normalised
+sofa/test/sofa_0696_normalised
+sofa/test/sofa_0744_normalised
+sofa/test/sofa_0681_normalised
+sofa/test/sofa_0767_normalised
+sofa/test/sofa_0749_normalised
+sofa/test/sofa_0694_normalised
+sofa/test/sofa_0712_normalised
+sofa/test/sofa_0736_normalised
+sofa/test/sofa_0773_normalised
+sofa/test/sofa_0684_normalised
+sofa/test/sofa_0762_normalised
+sofa/test/sofa_0697_normalised
+sofa/test/sofa_0764_normalised
+sofa/test/sofa_0738_normalised
+sofa/test/sofa_0754_normalised
+sofa/test/sofa_0775_normalised
+sofa/test/sofa_0720_normalised
+sofa/test/sofa_0745_normalised
+sofa/test/sofa_0771_normalised
+sofa/test/sofa_0719_normalised
+sofa/test/sofa_0710_normalised
+sofa/test/sofa_0774_normalised
+sofa/test/sofa_0692_normalised
+sofa/test/sofa_0778_normalised
+sofa/test/sofa_0709_normalised
+sofa/test/sofa_0760_normalised
+sofa/test/sofa_0699_normalised
+sofa/test/sofa_0714_normalised
+sofa/test/sofa_0734_normalised
+sofa/test/sofa_0777_normalised
+sofa/test/sofa_0713_normalised
+sofa/test/sofa_0698_normalised
+sofa/test/sofa_0780_normalised
+sofa/test/sofa_0779_normalised
+sofa/test/sofa_0705_normalised
+sofa/test/sofa_0750_normalised
+sofa/test/sofa_0726_normalised
+sofa/test/sofa_0689_normalised
+sofa/test/sofa_0691_normalised
+sofa/test/sofa_0721_normalised
+sofa/test/sofa_0770_normalised
+sofa/test/sofa_0761_normalised
+sofa/test/sofa_0741_normalised
+sofa/test/sofa_0733_normalised
+sofa/test/sofa_0693_normalised
+sofa/test/sofa_0740_normalised
+sofa/test/sofa_0683_normalised
+sofa/test/sofa_0751_normalised
+sofa/test/sofa_0682_normalised
+sofa/test/sofa_0722_normalised
+sofa/test/sofa_0704_normalised
+sofa/test/sofa_0703_normalised
+sofa/test/sofa_0747_normalised
+sofa/test/sofa_0686_normalised
+sofa/test/sofa_0728_normalised
+sofa/test/sofa_0701_normalised
+sofa/test/sofa_0735_normalised
+sofa/test/sofa_0690_normalised
+sofa/test/sofa_0755_normalised
+sofa/test/sofa_0757_normalised
+sofa/test/sofa_0695_normalised
+sofa/test/sofa_0718_normalised
+sofa/test/sofa_0730_normalised
+sofa/test/sofa_0723_normalised
+sofa/test/sofa_0725_normalised
+sofa/test/sofa_0759_normalised
+sofa/test/sofa_0711_normalised
+sofa/test/sofa_0763_normalised
+sofa/test/sofa_0731_normalised
+sofa/test/sofa_0739_normalised
+sofa/test/sofa_0707_normalised
+sofa/test/sofa_0765_normalised
+sofa/test/sofa_0753_normalised
+sofa/test/sofa_0717_normalised
+sofa/test/sofa_0743_normalised
+sofa/test/sofa_0766_normalised
+sofa/test/sofa_0729_normalised
+sofa/test/sofa_0706_normalised
+sofa/test/sofa_0752_normalised
+sofa/test/sofa_0687_normalised
+sofa/test/sofa_0724_normalised
+sofa/test/sofa_0700_normalised
+sofa/test/sofa_0685_normalised
+sofa/test/sofa_0708_normalised
+sofa/test/sofa_0737_normalised
+wardrobe/train/wardrobe_0074_normalised
+wardrobe/train/wardrobe_0086_normalised
+wardrobe/train/wardrobe_0012_normalised
+wardrobe/train/wardrobe_0010_normalised
+wardrobe/train/wardrobe_0015_normalised
+wardrobe/train/wardrobe_0077_normalised
+wardrobe/train/wardrobe_0085_normalised
+wardrobe/train/wardrobe_0014_normalised
+wardrobe/train/wardrobe_0019_normalised
+wardrobe/train/wardrobe_0024_normalised
+wardrobe/train/wardrobe_0007_normalised
+wardrobe/train/wardrobe_0061_normalised
+wardrobe/train/wardrobe_0072_normalised
+wardrobe/train/wardrobe_0023_normalised
+wardrobe/train/wardrobe_0049_normalised
+wardrobe/train/wardrobe_0037_normalised
+wardrobe/train/wardrobe_0082_normalised
+wardrobe/train/wardrobe_0033_normalised
+wardrobe/train/wardrobe_0016_normalised
+wardrobe/train/wardrobe_0047_normalised
+wardrobe/train/wardrobe_0050_normalised
+wardrobe/train/wardrobe_0028_normalised
+wardrobe/train/wardrobe_0025_normalised
+wardrobe/train/wardrobe_0038_normalised
+wardrobe/train/wardrobe_0079_normalised
+wardrobe/train/wardrobe_0066_normalised
+wardrobe/train/wardrobe_0045_normalised
+wardrobe/train/wardrobe_0070_normalised
+wardrobe/train/wardrobe_0055_normalised
+wardrobe/train/wardrobe_0043_normalised
+wardrobe/train/wardrobe_0059_normalised
+wardrobe/train/wardrobe_0075_normalised
+wardrobe/train/wardrobe_0040_normalised
+wardrobe/train/wardrobe_0060_normalised
+wardrobe/train/wardrobe_0056_normalised
+wardrobe/train/wardrobe_0002_normalised
+wardrobe/train/wardrobe_0036_normalised
+wardrobe/train/wardrobe_0084_normalised
+wardrobe/train/wardrobe_0022_normalised
+wardrobe/train/wardrobe_0062_normalised
+wardrobe/train/wardrobe_0065_normalised
+wardrobe/train/wardrobe_0001_normalised
+wardrobe/train/wardrobe_0013_normalised
+wardrobe/train/wardrobe_0020_normalised
+wardrobe/train/wardrobe_0021_normalised
+wardrobe/train/wardrobe_0034_normalised
+wardrobe/train/wardrobe_0017_normalised
+wardrobe/train/wardrobe_0067_normalised
+wardrobe/train/wardrobe_0026_normalised
+wardrobe/train/wardrobe_0004_normalised
+wardrobe/train/wardrobe_0054_normalised
+wardrobe/train/wardrobe_0032_normalised
+wardrobe/train/wardrobe_0081_normalised
+wardrobe/train/wardrobe_0027_normalised
+wardrobe/train/wardrobe_0078_normalised
+wardrobe/train/wardrobe_0058_normalised
+wardrobe/train/wardrobe_0057_normalised
+wardrobe/train/wardrobe_0046_normalised
+wardrobe/train/wardrobe_0052_normalised
+wardrobe/train/wardrobe_0005_normalised
+wardrobe/train/wardrobe_0029_normalised
+wardrobe/train/wardrobe_0053_normalised
+wardrobe/train/wardrobe_0018_normalised
+wardrobe/train/wardrobe_0006_normalised
+wardrobe/train/wardrobe_0064_normalised
+wardrobe/train/wardrobe_0030_normalised
+wardrobe/train/wardrobe_0051_normalised
+wardrobe/train/wardrobe_0073_normalised
+wardrobe/train/wardrobe_0009_normalised
+wardrobe/train/wardrobe_0003_normalised
+wardrobe/train/wardrobe_0048_normalised
+wardrobe/train/wardrobe_0044_normalised
+wardrobe/train/wardrobe_0071_normalised
+wardrobe/train/wardrobe_0031_normalised
+wardrobe/train/wardrobe_0042_normalised
+wardrobe/train/wardrobe_0035_normalised
+wardrobe/train/wardrobe_0011_normalised
+wardrobe/train/wardrobe_0080_normalised
+wardrobe/train/wardrobe_0087_normalised
+wardrobe/train/wardrobe_0039_normalised
+wardrobe/train/wardrobe_0068_normalised
+wardrobe/train/wardrobe_0041_normalised
+wardrobe/train/wardrobe_0083_normalised
+wardrobe/train/wardrobe_0008_normalised
+wardrobe/train/wardrobe_0076_normalised
+wardrobe/train/wardrobe_0069_normalised
+wardrobe/train/wardrobe_0063_normalised
+wardrobe/test/wardrobe_0091_normalised
+wardrobe/test/wardrobe_0094_normalised
+wardrobe/test/wardrobe_0099_normalised
+wardrobe/test/wardrobe_0101_normalised
+wardrobe/test/wardrobe_0096_normalised
+wardrobe/test/wardrobe_0100_normalised
+wardrobe/test/wardrobe_0104_normalised
+wardrobe/test/wardrobe_0092_normalised
+wardrobe/test/wardrobe_0090_normalised
+wardrobe/test/wardrobe_0093_normalised
+wardrobe/test/wardrobe_0097_normalised
+wardrobe/test/wardrobe_0098_normalised
+wardrobe/test/wardrobe_0088_normalised
+wardrobe/test/wardrobe_0105_normalised
+wardrobe/test/wardrobe_0095_normalised
+wardrobe/test/wardrobe_0103_normalised
+wardrobe/test/wardrobe_0106_normalised
+wardrobe/test/wardrobe_0089_normalised
+wardrobe/test/wardrobe_0107_normalised
+wardrobe/test/wardrobe_0102_normalised
+radio/train/radio_0022_normalised
+radio/train/radio_0072_normalised
+radio/train/radio_0042_normalised
+radio/train/radio_0060_normalised
+radio/train/radio_0078_normalised
+radio/train/radio_0047_normalised
+radio/train/radio_0013_normalised
+radio/train/radio_0057_normalised
+radio/train/radio_0053_normalised
+radio/train/radio_0011_normalised
+radio/train/radio_0069_normalised
+radio/train/radio_0050_normalised
+radio/train/radio_0081_normalised
+radio/train/radio_0071_normalised
+radio/train/radio_0064_normalised
+radio/train/radio_0087_normalised
+radio/train/radio_0076_normalised
+radio/train/radio_0009_normalised
+radio/train/radio_0018_normalised
+radio/train/radio_0096_normalised
+radio/train/radio_0049_normalised
+radio/train/radio_0093_normalised
+radio/train/radio_0017_normalised
+radio/train/radio_0027_normalised
+radio/train/radio_0070_normalised
+radio/train/radio_0051_normalised
+radio/train/radio_0065_normalised
+radio/train/radio_0073_normalised
+radio/train/radio_0041_normalised
+radio/train/radio_0068_normalised
+radio/train/radio_0037_normalised
+radio/train/radio_0010_normalised
+radio/train/radio_0089_normalised
+radio/train/radio_0092_normalised
+radio/train/radio_0094_normalised
+radio/train/radio_0025_normalised
+radio/train/radio_0036_normalised
+radio/train/radio_0062_normalised
+radio/train/radio_0035_normalised
+radio/train/radio_0032_normalised
+radio/train/radio_0012_normalised
+radio/train/radio_0067_normalised
+radio/train/radio_0052_normalised
+radio/train/radio_0014_normalised
+radio/train/radio_0034_normalised
+radio/train/radio_0088_normalised
+radio/train/radio_0048_normalised
+radio/train/radio_0005_normalised
+radio/train/radio_0100_normalised
+radio/train/radio_0055_normalised
+radio/train/radio_0075_normalised
+radio/train/radio_0084_normalised
+radio/train/radio_0097_normalised
+radio/train/radio_0061_normalised
+radio/train/radio_0024_normalised
+radio/train/radio_0045_normalised
+radio/train/radio_0031_normalised
+radio/train/radio_0002_normalised
+radio/train/radio_0038_normalised
+radio/train/radio_0020_normalised
+radio/train/radio_0079_normalised
+radio/train/radio_0026_normalised
+radio/train/radio_0101_normalised
+radio/train/radio_0040_normalised
+radio/train/radio_0044_normalised
+radio/train/radio_0015_normalised
+radio/train/radio_0063_normalised
+radio/train/radio_0016_normalised
+radio/train/radio_0023_normalised
+radio/train/radio_0104_normalised
+radio/train/radio_0029_normalised
+radio/train/radio_0056_normalised
+radio/train/radio_0030_normalised
+radio/train/radio_0043_normalised
+radio/train/radio_0028_normalised
+radio/train/radio_0006_normalised
+radio/train/radio_0082_normalised
+radio/train/radio_0086_normalised
+radio/train/radio_0008_normalised
+radio/train/radio_0091_normalised
+radio/train/radio_0095_normalised
+radio/train/radio_0083_normalised
+radio/train/radio_0004_normalised
+radio/train/radio_0001_normalised
+radio/train/radio_0066_normalised
+radio/train/radio_0102_normalised
+radio/train/radio_0033_normalised
+radio/train/radio_0080_normalised
+radio/train/radio_0090_normalised
+radio/train/radio_0021_normalised
+radio/train/radio_0074_normalised
+radio/train/radio_0007_normalised
+radio/train/radio_0103_normalised
+radio/train/radio_0077_normalised
+radio/train/radio_0099_normalised
+radio/train/radio_0085_normalised
+radio/train/radio_0098_normalised
+radio/train/radio_0058_normalised
+radio/train/radio_0039_normalised
+radio/train/radio_0019_normalised
+radio/train/radio_0059_normalised
+radio/train/radio_0054_normalised
+radio/train/radio_0046_normalised
+radio/train/radio_0003_normalised
+radio/test/radio_0107_normalised
+radio/test/radio_0105_normalised
+radio/test/radio_0122_normalised
+radio/test/radio_0118_normalised
+radio/test/radio_0120_normalised
+radio/test/radio_0115_normalised
+radio/test/radio_0110_normalised
+radio/test/radio_0111_normalised
+radio/test/radio_0112_normalised
+radio/test/radio_0106_normalised
+radio/test/radio_0117_normalised
+radio/test/radio_0119_normalised
+radio/test/radio_0116_normalised
+radio/test/radio_0113_normalised
+radio/test/radio_0124_normalised
+radio/test/radio_0121_normalised
+radio/test/radio_0109_normalised
+radio/test/radio_0123_normalised
+radio/test/radio_0114_normalised
+radio/test/radio_0108_normalised
+desk/train/desk_0186_normalised
+desk/train/desk_0118_normalised
+desk/train/desk_0033_normalised
+desk/train/desk_0150_normalised
+desk/train/desk_0134_normalised
+desk/train/desk_0108_normalised
+desk/train/desk_0054_normalised
+desk/train/desk_0135_normalised
+desk/train/desk_0085_normalised
+desk/train/desk_0195_normalised
+desk/train/desk_0055_normalised
+desk/train/desk_0151_normalised
+desk/train/desk_0167_normalised
+desk/train/desk_0194_normalised
+desk/train/desk_0073_normalised
+desk/train/desk_0072_normalised
+desk/train/desk_0007_normalised
+desk/train/desk_0193_normalised
+desk/train/desk_0173_normalised
+desk/train/desk_0096_normalised
+desk/train/desk_0030_normalised
+desk/train/desk_0017_normalised
+desk/train/desk_0112_normalised
+desk/train/desk_0076_normalised
+desk/train/desk_0180_normalised
+desk/train/desk_0149_normalised
+desk/train/desk_0025_normalised
+desk/train/desk_0058_normalised
+desk/train/desk_0046_normalised
+desk/train/desk_0075_normalised
+desk/train/desk_0120_normalised
+desk/train/desk_0015_normalised
+desk/train/desk_0115_normalised
+desk/train/desk_0061_normalised
+desk/train/desk_0140_normalised
+desk/train/desk_0021_normalised
+desk/train/desk_0121_normalised
+desk/train/desk_0141_normalised
+desk/train/desk_0164_normalised
+desk/train/desk_0091_normalised
+desk/train/desk_0142_normalised
+desk/train/desk_0041_normalised
+desk/train/desk_0093_normalised
+desk/train/desk_0089_normalised
+desk/train/desk_0138_normalised
+desk/train/desk_0044_normalised
+desk/train/desk_0132_normalised
+desk/train/desk_0047_normalised
+desk/train/desk_0152_normalised
+desk/train/desk_0080_normalised
+desk/train/desk_0040_normalised
+desk/train/desk_0029_normalised
+desk/train/desk_0181_normalised
+desk/train/desk_0156_normalised
+desk/train/desk_0070_normalised
+desk/train/desk_0069_normalised
+desk/train/desk_0155_normalised
+desk/train/desk_0063_normalised
+desk/train/desk_0146_normalised
+desk/train/desk_0147_normalised
+desk/train/desk_0107_normalised
+desk/train/desk_0159_normalised
+desk/train/desk_0060_normalised
+desk/train/desk_0162_normalised
+desk/train/desk_0187_normalised
+desk/train/desk_0067_normalised
+desk/train/desk_0011_normalised
+desk/train/desk_0090_normalised
+desk/train/desk_0006_normalised
+desk/train/desk_0013_normalised
+desk/train/desk_0008_normalised
+desk/train/desk_0144_normalised
+desk/train/desk_0192_normalised
+desk/train/desk_0074_normalised
+desk/train/desk_0104_normalised
+desk/train/desk_0010_normalised
+desk/train/desk_0016_normalised
+desk/train/desk_0071_normalised
+desk/train/desk_0053_normalised
+desk/train/desk_0145_normalised
+desk/train/desk_0003_normalised
+desk/train/desk_0028_normalised
+desk/train/desk_0139_normalised
+desk/train/desk_0137_normalised
+desk/train/desk_0117_normalised
+desk/train/desk_0099_normalised
+desk/train/desk_0189_normalised
+desk/train/desk_0034_normalised
+desk/train/desk_0157_normalised
+desk/train/desk_0051_normalised
+desk/train/desk_0166_normalised
+desk/train/desk_0042_normalised
+desk/train/desk_0065_normalised
+desk/train/desk_0116_normalised
+desk/train/desk_0178_normalised
+desk/train/desk_0057_normalised
+desk/train/desk_0123_normalised
+desk/train/desk_0056_normalised
+desk/train/desk_0004_normalised
+desk/train/desk_0161_normalised
+desk/train/desk_0185_normalised
+desk/train/desk_0168_normalised
+desk/train/desk_0066_normalised
+desk/train/desk_0174_normalised
+desk/train/desk_0172_normalised
+desk/train/desk_0027_normalised
+desk/train/desk_0111_normalised
+desk/train/desk_0095_normalised
+desk/train/desk_0110_normalised
+desk/train/desk_0024_normalised
+desk/train/desk_0086_normalised
+desk/train/desk_0133_normalised
+desk/train/desk_0188_normalised
+desk/train/desk_0052_normalised
+desk/train/desk_0154_normalised
+desk/train/desk_0128_normalised
+desk/train/desk_0098_normalised
+desk/train/desk_0169_normalised
+desk/train/desk_0196_normalised
+desk/train/desk_0002_normalised
+desk/train/desk_0106_normalised
+desk/train/desk_0198_normalised
+desk/train/desk_0023_normalised
+desk/train/desk_0020_normalised
+desk/train/desk_0005_normalised
+desk/train/desk_0026_normalised
+desk/train/desk_0114_normalised
+desk/train/desk_0190_normalised
+desk/train/desk_0082_normalised
+desk/train/desk_0163_normalised
+desk/train/desk_0200_normalised
+desk/train/desk_0126_normalised
+desk/train/desk_0177_normalised
+desk/train/desk_0009_normalised
+desk/train/desk_0045_normalised
+desk/train/desk_0038_normalised
+desk/train/desk_0092_normalised
+desk/train/desk_0131_normalised
+desk/train/desk_0001_normalised
+desk/train/desk_0083_normalised
+desk/train/desk_0059_normalised
+desk/train/desk_0102_normalised
+desk/train/desk_0100_normalised
+desk/train/desk_0018_normalised
+desk/train/desk_0014_normalised
+desk/train/desk_0032_normalised
+desk/train/desk_0165_normalised
+desk/train/desk_0048_normalised
+desk/train/desk_0022_normalised
+desk/train/desk_0077_normalised
+desk/train/desk_0068_normalised
+desk/train/desk_0122_normalised
+desk/train/desk_0019_normalised
+desk/train/desk_0160_normalised
+desk/train/desk_0136_normalised
+desk/train/desk_0050_normalised
+desk/train/desk_0153_normalised
+desk/train/desk_0191_normalised
+desk/train/desk_0143_normalised
+desk/train/desk_0094_normalised
+desk/train/desk_0199_normalised
+desk/train/desk_0012_normalised
+desk/train/desk_0097_normalised
+desk/train/desk_0037_normalised
+desk/train/desk_0062_normalised
+desk/train/desk_0039_normalised
+desk/train/desk_0158_normalised
+desk/train/desk_0049_normalised
+desk/train/desk_0130_normalised
+desk/train/desk_0182_normalised
+desk/train/desk_0064_normalised
+desk/train/desk_0125_normalised
+desk/train/desk_0129_normalised
+desk/train/desk_0170_normalised
+desk/train/desk_0084_normalised
+desk/train/desk_0109_normalised
+desk/train/desk_0148_normalised
+desk/train/desk_0079_normalised
+desk/train/desk_0124_normalised
+desk/train/desk_0043_normalised
+desk/train/desk_0197_normalised
+desk/train/desk_0087_normalised
+desk/train/desk_0088_normalised
+desk/train/desk_0031_normalised
+desk/train/desk_0081_normalised
+desk/train/desk_0113_normalised
+desk/train/desk_0103_normalised
+desk/train/desk_0127_normalised
+desk/train/desk_0036_normalised
+desk/train/desk_0176_normalised
+desk/train/desk_0175_normalised
+desk/train/desk_0101_normalised
+desk/train/desk_0171_normalised
+desk/train/desk_0119_normalised
+desk/train/desk_0105_normalised
+desk/train/desk_0035_normalised
+desk/train/desk_0179_normalised
+desk/train/desk_0078_normalised
+desk/train/desk_0184_normalised
+desk/train/desk_0183_normalised
+desk/test/desk_0284_normalised
+desk/test/desk_0207_normalised
+desk/test/desk_0206_normalised
+desk/test/desk_0225_normalised
+desk/test/desk_0232_normalised
+desk/test/desk_0233_normalised
+desk/test/desk_0230_normalised
+desk/test/desk_0286_normalised
+desk/test/desk_0215_normalised
+desk/test/desk_0266_normalised
+desk/test/desk_0265_normalised
+desk/test/desk_0223_normalised
+desk/test/desk_0231_normalised
+desk/test/desk_0242_normalised
+desk/test/desk_0262_normalised
+desk/test/desk_0282_normalised
+desk/test/desk_0210_normalised
+desk/test/desk_0213_normalised
+desk/test/desk_0261_normalised
+desk/test/desk_0280_normalised
+desk/test/desk_0276_normalised
+desk/test/desk_0243_normalised
+desk/test/desk_0221_normalised
+desk/test/desk_0235_normalised
+desk/test/desk_0249_normalised
+desk/test/desk_0205_normalised
+desk/test/desk_0267_normalised
+desk/test/desk_0256_normalised
+desk/test/desk_0255_normalised
+desk/test/desk_0278_normalised
+desk/test/desk_0229_normalised
+desk/test/desk_0245_normalised
+desk/test/desk_0250_normalised
+desk/test/desk_0239_normalised
+desk/test/desk_0263_normalised
+desk/test/desk_0244_normalised
+desk/test/desk_0271_normalised
+desk/test/desk_0264_normalised
+desk/test/desk_0257_normalised
+desk/test/desk_0202_normalised
+desk/test/desk_0279_normalised
+desk/test/desk_0252_normalised
+desk/test/desk_0238_normalised
+desk/test/desk_0220_normalised
+desk/test/desk_0237_normalised
+desk/test/desk_0277_normalised
+desk/test/desk_0224_normalised
+desk/test/desk_0227_normalised
+desk/test/desk_0272_normalised
+desk/test/desk_0240_normalised
+desk/test/desk_0234_normalised
+desk/test/desk_0273_normalised
+desk/test/desk_0269_normalised
+desk/test/desk_0254_normalised
+desk/test/desk_0283_normalised
+desk/test/desk_0260_normalised
+desk/test/desk_0217_normalised
+desk/test/desk_0209_normalised
+desk/test/desk_0241_normalised
+desk/test/desk_0204_normalised
+desk/test/desk_0247_normalised
+desk/test/desk_0222_normalised
+desk/test/desk_0253_normalised
+desk/test/desk_0219_normalised
+desk/test/desk_0251_normalised
+desk/test/desk_0208_normalised
+desk/test/desk_0248_normalised
+desk/test/desk_0281_normalised
+desk/test/desk_0285_normalised
+desk/test/desk_0246_normalised
+desk/test/desk_0228_normalised
+desk/test/desk_0270_normalised
+desk/test/desk_0216_normalised
+desk/test/desk_0226_normalised
+desk/test/desk_0211_normalised
+desk/test/desk_0259_normalised
+desk/test/desk_0236_normalised
+desk/test/desk_0258_normalised
+desk/test/desk_0275_normalised
+desk/test/desk_0212_normalised
+desk/test/desk_0201_normalised
+desk/test/desk_0268_normalised
+desk/test/desk_0218_normalised
+desk/test/desk_0203_normalised
+desk/test/desk_0274_normalised
+desk/test/desk_0214_normalised
+bench/train/bench_0063_normalised
+bench/train/bench_0144_normalised
+bench/train/bench_0019_normalised
+bench/train/bench_0105_normalised
+bench/train/bench_0168_normalised
+bench/train/bench_0031_normalised
+bench/train/bench_0150_normalised
+bench/train/bench_0037_normalised
+bench/train/bench_0104_normalised
+bench/train/bench_0092_normalised
+bench/train/bench_0064_normalised
+bench/train/bench_0161_normalised
+bench/train/bench_0079_normalised
+bench/train/bench_0044_normalised
+bench/train/bench_0159_normalised
+bench/train/bench_0007_normalised
+bench/train/bench_0084_normalised
+bench/train/bench_0162_normalised
+bench/train/bench_0123_normalised
+bench/train/bench_0001_normalised
+bench/train/bench_0032_normalised
+bench/train/bench_0076_normalised
+bench/train/bench_0061_normalised
+bench/train/bench_0110_normalised
+bench/train/bench_0028_normalised
+bench/train/bench_0027_normalised
+bench/train/bench_0067_normalised
+bench/train/bench_0117_normalised
+bench/train/bench_0033_normalised
+bench/train/bench_0109_normalised
+bench/train/bench_0060_normalised
+bench/train/bench_0021_normalised
+bench/train/bench_0081_normalised
+bench/train/bench_0003_normalised
+bench/train/bench_0170_normalised
+bench/train/bench_0121_normalised
+bench/train/bench_0077_normalised
+bench/train/bench_0053_normalised
+bench/train/bench_0035_normalised
+bench/train/bench_0038_normalised
+bench/train/bench_0062_normalised
+bench/train/bench_0074_normalised
+bench/train/bench_0142_normalised
+bench/train/bench_0139_normalised
+bench/train/bench_0154_normalised
+bench/train/bench_0112_normalised
+bench/train/bench_0138_normalised
+bench/train/bench_0041_normalised
+bench/train/bench_0029_normalised
+bench/train/bench_0127_normalised
+bench/train/bench_0120_normalised
+bench/train/bench_0030_normalised
+bench/train/bench_0080_normalised
+bench/train/bench_0111_normalised
+bench/train/bench_0141_normalised
+bench/train/bench_0039_normalised
+bench/train/bench_0075_normalised
+bench/train/bench_0164_normalised
+bench/train/bench_0069_normalised
+bench/train/bench_0088_normalised
+bench/train/bench_0136_normalised
+bench/train/bench_0086_normalised
+bench/train/bench_0051_normalised
+bench/train/bench_0114_normalised
+bench/train/bench_0129_normalised
+bench/train/bench_0113_normalised
+bench/train/bench_0101_normalised
+bench/train/bench_0010_normalised
+bench/train/bench_0128_normalised
+bench/train/bench_0055_normalised
+bench/train/bench_0025_normalised
+bench/train/bench_0135_normalised
+bench/train/bench_0071_normalised
+bench/train/bench_0146_normalised
+bench/train/bench_0002_normalised
+bench/train/bench_0026_normalised
+bench/train/bench_0005_normalised
+bench/train/bench_0068_normalised
+bench/train/bench_0169_normalised
+bench/train/bench_0148_normalised
+bench/train/bench_0022_normalised
+bench/train/bench_0059_normalised
+bench/train/bench_0158_normalised
+bench/train/bench_0049_normalised
+bench/train/bench_0100_normalised
+bench/train/bench_0057_normalised
+bench/train/bench_0134_normalised
+bench/train/bench_0004_normalised
+bench/train/bench_0133_normalised
+bench/train/bench_0153_normalised
+bench/train/bench_0118_normalised
+bench/train/bench_0045_normalised
+bench/train/bench_0165_normalised
+bench/train/bench_0006_normalised
+bench/train/bench_0107_normalised
+bench/train/bench_0125_normalised
+bench/train/bench_0155_normalised
+bench/train/bench_0151_normalised
+bench/train/bench_0008_normalised
+bench/train/bench_0157_normalised
+bench/train/bench_0073_normalised
+bench/train/bench_0012_normalised
+bench/train/bench_0172_normalised
+bench/train/bench_0013_normalised
+bench/train/bench_0137_normalised
+bench/train/bench_0108_normalised
+bench/train/bench_0023_normalised
+bench/train/bench_0095_normalised
+bench/train/bench_0072_normalised
+bench/train/bench_0089_normalised
+bench/train/bench_0091_normalised
+bench/train/bench_0046_normalised
+bench/train/bench_0147_normalised
+bench/train/bench_0098_normalised
+bench/train/bench_0016_normalised
+bench/train/bench_0011_normalised
+bench/train/bench_0152_normalised
+bench/train/bench_0160_normalised
+bench/train/bench_0103_normalised
+bench/train/bench_0082_normalised
+bench/train/bench_0171_normalised
+bench/train/bench_0042_normalised
+bench/train/bench_0099_normalised
+bench/train/bench_0078_normalised
+bench/train/bench_0050_normalised
+bench/train/bench_0173_normalised
+bench/train/bench_0102_normalised
+bench/train/bench_0052_normalised
+bench/train/bench_0167_normalised
+bench/train/bench_0131_normalised
+bench/train/bench_0149_normalised
+bench/train/bench_0070_normalised
+bench/train/bench_0119_normalised
+bench/train/bench_0058_normalised
+bench/train/bench_0126_normalised
+bench/train/bench_0017_normalised
+bench/train/bench_0036_normalised
+bench/train/bench_0093_normalised
+bench/train/bench_0065_normalised
+bench/train/bench_0143_normalised
+bench/train/bench_0106_normalised
+bench/train/bench_0048_normalised
+bench/train/bench_0020_normalised
+bench/train/bench_0115_normalised
+bench/train/bench_0015_normalised
+bench/train/bench_0122_normalised
+bench/train/bench_0094_normalised
+bench/train/bench_0083_normalised
+bench/train/bench_0116_normalised
+bench/train/bench_0132_normalised
+bench/train/bench_0040_normalised
+bench/train/bench_0054_normalised
+bench/train/bench_0163_normalised
+bench/train/bench_0018_normalised
+bench/train/bench_0047_normalised
+bench/train/bench_0066_normalised
+bench/train/bench_0156_normalised
+bench/train/bench_0043_normalised
+bench/train/bench_0056_normalised
+bench/train/bench_0090_normalised
+bench/train/bench_0087_normalised
+bench/train/bench_0085_normalised
+bench/train/bench_0024_normalised
+bench/train/bench_0009_normalised
+bench/train/bench_0124_normalised
+bench/train/bench_0014_normalised
+bench/train/bench_0097_normalised
+bench/train/bench_0096_normalised
+bench/train/bench_0166_normalised
+bench/train/bench_0034_normalised
+bench/train/bench_0140_normalised
+bench/train/bench_0130_normalised
+bench/train/bench_0145_normalised
+bench/test/bench_0185_normalised
+bench/test/bench_0187_normalised
+bench/test/bench_0188_normalised
+bench/test/bench_0175_normalised
+bench/test/bench_0191_normalised
+bench/test/bench_0177_normalised
+bench/test/bench_0189_normalised
+bench/test/bench_0193_normalised
+bench/test/bench_0184_normalised
+bench/test/bench_0181_normalised
+bench/test/bench_0178_normalised
+bench/test/bench_0183_normalised
+bench/test/bench_0192_normalised
+bench/test/bench_0182_normalised
+bench/test/bench_0186_normalised
+bench/test/bench_0174_normalised
+bench/test/bench_0176_normalised
+bench/test/bench_0180_normalised
+bench/test/bench_0190_normalised
+bench/test/bench_0179_normalised
+glass_box/train/glass_box_0102_normalised
+glass_box/train/glass_box_0067_normalised
+glass_box/train/glass_box_0097_normalised
+glass_box/train/glass_box_0164_normalised
+glass_box/train/glass_box_0129_normalised
+glass_box/train/glass_box_0028_normalised
+glass_box/train/glass_box_0146_normalised
+glass_box/train/glass_box_0014_normalised
+glass_box/train/glass_box_0095_normalised
+glass_box/train/glass_box_0110_normalised
+glass_box/train/glass_box_0012_normalised
+glass_box/train/glass_box_0112_normalised
+glass_box/train/glass_box_0064_normalised
+glass_box/train/glass_box_0143_normalised
+glass_box/train/glass_box_0073_normalised
+glass_box/train/glass_box_0091_normalised
+glass_box/train/glass_box_0034_normalised
+glass_box/train/glass_box_0043_normalised
+glass_box/train/glass_box_0153_normalised
+glass_box/train/glass_box_0145_normalised
+glass_box/train/glass_box_0054_normalised
+glass_box/train/glass_box_0128_normalised
+glass_box/train/glass_box_0055_normalised
+glass_box/train/glass_box_0010_normalised
+glass_box/train/glass_box_0045_normalised
+glass_box/train/glass_box_0025_normalised
+glass_box/train/glass_box_0044_normalised
+glass_box/train/glass_box_0060_normalised
+glass_box/train/glass_box_0160_normalised
+glass_box/train/glass_box_0011_normalised
+glass_box/train/glass_box_0133_normalised
+glass_box/train/glass_box_0068_normalised
+glass_box/train/glass_box_0047_normalised
+glass_box/train/glass_box_0116_normalised
+glass_box/train/glass_box_0070_normalised
+glass_box/train/glass_box_0057_normalised
+glass_box/train/glass_box_0168_normalised
+glass_box/train/glass_box_0032_normalised
+glass_box/train/glass_box_0078_normalised
+glass_box/train/glass_box_0001_normalised
+glass_box/train/glass_box_0086_normalised
+glass_box/train/glass_box_0120_normalised
+glass_box/train/glass_box_0131_normalised
+glass_box/train/glass_box_0138_normalised
+glass_box/train/glass_box_0111_normalised
+glass_box/train/glass_box_0050_normalised
+glass_box/train/glass_box_0104_normalised
+glass_box/train/glass_box_0135_normalised
+glass_box/train/glass_box_0088_normalised
+glass_box/train/glass_box_0109_normalised
+glass_box/train/glass_box_0121_normalised
+glass_box/train/glass_box_0106_normalised
+glass_box/train/glass_box_0094_normalised
+glass_box/train/glass_box_0136_normalised
+glass_box/train/glass_box_0051_normalised
+glass_box/train/glass_box_0006_normalised
+glass_box/train/glass_box_0130_normalised
+glass_box/train/glass_box_0027_normalised
+glass_box/train/glass_box_0161_normalised
+glass_box/train/glass_box_0148_normalised
+glass_box/train/glass_box_0018_normalised
+glass_box/train/glass_box_0020_normalised
+glass_box/train/glass_box_0141_normalised
+glass_box/train/glass_box_0167_normalised
+glass_box/train/glass_box_0035_normalised
+glass_box/train/glass_box_0132_normalised
+glass_box/train/glass_box_0118_normalised
+glass_box/train/glass_box_0125_normalised
+glass_box/train/glass_box_0056_normalised
+glass_box/train/glass_box_0037_normalised
+glass_box/train/glass_box_0165_normalised
+glass_box/train/glass_box_0009_normalised
+glass_box/train/glass_box_0147_normalised
+glass_box/train/glass_box_0126_normalised
+glass_box/train/glass_box_0123_normalised
+glass_box/train/glass_box_0149_normalised
+glass_box/train/glass_box_0085_normalised
+glass_box/train/glass_box_0092_normalised
+glass_box/train/glass_box_0157_normalised
+glass_box/train/glass_box_0099_normalised
+glass_box/train/glass_box_0005_normalised
+glass_box/train/glass_box_0049_normalised
+glass_box/train/glass_box_0140_normalised
+glass_box/train/glass_box_0053_normalised
+glass_box/train/glass_box_0019_normalised
+glass_box/train/glass_box_0156_normalised
+glass_box/train/glass_box_0058_normalised
+glass_box/train/glass_box_0062_normalised
+glass_box/train/glass_box_0013_normalised
+glass_box/train/glass_box_0087_normalised
+glass_box/train/glass_box_0038_normalised
+glass_box/train/glass_box_0096_normalised
+glass_box/train/glass_box_0040_normalised
+glass_box/train/glass_box_0101_normalised
+glass_box/train/glass_box_0036_normalised
+glass_box/train/glass_box_0021_normalised
+glass_box/train/glass_box_0003_normalised
+glass_box/train/glass_box_0100_normalised
+glass_box/train/glass_box_0155_normalised
+glass_box/train/glass_box_0124_normalised
+glass_box/train/glass_box_0079_normalised
+glass_box/train/glass_box_0139_normalised
+glass_box/train/glass_box_0127_normalised
+glass_box/train/glass_box_0016_normalised
+glass_box/train/glass_box_0090_normalised
+glass_box/train/glass_box_0084_normalised
+glass_box/train/glass_box_0075_normalised
+glass_box/train/glass_box_0115_normalised
+glass_box/train/glass_box_0082_normalised
+glass_box/train/glass_box_0048_normalised
+glass_box/train/glass_box_0076_normalised
+glass_box/train/glass_box_0066_normalised
+glass_box/train/glass_box_0170_normalised
+glass_box/train/glass_box_0105_normalised
+glass_box/train/glass_box_0113_normalised
+glass_box/train/glass_box_0029_normalised
+glass_box/train/glass_box_0030_normalised
+glass_box/train/glass_box_0142_normalised
+glass_box/train/glass_box_0083_normalised
+glass_box/train/glass_box_0004_normalised
+glass_box/train/glass_box_0144_normalised
+glass_box/train/glass_box_0152_normalised
+glass_box/train/glass_box_0081_normalised
+glass_box/train/glass_box_0151_normalised
+glass_box/train/glass_box_0169_normalised
+glass_box/train/glass_box_0046_normalised
+glass_box/train/glass_box_0089_normalised
+glass_box/train/glass_box_0134_normalised
+glass_box/train/glass_box_0069_normalised
+glass_box/train/glass_box_0158_normalised
+glass_box/train/glass_box_0166_normalised
+glass_box/train/glass_box_0002_normalised
+glass_box/train/glass_box_0162_normalised
+glass_box/train/glass_box_0119_normalised
+glass_box/train/glass_box_0017_normalised
+glass_box/train/glass_box_0098_normalised
+glass_box/train/glass_box_0072_normalised
+glass_box/train/glass_box_0077_normalised
+glass_box/train/glass_box_0026_normalised
+glass_box/train/glass_box_0071_normalised
+glass_box/train/glass_box_0107_normalised
+glass_box/train/glass_box_0042_normalised
+glass_box/train/glass_box_0015_normalised
+glass_box/train/glass_box_0039_normalised
+glass_box/train/glass_box_0052_normalised
+glass_box/train/glass_box_0093_normalised
+glass_box/train/glass_box_0033_normalised
+glass_box/train/glass_box_0063_normalised
+glass_box/train/glass_box_0150_normalised
+glass_box/train/glass_box_0074_normalised
+glass_box/train/glass_box_0171_normalised
+glass_box/train/glass_box_0024_normalised
+glass_box/train/glass_box_0041_normalised
+glass_box/train/glass_box_0103_normalised
+glass_box/train/glass_box_0007_normalised
+glass_box/train/glass_box_0114_normalised
+glass_box/train/glass_box_0108_normalised
+glass_box/train/glass_box_0122_normalised
+glass_box/train/glass_box_0022_normalised
+glass_box/train/glass_box_0008_normalised
+glass_box/train/glass_box_0059_normalised
+glass_box/train/glass_box_0154_normalised
+glass_box/train/glass_box_0080_normalised
+glass_box/train/glass_box_0137_normalised
+glass_box/train/glass_box_0159_normalised
+glass_box/train/glass_box_0117_normalised
+glass_box/train/glass_box_0065_normalised
+glass_box/train/glass_box_0061_normalised
+glass_box/train/glass_box_0163_normalised
+glass_box/train/glass_box_0031_normalised
+glass_box/train/glass_box_0023_normalised
+glass_box/test/glass_box_0248_normalised
+glass_box/test/glass_box_0194_normalised
+glass_box/test/glass_box_0236_normalised
+glass_box/test/glass_box_0234_normalised
+glass_box/test/glass_box_0249_normalised
+glass_box/test/glass_box_0178_normalised
+glass_box/test/glass_box_0244_normalised
+glass_box/test/glass_box_0243_normalised
+glass_box/test/glass_box_0192_normalised
+glass_box/test/glass_box_0200_normalised
+glass_box/test/glass_box_0263_normalised
+glass_box/test/glass_box_0267_normalised
+glass_box/test/glass_box_0270_normalised
+glass_box/test/glass_box_0221_normalised
+glass_box/test/glass_box_0257_normalised
+glass_box/test/glass_box_0226_normalised
+glass_box/test/glass_box_0176_normalised
+glass_box/test/glass_box_0182_normalised
+glass_box/test/glass_box_0254_normalised
+glass_box/test/glass_box_0224_normalised
+glass_box/test/glass_box_0232_normalised
+glass_box/test/glass_box_0213_normalised
+glass_box/test/glass_box_0247_normalised
+glass_box/test/glass_box_0225_normalised
+glass_box/test/glass_box_0231_normalised
+glass_box/test/glass_box_0205_normalised
+glass_box/test/glass_box_0198_normalised
+glass_box/test/glass_box_0260_normalised
+glass_box/test/glass_box_0174_normalised
+glass_box/test/glass_box_0262_normalised
+glass_box/test/glass_box_0269_normalised
+glass_box/test/glass_box_0229_normalised
+glass_box/test/glass_box_0172_normalised
+glass_box/test/glass_box_0210_normalised
+glass_box/test/glass_box_0251_normalised
+glass_box/test/glass_box_0265_normalised
+glass_box/test/glass_box_0238_normalised
+glass_box/test/glass_box_0183_normalised
+glass_box/test/glass_box_0204_normalised
+glass_box/test/glass_box_0252_normalised
+glass_box/test/glass_box_0206_normalised
+glass_box/test/glass_box_0175_normalised
+glass_box/test/glass_box_0235_normalised
+glass_box/test/glass_box_0237_normalised
+glass_box/test/glass_box_0214_normalised
+glass_box/test/glass_box_0179_normalised
+glass_box/test/glass_box_0228_normalised
+glass_box/test/glass_box_0181_normalised
+glass_box/test/glass_box_0242_normalised
+glass_box/test/glass_box_0193_normalised
+glass_box/test/glass_box_0261_normalised
+glass_box/test/glass_box_0268_normalised
+glass_box/test/glass_box_0180_normalised
+glass_box/test/glass_box_0255_normalised
+glass_box/test/glass_box_0245_normalised
+glass_box/test/glass_box_0203_normalised
+glass_box/test/glass_box_0202_normalised
+glass_box/test/glass_box_0266_normalised
+glass_box/test/glass_box_0208_normalised
+glass_box/test/glass_box_0230_normalised
+glass_box/test/glass_box_0216_normalised
+glass_box/test/glass_box_0184_normalised
+glass_box/test/glass_box_0190_normalised
+glass_box/test/glass_box_0222_normalised
+glass_box/test/glass_box_0240_normalised
+glass_box/test/glass_box_0271_normalised
+glass_box/test/glass_box_0253_normalised
+glass_box/test/glass_box_0212_normalised
+glass_box/test/glass_box_0197_normalised
+glass_box/test/glass_box_0199_normalised
+glass_box/test/glass_box_0188_normalised
+glass_box/test/glass_box_0241_normalised
+glass_box/test/glass_box_0227_normalised
+glass_box/test/glass_box_0196_normalised
+glass_box/test/glass_box_0219_normalised
+glass_box/test/glass_box_0223_normalised
+glass_box/test/glass_box_0187_normalised
+glass_box/test/glass_box_0211_normalised
+glass_box/test/glass_box_0217_normalised
+glass_box/test/glass_box_0220_normalised
+glass_box/test/glass_box_0177_normalised
+glass_box/test/glass_box_0258_normalised
+glass_box/test/glass_box_0246_normalised
+glass_box/test/glass_box_0195_normalised
+glass_box/test/glass_box_0250_normalised
+glass_box/test/glass_box_0256_normalised
+glass_box/test/glass_box_0173_normalised
+glass_box/test/glass_box_0215_normalised
+glass_box/test/glass_box_0239_normalised
+glass_box/test/glass_box_0185_normalised
+glass_box/test/glass_box_0209_normalised
+glass_box/test/glass_box_0207_normalised
+glass_box/test/glass_box_0264_normalised
+glass_box/test/glass_box_0259_normalised
+glass_box/test/glass_box_0191_normalised
+glass_box/test/glass_box_0233_normalised
+glass_box/test/glass_box_0189_normalised
+glass_box/test/glass_box_0218_normalised
+glass_box/test/glass_box_0201_normalised
+glass_box/test/glass_box_0186_normalised
+laptop/train/laptop_0057_normalised
+laptop/train/laptop_0054_normalised
+laptop/train/laptop_0037_normalised
+laptop/train/laptop_0036_normalised
+laptop/train/laptop_0141_normalised
+laptop/train/laptop_0091_normalised
+laptop/train/laptop_0117_normalised
+laptop/train/laptop_0060_normalised
+laptop/train/laptop_0132_normalised
+laptop/train/laptop_0038_normalised
+laptop/train/laptop_0148_normalised
+laptop/train/laptop_0085_normalised
+laptop/train/laptop_0084_normalised
+laptop/train/laptop_0118_normalised
+laptop/train/laptop_0107_normalised
+laptop/train/laptop_0120_normalised
+laptop/train/laptop_0015_normalised
+laptop/train/laptop_0131_normalised
+laptop/train/laptop_0064_normalised
+laptop/train/laptop_0097_normalised
+laptop/train/laptop_0014_normalised
+laptop/train/laptop_0012_normalised
+laptop/train/laptop_0069_normalised
+laptop/train/laptop_0047_normalised
+laptop/train/laptop_0101_normalised
+laptop/train/laptop_0100_normalised
+laptop/train/laptop_0125_normalised
+laptop/train/laptop_0041_normalised
+laptop/train/laptop_0026_normalised
+laptop/train/laptop_0113_normalised
+laptop/train/laptop_0035_normalised
+laptop/train/laptop_0077_normalised
+laptop/train/laptop_0090_normalised
+laptop/train/laptop_0080_normalised
+laptop/train/laptop_0010_normalised
+laptop/train/laptop_0095_normalised
+laptop/train/laptop_0006_normalised
+laptop/train/laptop_0098_normalised
+laptop/train/laptop_0031_normalised
+laptop/train/laptop_0094_normalised
+laptop/train/laptop_0059_normalised
+laptop/train/laptop_0106_normalised
+laptop/train/laptop_0110_normalised
+laptop/train/laptop_0074_normalised
+laptop/train/laptop_0023_normalised
+laptop/train/laptop_0121_normalised
+laptop/train/laptop_0045_normalised
+laptop/train/laptop_0053_normalised
+laptop/train/laptop_0096_normalised
+laptop/train/laptop_0016_normalised
+laptop/train/laptop_0022_normalised
+laptop/train/laptop_0109_normalised
+laptop/train/laptop_0018_normalised
+laptop/train/laptop_0149_normalised
+laptop/train/laptop_0092_normalised
+laptop/train/laptop_0004_normalised
+laptop/train/laptop_0008_normalised
+laptop/train/laptop_0116_normalised
+laptop/train/laptop_0028_normalised
+laptop/train/laptop_0020_normalised
+laptop/train/laptop_0102_normalised
+laptop/train/laptop_0083_normalised
+laptop/train/laptop_0055_normalised
+laptop/train/laptop_0007_normalised
+laptop/train/laptop_0044_normalised
+laptop/train/laptop_0071_normalised
+laptop/train/laptop_0136_normalised
+laptop/train/laptop_0039_normalised
+laptop/train/laptop_0133_normalised
+laptop/train/laptop_0009_normalised
+laptop/train/laptop_0139_normalised
+laptop/train/laptop_0013_normalised
+laptop/train/laptop_0089_normalised
+laptop/train/laptop_0087_normalised
+laptop/train/laptop_0128_normalised
+laptop/train/laptop_0067_normalised
+laptop/train/laptop_0070_normalised
+laptop/train/laptop_0073_normalised
+laptop/train/laptop_0078_normalised
+laptop/train/laptop_0003_normalised
+laptop/train/laptop_0017_normalised
+laptop/train/laptop_0052_normalised
+laptop/train/laptop_0119_normalised
+laptop/train/laptop_0140_normalised
+laptop/train/laptop_0088_normalised
+laptop/train/laptop_0126_normalised
+laptop/train/laptop_0081_normalised
+laptop/train/laptop_0021_normalised
+laptop/train/laptop_0112_normalised
+laptop/train/laptop_0033_normalised
+laptop/train/laptop_0099_normalised
+laptop/train/laptop_0049_normalised
+laptop/train/laptop_0063_normalised
+laptop/train/laptop_0103_normalised
+laptop/train/laptop_0123_normalised
+laptop/train/laptop_0142_normalised
+laptop/train/laptop_0043_normalised
+laptop/train/laptop_0061_normalised
+laptop/train/laptop_0048_normalised
+laptop/train/laptop_0076_normalised
+laptop/train/laptop_0145_normalised
+laptop/train/laptop_0122_normalised
+laptop/train/laptop_0115_normalised
+laptop/train/laptop_0104_normalised
+laptop/train/laptop_0034_normalised
+laptop/train/laptop_0019_normalised
+laptop/train/laptop_0130_normalised
+laptop/train/laptop_0050_normalised
+laptop/train/laptop_0051_normalised
+laptop/train/laptop_0056_normalised
+laptop/train/laptop_0001_normalised
+laptop/train/laptop_0135_normalised
+laptop/train/laptop_0082_normalised
+laptop/train/laptop_0093_normalised
+laptop/train/laptop_0068_normalised
+laptop/train/laptop_0137_normalised
+laptop/train/laptop_0029_normalised
+laptop/train/laptop_0042_normalised
+laptop/train/laptop_0147_normalised
+laptop/train/laptop_0075_normalised
+laptop/train/laptop_0058_normalised
+laptop/train/laptop_0108_normalised
+laptop/train/laptop_0134_normalised
+laptop/train/laptop_0030_normalised
+laptop/train/laptop_0002_normalised
+laptop/train/laptop_0143_normalised
+laptop/train/laptop_0072_normalised
+laptop/train/laptop_0129_normalised
+laptop/train/laptop_0025_normalised
+laptop/train/laptop_0024_normalised
+laptop/train/laptop_0040_normalised
+laptop/train/laptop_0114_normalised
+laptop/train/laptop_0127_normalised
+laptop/train/laptop_0027_normalised
+laptop/train/laptop_0011_normalised
+laptop/train/laptop_0032_normalised
+laptop/train/laptop_0079_normalised
+laptop/train/laptop_0066_normalised
+laptop/train/laptop_0124_normalised
+laptop/train/laptop_0065_normalised
+laptop/train/laptop_0138_normalised
+laptop/train/laptop_0144_normalised
+laptop/train/laptop_0062_normalised
+laptop/train/laptop_0146_normalised
+laptop/train/laptop_0111_normalised
+laptop/train/laptop_0046_normalised
+laptop/train/laptop_0086_normalised
+laptop/train/laptop_0105_normalised
+laptop/train/laptop_0005_normalised
+laptop/test/laptop_0153_normalised
+laptop/test/laptop_0165_normalised
+laptop/test/laptop_0158_normalised
+laptop/test/laptop_0154_normalised
+laptop/test/laptop_0150_normalised
+laptop/test/laptop_0162_normalised
+laptop/test/laptop_0169_normalised
+laptop/test/laptop_0159_normalised
+laptop/test/laptop_0168_normalised
+laptop/test/laptop_0164_normalised
+laptop/test/laptop_0156_normalised
+laptop/test/laptop_0167_normalised
+laptop/test/laptop_0155_normalised
+laptop/test/laptop_0152_normalised
+laptop/test/laptop_0160_normalised
+laptop/test/laptop_0161_normalised
+laptop/test/laptop_0163_normalised
+laptop/test/laptop_0166_normalised
+laptop/test/laptop_0151_normalised
+laptop/test/laptop_0157_normalised
+stairs/train/stairs_0011_normalised
+stairs/train/stairs_0058_normalised
+stairs/train/stairs_0014_normalised
+stairs/train/stairs_0006_normalised
+stairs/train/stairs_0110_normalised
+stairs/train/stairs_0095_normalised
+stairs/train/stairs_0019_normalised
+stairs/train/stairs_0024_normalised
+stairs/train/stairs_0031_normalised
+stairs/train/stairs_0002_normalised
+stairs/train/stairs_0077_normalised
+stairs/train/stairs_0048_normalised
+stairs/train/stairs_0029_normalised
+stairs/train/stairs_0123_normalised
+stairs/train/stairs_0059_normalised
+stairs/train/stairs_0111_normalised
+stairs/train/stairs_0004_normalised
+stairs/train/stairs_0101_normalised
+stairs/train/stairs_0054_normalised
+stairs/train/stairs_0020_normalised
+stairs/train/stairs_0063_normalised
+stairs/train/stairs_0056_normalised
+stairs/train/stairs_0082_normalised
+stairs/train/stairs_0005_normalised
+stairs/train/stairs_0062_normalised
+stairs/train/stairs_0040_normalised
+stairs/train/stairs_0032_normalised
+stairs/train/stairs_0042_normalised
+stairs/train/stairs_0086_normalised
+stairs/train/stairs_0041_normalised
+stairs/train/stairs_0016_normalised
+stairs/train/stairs_0023_normalised
+stairs/train/stairs_0018_normalised
+stairs/train/stairs_0051_normalised
+stairs/train/stairs_0008_normalised
+stairs/train/stairs_0074_normalised
+stairs/train/stairs_0027_normalised
+stairs/train/stairs_0116_normalised
+stairs/train/stairs_0039_normalised
+stairs/train/stairs_0009_normalised
+stairs/train/stairs_0114_normalised
+stairs/train/stairs_0030_normalised
+stairs/train/stairs_0091_normalised
+stairs/train/stairs_0047_normalised
+stairs/train/stairs_0028_normalised
+stairs/train/stairs_0010_normalised
+stairs/train/stairs_0088_normalised
+stairs/train/stairs_0108_normalised
+stairs/train/stairs_0003_normalised
+stairs/train/stairs_0055_normalised
+stairs/train/stairs_0034_normalised
+stairs/train/stairs_0076_normalised
+stairs/train/stairs_0122_normalised
+stairs/train/stairs_0107_normalised
+stairs/train/stairs_0084_normalised
+stairs/train/stairs_0036_normalised
+stairs/train/stairs_0089_normalised
+stairs/train/stairs_0001_normalised
+stairs/train/stairs_0121_normalised
+stairs/train/stairs_0085_normalised
+stairs/train/stairs_0072_normalised
+stairs/train/stairs_0113_normalised
+stairs/train/stairs_0120_normalised
+stairs/train/stairs_0070_normalised
+stairs/train/stairs_0021_normalised
+stairs/train/stairs_0100_normalised
+stairs/train/stairs_0050_normalised
+stairs/train/stairs_0022_normalised
+stairs/train/stairs_0079_normalised
+stairs/train/stairs_0083_normalised
+stairs/train/stairs_0049_normalised
+stairs/train/stairs_0068_normalised
+stairs/train/stairs_0106_normalised
+stairs/train/stairs_0078_normalised
+stairs/train/stairs_0119_normalised
+stairs/train/stairs_0067_normalised
+stairs/train/stairs_0065_normalised
+stairs/train/stairs_0098_normalised
+stairs/train/stairs_0038_normalised
+stairs/train/stairs_0017_normalised
+stairs/train/stairs_0080_normalised
+stairs/train/stairs_0118_normalised
+stairs/train/stairs_0060_normalised
+stairs/train/stairs_0052_normalised
+stairs/train/stairs_0124_normalised
+stairs/train/stairs_0015_normalised
+stairs/train/stairs_0069_normalised
+stairs/train/stairs_0115_normalised
+stairs/train/stairs_0046_normalised
+stairs/train/stairs_0081_normalised
+stairs/train/stairs_0044_normalised
+stairs/train/stairs_0043_normalised
+stairs/train/stairs_0117_normalised
+stairs/train/stairs_0102_normalised
+stairs/train/stairs_0066_normalised
+stairs/train/stairs_0075_normalised
+stairs/train/stairs_0007_normalised
+stairs/train/stairs_0035_normalised
+stairs/train/stairs_0061_normalised
+stairs/train/stairs_0053_normalised
+stairs/train/stairs_0026_normalised
+stairs/train/stairs_0112_normalised
+stairs/train/stairs_0087_normalised
+stairs/train/stairs_0037_normalised
+stairs/train/stairs_0109_normalised
+stairs/train/stairs_0012_normalised
+stairs/train/stairs_0105_normalised
+stairs/train/stairs_0013_normalised
+stairs/train/stairs_0025_normalised
+stairs/train/stairs_0097_normalised
+stairs/train/stairs_0073_normalised
+stairs/train/stairs_0093_normalised
+stairs/train/stairs_0090_normalised
+stairs/train/stairs_0045_normalised
+stairs/train/stairs_0104_normalised
+stairs/train/stairs_0092_normalised
+stairs/train/stairs_0099_normalised
+stairs/train/stairs_0096_normalised
+stairs/train/stairs_0071_normalised
+stairs/train/stairs_0103_normalised
+stairs/train/stairs_0057_normalised
+stairs/train/stairs_0064_normalised
+stairs/train/stairs_0094_normalised
+stairs/train/stairs_0033_normalised
+stairs/test/stairs_0131_normalised
+stairs/test/stairs_0125_normalised
+stairs/test/stairs_0136_normalised
+stairs/test/stairs_0133_normalised
+stairs/test/stairs_0141_normalised
+stairs/test/stairs_0137_normalised
+stairs/test/stairs_0142_normalised
+stairs/test/stairs_0128_normalised
+stairs/test/stairs_0129_normalised
+stairs/test/stairs_0135_normalised
+stairs/test/stairs_0138_normalised
+stairs/test/stairs_0139_normalised
+stairs/test/stairs_0132_normalised
+stairs/test/stairs_0130_normalised
+stairs/test/stairs_0134_normalised
+stairs/test/stairs_0143_normalised
+stairs/test/stairs_0126_normalised
+stairs/test/stairs_0127_normalised
+stairs/test/stairs_0140_normalised
+stairs/test/stairs_0144_normalised
+plant/train/plant_0079_normalised
+plant/train/plant_0010_normalised
+plant/train/plant_0001_normalised
+plant/train/plant_0212_normalised
+plant/train/plant_0194_normalised
+plant/train/plant_0069_normalised
+plant/train/plant_0023_normalised
+plant/train/plant_0116_normalised
+plant/train/plant_0064_normalised
+plant/train/plant_0090_normalised
+plant/train/plant_0018_normalised
+plant/train/plant_0026_normalised
+plant/train/plant_0230_normalised
+plant/train/plant_0072_normalised
+plant/train/plant_0135_normalised
+plant/train/plant_0204_normalised
+plant/train/plant_0005_normalised
+plant/train/plant_0138_normalised
+plant/train/plant_0120_normalised
+plant/train/plant_0180_normalised
+plant/train/plant_0046_normalised
+plant/train/plant_0187_normalised
+plant/train/plant_0147_normalised
+plant/train/plant_0060_normalised
+plant/train/plant_0094_normalised
+plant/train/plant_0231_normalised
+plant/train/plant_0134_normalised
+plant/train/plant_0139_normalised
+plant/train/plant_0163_normalised
+plant/train/plant_0174_normalised
+plant/train/plant_0184_normalised
+plant/train/plant_0197_normalised
+plant/train/plant_0083_normalised
+plant/train/plant_0053_normalised
+plant/train/plant_0015_normalised
+plant/train/plant_0201_normalised
+plant/train/plant_0038_normalised
+plant/train/plant_0087_normalised
+plant/train/plant_0088_normalised
+plant/train/plant_0198_normalised
+plant/train/plant_0117_normalised
+plant/train/plant_0227_normalised
+plant/train/plant_0137_normalised
+plant/train/plant_0009_normalised
+plant/train/plant_0218_normalised
+plant/train/plant_0044_normalised
+plant/train/plant_0093_normalised
+plant/train/plant_0167_normalised
+plant/train/plant_0161_normalised
+plant/train/plant_0043_normalised
+plant/train/plant_0111_normalised
+plant/train/plant_0035_normalised
+plant/train/plant_0006_normalised
+plant/train/plant_0178_normalised
+plant/train/plant_0215_normalised
+plant/train/plant_0080_normalised
+plant/train/plant_0179_normalised
+plant/train/plant_0108_normalised
+plant/train/plant_0177_normalised
+plant/train/plant_0054_normalised
+plant/train/plant_0068_normalised
+plant/train/plant_0050_normalised
+plant/train/plant_0021_normalised
+plant/train/plant_0031_normalised
+plant/train/plant_0162_normalised
+plant/train/plant_0168_normalised
+plant/train/plant_0152_normalised
+plant/train/plant_0149_normalised
+plant/train/plant_0029_normalised
+plant/train/plant_0214_normalised
+plant/train/plant_0209_normalised
+plant/train/plant_0127_normalised
+plant/train/plant_0144_normalised
+plant/train/plant_0007_normalised
+plant/train/plant_0133_normalised
+plant/train/plant_0077_normalised
+plant/train/plant_0129_normalised
+plant/train/plant_0131_normalised
+plant/train/plant_0171_normalised
+plant/train/plant_0157_normalised
+plant/train/plant_0208_normalised
+plant/train/plant_0223_normalised
+plant/train/plant_0205_normalised
+plant/train/plant_0104_normalised
+plant/train/plant_0075_normalised
+plant/train/plant_0159_normalised
+plant/train/plant_0188_normalised
+plant/train/plant_0063_normalised
+plant/train/plant_0202_normalised
+plant/train/plant_0081_normalised
+plant/train/plant_0222_normalised
+plant/train/plant_0020_normalised
+plant/train/plant_0132_normalised
+plant/train/plant_0082_normalised
+plant/train/plant_0004_normalised
+plant/train/plant_0191_normalised
+plant/train/plant_0136_normalised
+plant/train/plant_0226_normalised
+plant/train/plant_0155_normalised
+plant/train/plant_0200_normalised
+plant/train/plant_0140_normalised
+plant/train/plant_0166_normalised
+plant/train/plant_0041_normalised
+plant/train/plant_0206_normalised
+plant/train/plant_0040_normalised
+plant/train/plant_0189_normalised
+plant/train/plant_0002_normalised
+plant/train/plant_0036_normalised
+plant/train/plant_0237_normalised
+plant/train/plant_0098_normalised
+plant/train/plant_0175_normalised
+plant/train/plant_0233_normalised
+plant/train/plant_0067_normalised
+plant/train/plant_0225_normalised
+plant/train/plant_0057_normalised
+plant/train/plant_0228_normalised
+plant/train/plant_0028_normalised
+plant/train/plant_0118_normalised
+plant/train/plant_0025_normalised
+plant/train/plant_0121_normalised
+plant/train/plant_0238_normalised
+plant/train/plant_0160_normalised
+plant/train/plant_0112_normalised
+plant/train/plant_0146_normalised
+plant/train/plant_0213_normalised
+plant/train/plant_0217_normalised
+plant/train/plant_0085_normalised
+plant/train/plant_0097_normalised
+plant/train/plant_0061_normalised
+plant/train/plant_0019_normalised
+plant/train/plant_0109_normalised
+plant/train/plant_0193_normalised
+plant/train/plant_0102_normalised
+plant/train/plant_0216_normalised
+plant/train/plant_0172_normalised
+plant/train/plant_0203_normalised
+plant/train/plant_0030_normalised
+plant/train/plant_0099_normalised
+plant/train/plant_0076_normalised
+plant/train/plant_0143_normalised
+plant/train/plant_0153_normalised
+plant/train/plant_0156_normalised
+plant/train/plant_0066_normalised
+plant/train/plant_0182_normalised
+plant/train/plant_0065_normalised
+plant/train/plant_0022_normalised
+plant/train/plant_0033_normalised
+plant/train/plant_0095_normalised
+plant/train/plant_0183_normalised
+plant/train/plant_0016_normalised
+plant/train/plant_0126_normalised
+plant/train/plant_0119_normalised
+plant/train/plant_0176_normalised
+plant/train/plant_0042_normalised
+plant/train/plant_0086_normalised
+plant/train/plant_0045_normalised
+plant/train/plant_0164_normalised
+plant/train/plant_0196_normalised
+plant/train/plant_0123_normalised
+plant/train/plant_0032_normalised
+plant/train/plant_0199_normalised
+plant/train/plant_0012_normalised
+plant/train/plant_0014_normalised
+plant/train/plant_0091_normalised
+plant/train/plant_0141_normalised
+plant/train/plant_0034_normalised
+plant/train/plant_0103_normalised
+plant/train/plant_0145_normalised
+plant/train/plant_0240_normalised
+plant/train/plant_0158_normalised
+plant/train/plant_0169_normalised
+plant/train/plant_0239_normalised
+plant/train/plant_0074_normalised
+plant/train/plant_0124_normalised
+plant/train/plant_0055_normalised
+plant/train/plant_0073_normalised
+plant/train/plant_0070_normalised
+plant/train/plant_0114_normalised
+plant/train/plant_0221_normalised
+plant/train/plant_0236_normalised
+plant/train/plant_0084_normalised
+plant/train/plant_0008_normalised
+plant/train/plant_0219_normalised
+plant/train/plant_0017_normalised
+plant/train/plant_0192_normalised
+plant/train/plant_0207_normalised
+plant/train/plant_0100_normalised
+plant/train/plant_0051_normalised
+plant/train/plant_0165_normalised
+plant/train/plant_0190_normalised
+plant/train/plant_0154_normalised
+plant/train/plant_0027_normalised
+plant/train/plant_0148_normalised
+plant/train/plant_0047_normalised
+plant/train/plant_0173_normalised
+plant/train/plant_0013_normalised
+plant/train/plant_0039_normalised
+plant/train/plant_0089_normalised
+plant/train/plant_0058_normalised
+plant/train/plant_0185_normalised
+plant/train/plant_0151_normalised
+plant/train/plant_0037_normalised
+plant/train/plant_0186_normalised
+plant/train/plant_0056_normalised
+plant/train/plant_0092_normalised
+plant/train/plant_0210_normalised
+plant/train/plant_0150_normalised
+plant/train/plant_0181_normalised
+plant/train/plant_0232_normalised
+plant/train/plant_0234_normalised
+plant/train/plant_0024_normalised
+plant/train/plant_0122_normalised
+plant/train/plant_0170_normalised
+plant/train/plant_0142_normalised
+plant/train/plant_0105_normalised
+plant/train/plant_0011_normalised
+plant/train/plant_0128_normalised
+plant/train/plant_0130_normalised
+plant/train/plant_0106_normalised
+plant/train/plant_0096_normalised
+plant/train/plant_0062_normalised
+plant/train/plant_0101_normalised
+plant/train/plant_0113_normalised
+plant/train/plant_0224_normalised
+plant/train/plant_0115_normalised
+plant/train/plant_0078_normalised
+plant/train/plant_0125_normalised
+plant/train/plant_0211_normalised
+plant/train/plant_0071_normalised
+plant/train/plant_0003_normalised
+plant/train/plant_0110_normalised
+plant/train/plant_0059_normalised
+plant/train/plant_0220_normalised
+plant/train/plant_0052_normalised
+plant/train/plant_0229_normalised
+plant/train/plant_0195_normalised
+plant/train/plant_0235_normalised
+plant/train/plant_0049_normalised
+plant/train/plant_0107_normalised
+plant/train/plant_0048_normalised
+plant/test/plant_0328_normalised
+plant/test/plant_0248_normalised
+plant/test/plant_0321_normalised
+plant/test/plant_0285_normalised
+plant/test/plant_0244_normalised
+plant/test/plant_0275_normalised
+plant/test/plant_0269_normalised
+plant/test/plant_0273_normalised
+plant/test/plant_0259_normalised
+plant/test/plant_0317_normalised
+plant/test/plant_0291_normalised
+plant/test/plant_0320_normalised
+plant/test/plant_0280_normalised
+plant/test/plant_0286_normalised
+plant/test/plant_0296_normalised
+plant/test/plant_0309_normalised
+plant/test/plant_0301_normalised
+plant/test/plant_0289_normalised
+plant/test/plant_0334_normalised
+plant/test/plant_0265_normalised
+plant/test/plant_0279_normalised
+plant/test/plant_0241_normalised
+plant/test/plant_0337_normalised
+plant/test/plant_0300_normalised
+plant/test/plant_0311_normalised
+plant/test/plant_0283_normalised
+plant/test/plant_0308_normalised
+plant/test/plant_0261_normalised
+plant/test/plant_0329_normalised
+plant/test/plant_0268_normalised
+plant/test/plant_0245_normalised
+plant/test/plant_0252_normalised
+plant/test/plant_0288_normalised
+plant/test/plant_0307_normalised
+plant/test/plant_0258_normalised
+plant/test/plant_0249_normalised
+plant/test/plant_0340_normalised
+plant/test/plant_0324_normalised
+plant/test/plant_0257_normalised
+plant/test/plant_0305_normalised
+plant/test/plant_0295_normalised
+plant/test/plant_0247_normalised
+plant/test/plant_0336_normalised
+plant/test/plant_0284_normalised
+plant/test/plant_0276_normalised
+plant/test/plant_0326_normalised
+plant/test/plant_0322_normalised
+plant/test/plant_0254_normalised
+plant/test/plant_0293_normalised
+plant/test/plant_0260_normalised
+plant/test/plant_0298_normalised
+plant/test/plant_0318_normalised
+plant/test/plant_0272_normalised
+plant/test/plant_0332_normalised
+plant/test/plant_0262_normalised
+plant/test/plant_0312_normalised
+plant/test/plant_0250_normalised
+plant/test/plant_0242_normalised
+plant/test/plant_0316_normalised
+plant/test/plant_0255_normalised
+plant/test/plant_0270_normalised
+plant/test/plant_0243_normalised
+plant/test/plant_0333_normalised
+plant/test/plant_0251_normalised
+plant/test/plant_0256_normalised
+plant/test/plant_0335_normalised
+plant/test/plant_0339_normalised
+plant/test/plant_0282_normalised
+plant/test/plant_0313_normalised
+plant/test/plant_0267_normalised
+plant/test/plant_0287_normalised
+plant/test/plant_0294_normalised
+plant/test/plant_0290_normalised
+plant/test/plant_0264_normalised
+plant/test/plant_0292_normalised
+plant/test/plant_0274_normalised
+plant/test/plant_0266_normalised
+plant/test/plant_0327_normalised
+plant/test/plant_0263_normalised
+plant/test/plant_0278_normalised
+plant/test/plant_0338_normalised
+plant/test/plant_0306_normalised
+plant/test/plant_0299_normalised
+plant/test/plant_0331_normalised
+plant/test/plant_0304_normalised
+plant/test/plant_0253_normalised
+plant/test/plant_0315_normalised
+plant/test/plant_0325_normalised
+plant/test/plant_0323_normalised
+plant/test/plant_0303_normalised
+plant/test/plant_0302_normalised
+plant/test/plant_0277_normalised
+plant/test/plant_0319_normalised
+plant/test/plant_0310_normalised
+plant/test/plant_0297_normalised
+plant/test/plant_0330_normalised
+plant/test/plant_0314_normalised
+plant/test/plant_0246_normalised
+plant/test/plant_0281_normalised
+plant/test/plant_0271_normalised
+bathtub/train/bathtub_0105_normalised
+bathtub/train/bathtub_0098_normalised
+bathtub/train/bathtub_0088_normalised
+bathtub/train/bathtub_0008_normalised
+bathtub/train/bathtub_0043_normalised
+bathtub/train/bathtub_0081_normalised
+bathtub/train/bathtub_0009_normalised
+bathtub/train/bathtub_0079_normalised
+bathtub/train/bathtub_0067_normalised
+bathtub/train/bathtub_0032_normalised
+bathtub/train/bathtub_0012_normalised
+bathtub/train/bathtub_0072_normalised
+bathtub/train/bathtub_0003_normalised
+bathtub/train/bathtub_0059_normalised
+bathtub/train/bathtub_0061_normalised
+bathtub/train/bathtub_0089_normalised
+bathtub/train/bathtub_0006_normalised
+bathtub/train/bathtub_0097_normalised
+bathtub/train/bathtub_0010_normalised
+bathtub/train/bathtub_0101_normalised
+bathtub/train/bathtub_0047_normalised
+bathtub/train/bathtub_0027_normalised
+bathtub/train/bathtub_0033_normalised
+bathtub/train/bathtub_0058_normalised
+bathtub/train/bathtub_0035_normalised
+bathtub/train/bathtub_0014_normalised
+bathtub/train/bathtub_0069_normalised
+bathtub/train/bathtub_0048_normalised
+bathtub/train/bathtub_0031_normalised
+bathtub/train/bathtub_0011_normalised
+bathtub/train/bathtub_0056_normalised
+bathtub/train/bathtub_0037_normalised
+bathtub/train/bathtub_0076_normalised
+bathtub/train/bathtub_0007_normalised
+bathtub/train/bathtub_0062_normalised
+bathtub/train/bathtub_0093_normalised
+bathtub/train/bathtub_0034_normalised
+bathtub/train/bathtub_0064_normalised
+bathtub/train/bathtub_0104_normalised
+bathtub/train/bathtub_0071_normalised
+bathtub/train/bathtub_0016_normalised
+bathtub/train/bathtub_0054_normalised
+bathtub/train/bathtub_0085_normalised
+bathtub/train/bathtub_0092_normalised
+bathtub/train/bathtub_0040_normalised
+bathtub/train/bathtub_0046_normalised
+bathtub/train/bathtub_0024_normalised
+bathtub/train/bathtub_0042_normalised
+bathtub/train/bathtub_0086_normalised
+bathtub/train/bathtub_0022_normalised
+bathtub/train/bathtub_0082_normalised
+bathtub/train/bathtub_0063_normalised
+bathtub/train/bathtub_0053_normalised
+bathtub/train/bathtub_0004_normalised
+bathtub/train/bathtub_0096_normalised
+bathtub/train/bathtub_0021_normalised
+bathtub/train/bathtub_0036_normalised
+bathtub/train/bathtub_0030_normalised
+bathtub/train/bathtub_0051_normalised
+bathtub/train/bathtub_0041_normalised
+bathtub/train/bathtub_0068_normalised
+bathtub/train/bathtub_0044_normalised
+bathtub/train/bathtub_0002_normalised
+bathtub/train/bathtub_0087_normalised
+bathtub/train/bathtub_0091_normalised
+bathtub/train/bathtub_0100_normalised
+bathtub/train/bathtub_0084_normalised
+bathtub/train/bathtub_0094_normalised
+bathtub/train/bathtub_0102_normalised
+bathtub/train/bathtub_0052_normalised
+bathtub/train/bathtub_0080_normalised
+bathtub/train/bathtub_0077_normalised
+bathtub/train/bathtub_0049_normalised
+bathtub/train/bathtub_0029_normalised
+bathtub/train/bathtub_0013_normalised
+bathtub/train/bathtub_0060_normalised
+bathtub/train/bathtub_0020_normalised
+bathtub/train/bathtub_0103_normalised
+bathtub/train/bathtub_0095_normalised
+bathtub/train/bathtub_0017_normalised
+bathtub/train/bathtub_0018_normalised
+bathtub/train/bathtub_0090_normalised
+bathtub/train/bathtub_0019_normalised
+bathtub/train/bathtub_0001_normalised
+bathtub/train/bathtub_0005_normalised
+bathtub/train/bathtub_0099_normalised
+bathtub/train/bathtub_0070_normalised
+bathtub/train/bathtub_0045_normalised
+bathtub/train/bathtub_0073_normalised
+bathtub/train/bathtub_0025_normalised
+bathtub/train/bathtub_0066_normalised
+bathtub/train/bathtub_0078_normalised
+bathtub/train/bathtub_0057_normalised
+bathtub/train/bathtub_0039_normalised
+bathtub/train/bathtub_0075_normalised
+bathtub/train/bathtub_0038_normalised
+bathtub/train/bathtub_0055_normalised
+bathtub/train/bathtub_0074_normalised
+bathtub/train/bathtub_0026_normalised
+bathtub/train/bathtub_0050_normalised
+bathtub/train/bathtub_0028_normalised
+bathtub/train/bathtub_0083_normalised
+bathtub/train/bathtub_0023_normalised
+bathtub/train/bathtub_0065_normalised
+bathtub/train/bathtub_0015_normalised
+bathtub/train/bathtub_0106_normalised
+bathtub/test/bathtub_0120_normalised
+bathtub/test/bathtub_0133_normalised
+bathtub/test/bathtub_0130_normalised
+bathtub/test/bathtub_0152_normalised
+bathtub/test/bathtub_0113_normalised
+bathtub/test/bathtub_0131_normalised
+bathtub/test/bathtub_0156_normalised
+bathtub/test/bathtub_0138_normalised
+bathtub/test/bathtub_0127_normalised
+bathtub/test/bathtub_0150_normalised
+bathtub/test/bathtub_0107_normalised
+bathtub/test/bathtub_0109_normalised
+bathtub/test/bathtub_0146_normalised
+bathtub/test/bathtub_0125_normalised
+bathtub/test/bathtub_0153_normalised
+bathtub/test/bathtub_0122_normalised
+bathtub/test/bathtub_0121_normalised
+bathtub/test/bathtub_0155_normalised
+bathtub/test/bathtub_0134_normalised
+bathtub/test/bathtub_0114_normalised
+bathtub/test/bathtub_0126_normalised
+bathtub/test/bathtub_0118_normalised
+bathtub/test/bathtub_0116_normalised
+bathtub/test/bathtub_0135_normalised
+bathtub/test/bathtub_0123_normalised
+bathtub/test/bathtub_0137_normalised
+bathtub/test/bathtub_0142_normalised
+bathtub/test/bathtub_0115_normalised
+bathtub/test/bathtub_0143_normalised
+bathtub/test/bathtub_0151_normalised
+bathtub/test/bathtub_0132_normalised
+bathtub/test/bathtub_0149_normalised
+bathtub/test/bathtub_0112_normalised
+bathtub/test/bathtub_0124_normalised
+bathtub/test/bathtub_0128_normalised
+bathtub/test/bathtub_0148_normalised
+bathtub/test/bathtub_0110_normalised
+bathtub/test/bathtub_0111_normalised
+bathtub/test/bathtub_0147_normalised
+bathtub/test/bathtub_0117_normalised
+bathtub/test/bathtub_0145_normalised
+bathtub/test/bathtub_0141_normalised
+bathtub/test/bathtub_0108_normalised
+bathtub/test/bathtub_0140_normalised
+bathtub/test/bathtub_0139_normalised
+bathtub/test/bathtub_0154_normalised
+bathtub/test/bathtub_0136_normalised
+bathtub/test/bathtub_0119_normalised
+bathtub/test/bathtub_0144_normalised
+bathtub/test/bathtub_0129_normalised
+dresser/train/dresser_0039_normalised
+dresser/train/dresser_0081_normalised
+dresser/train/dresser_0124_normalised
+dresser/train/dresser_0024_normalised
+dresser/train/dresser_0113_normalised
+dresser/train/dresser_0060_normalised
+dresser/train/dresser_0107_normalised
+dresser/train/dresser_0027_normalised
+dresser/train/dresser_0121_normalised
+dresser/train/dresser_0133_normalised
+dresser/train/dresser_0102_normalised
+dresser/train/dresser_0026_normalised
+dresser/train/dresser_0154_normalised
+dresser/train/dresser_0109_normalised
+dresser/train/dresser_0157_normalised
+dresser/train/dresser_0116_normalised
+dresser/train/dresser_0177_normalised
+dresser/train/dresser_0173_normalised
+dresser/train/dresser_0180_normalised
+dresser/train/dresser_0040_normalised
+dresser/train/dresser_0139_normalised
+dresser/train/dresser_0188_normalised
+dresser/train/dresser_0047_normalised
+dresser/train/dresser_0170_normalised
+dresser/train/dresser_0162_normalised
+dresser/train/dresser_0126_normalised
+dresser/train/dresser_0160_normalised
+dresser/train/dresser_0161_normalised
+dresser/train/dresser_0115_normalised
+dresser/train/dresser_0076_normalised
+dresser/train/dresser_0073_normalised
+dresser/train/dresser_0174_normalised
+dresser/train/dresser_0159_normalised
+dresser/train/dresser_0034_normalised
+dresser/train/dresser_0014_normalised
+dresser/train/dresser_0098_normalised
+dresser/train/dresser_0128_normalised
+dresser/train/dresser_0071_normalised
+dresser/train/dresser_0066_normalised
+dresser/train/dresser_0199_normalised
+dresser/train/dresser_0095_normalised
+dresser/train/dresser_0072_normalised
+dresser/train/dresser_0110_normalised
+dresser/train/dresser_0163_normalised
+dresser/train/dresser_0187_normalised
+dresser/train/dresser_0017_normalised
+dresser/train/dresser_0101_normalised
+dresser/train/dresser_0032_normalised
+dresser/train/dresser_0096_normalised
+dresser/train/dresser_0008_normalised
+dresser/train/dresser_0013_normalised
+dresser/train/dresser_0061_normalised
+dresser/train/dresser_0156_normalised
+dresser/train/dresser_0019_normalised
+dresser/train/dresser_0087_normalised
+dresser/train/dresser_0182_normalised
+dresser/train/dresser_0197_normalised
+dresser/train/dresser_0054_normalised
+dresser/train/dresser_0059_normalised
+dresser/train/dresser_0168_normalised
+dresser/train/dresser_0097_normalised
+dresser/train/dresser_0056_normalised
+dresser/train/dresser_0004_normalised
+dresser/train/dresser_0030_normalised
+dresser/train/dresser_0002_normalised
+dresser/train/dresser_0080_normalised
+dresser/train/dresser_0029_normalised
+dresser/train/dresser_0015_normalised
+dresser/train/dresser_0007_normalised
+dresser/train/dresser_0141_normalised
+dresser/train/dresser_0028_normalised
+dresser/train/dresser_0151_normalised
+dresser/train/dresser_0155_normalised
+dresser/train/dresser_0045_normalised
+dresser/train/dresser_0036_normalised
+dresser/train/dresser_0114_normalised
+dresser/train/dresser_0083_normalised
+dresser/train/dresser_0191_normalised
+dresser/train/dresser_0079_normalised
+dresser/train/dresser_0009_normalised
+dresser/train/dresser_0147_normalised
+dresser/train/dresser_0144_normalised
+dresser/train/dresser_0198_normalised
+dresser/train/dresser_0104_normalised
+dresser/train/dresser_0011_normalised
+dresser/train/dresser_0041_normalised
+dresser/train/dresser_0038_normalised
+dresser/train/dresser_0143_normalised
+dresser/train/dresser_0123_normalised
+dresser/train/dresser_0184_normalised
+dresser/train/dresser_0176_normalised
+dresser/train/dresser_0091_normalised
+dresser/train/dresser_0179_normalised
+dresser/train/dresser_0075_normalised
+dresser/train/dresser_0053_normalised
+dresser/train/dresser_0033_normalised
+dresser/train/dresser_0099_normalised
+dresser/train/dresser_0086_normalised
+dresser/train/dresser_0152_normalised
+dresser/train/dresser_0020_normalised
+dresser/train/dresser_0134_normalised
+dresser/train/dresser_0051_normalised
+dresser/train/dresser_0149_normalised
+dresser/train/dresser_0145_normalised
+dresser/train/dresser_0050_normalised
+dresser/train/dresser_0055_normalised
+dresser/train/dresser_0078_normalised
+dresser/train/dresser_0077_normalised
+dresser/train/dresser_0031_normalised
+dresser/train/dresser_0003_normalised
+dresser/train/dresser_0094_normalised
+dresser/train/dresser_0138_normalised
+dresser/train/dresser_0172_normalised
+dresser/train/dresser_0185_normalised
+dresser/train/dresser_0064_normalised
+dresser/train/dresser_0164_normalised
+dresser/train/dresser_0023_normalised
+dresser/train/dresser_0131_normalised
+dresser/train/dresser_0194_normalised
+dresser/train/dresser_0165_normalised
+dresser/train/dresser_0189_normalised
+dresser/train/dresser_0146_normalised
+dresser/train/dresser_0130_normalised
+dresser/train/dresser_0065_normalised
+dresser/train/dresser_0106_normalised
+dresser/train/dresser_0043_normalised
+dresser/train/dresser_0190_normalised
+dresser/train/dresser_0166_normalised
+dresser/train/dresser_0048_normalised
+dresser/train/dresser_0117_normalised
+dresser/train/dresser_0153_normalised
+dresser/train/dresser_0122_normalised
+dresser/train/dresser_0192_normalised
+dresser/train/dresser_0132_normalised
+dresser/train/dresser_0069_normalised
+dresser/train/dresser_0067_normalised
+dresser/train/dresser_0196_normalised
+dresser/train/dresser_0042_normalised
+dresser/train/dresser_0112_normalised
+dresser/train/dresser_0129_normalised
+dresser/train/dresser_0068_normalised
+dresser/train/dresser_0169_normalised
+dresser/train/dresser_0108_normalised
+dresser/train/dresser_0195_normalised
+dresser/train/dresser_0118_normalised
+dresser/train/dresser_0085_normalised
+dresser/train/dresser_0022_normalised
+dresser/train/dresser_0005_normalised
+dresser/train/dresser_0120_normalised
+dresser/train/dresser_0100_normalised
+dresser/train/dresser_0010_normalised
+dresser/train/dresser_0136_normalised
+dresser/train/dresser_0025_normalised
+dresser/train/dresser_0049_normalised
+dresser/train/dresser_0074_normalised
+dresser/train/dresser_0084_normalised
+dresser/train/dresser_0057_normalised
+dresser/train/dresser_0127_normalised
+dresser/train/dresser_0001_normalised
+dresser/train/dresser_0037_normalised
+dresser/train/dresser_0125_normalised
+dresser/train/dresser_0111_normalised
+dresser/train/dresser_0140_normalised
+dresser/train/dresser_0105_normalised
+dresser/train/dresser_0158_normalised
+dresser/train/dresser_0150_normalised
+dresser/train/dresser_0167_normalised
+dresser/train/dresser_0171_normalised
+dresser/train/dresser_0178_normalised
+dresser/train/dresser_0012_normalised
+dresser/train/dresser_0137_normalised
+dresser/train/dresser_0044_normalised
+dresser/train/dresser_0070_normalised
+dresser/train/dresser_0142_normalised
+dresser/train/dresser_0082_normalised
+dresser/train/dresser_0186_normalised
+dresser/train/dresser_0062_normalised
+dresser/train/dresser_0175_normalised
+dresser/train/dresser_0018_normalised
+dresser/train/dresser_0046_normalised
+dresser/train/dresser_0006_normalised
+dresser/train/dresser_0181_normalised
+dresser/train/dresser_0035_normalised
+dresser/train/dresser_0193_normalised
+dresser/train/dresser_0092_normalised
+dresser/train/dresser_0135_normalised
+dresser/train/dresser_0063_normalised
+dresser/train/dresser_0089_normalised
+dresser/train/dresser_0183_normalised
+dresser/train/dresser_0088_normalised
+dresser/train/dresser_0016_normalised
+dresser/train/dresser_0090_normalised
+dresser/train/dresser_0103_normalised
+dresser/train/dresser_0021_normalised
+dresser/train/dresser_0058_normalised
+dresser/train/dresser_0200_normalised
+dresser/train/dresser_0119_normalised
+dresser/train/dresser_0093_normalised
+dresser/train/dresser_0052_normalised
+dresser/train/dresser_0148_normalised
+dresser/test/dresser_0270_normalised
+dresser/test/dresser_0253_normalised
+dresser/test/dresser_0274_normalised
+dresser/test/dresser_0272_normalised
+dresser/test/dresser_0266_normalised
+dresser/test/dresser_0255_normalised
+dresser/test/dresser_0281_normalised
+dresser/test/dresser_0271_normalised
+dresser/test/dresser_0240_normalised
+dresser/test/dresser_0243_normalised
+dresser/test/dresser_0246_normalised
+dresser/test/dresser_0259_normalised
+dresser/test/dresser_0265_normalised
+dresser/test/dresser_0229_normalised
+dresser/test/dresser_0273_normalised
+dresser/test/dresser_0215_normalised
+dresser/test/dresser_0256_normalised
+dresser/test/dresser_0201_normalised
+dresser/test/dresser_0260_normalised
+dresser/test/dresser_0242_normalised
+dresser/test/dresser_0285_normalised
+dresser/test/dresser_0250_normalised
+dresser/test/dresser_0263_normalised
+dresser/test/dresser_0220_normalised
+dresser/test/dresser_0261_normalised
+dresser/test/dresser_0213_normalised
+dresser/test/dresser_0232_normalised
+dresser/test/dresser_0226_normalised
+dresser/test/dresser_0214_normalised
+dresser/test/dresser_0249_normalised
+dresser/test/dresser_0269_normalised
+dresser/test/dresser_0221_normalised
+dresser/test/dresser_0233_normalised
+dresser/test/dresser_0227_normalised
+dresser/test/dresser_0280_normalised
+dresser/test/dresser_0223_normalised
+dresser/test/dresser_0238_normalised
+dresser/test/dresser_0222_normalised
+dresser/test/dresser_0202_normalised
+dresser/test/dresser_0277_normalised
+dresser/test/dresser_0262_normalised
+dresser/test/dresser_0230_normalised
+dresser/test/dresser_0279_normalised
+dresser/test/dresser_0275_normalised
+dresser/test/dresser_0218_normalised
+dresser/test/dresser_0225_normalised
+dresser/test/dresser_0219_normalised
+dresser/test/dresser_0224_normalised
+dresser/test/dresser_0268_normalised
+dresser/test/dresser_0207_normalised
+dresser/test/dresser_0210_normalised
+dresser/test/dresser_0286_normalised
+dresser/test/dresser_0257_normalised
+dresser/test/dresser_0208_normalised
+dresser/test/dresser_0267_normalised
+dresser/test/dresser_0247_normalised
+dresser/test/dresser_0237_normalised
+dresser/test/dresser_0217_normalised
+dresser/test/dresser_0239_normalised
+dresser/test/dresser_0241_normalised
+dresser/test/dresser_0206_normalised
+dresser/test/dresser_0284_normalised
+dresser/test/dresser_0258_normalised
+dresser/test/dresser_0283_normalised
+dresser/test/dresser_0211_normalised
+dresser/test/dresser_0228_normalised
+dresser/test/dresser_0244_normalised
+dresser/test/dresser_0254_normalised
+dresser/test/dresser_0216_normalised
+dresser/test/dresser_0251_normalised
+dresser/test/dresser_0252_normalised
+dresser/test/dresser_0209_normalised
+dresser/test/dresser_0203_normalised
+dresser/test/dresser_0264_normalised
+dresser/test/dresser_0231_normalised
+dresser/test/dresser_0276_normalised
+dresser/test/dresser_0278_normalised
+dresser/test/dresser_0235_normalised
+dresser/test/dresser_0205_normalised
+dresser/test/dresser_0245_normalised
+dresser/test/dresser_0234_normalised
+dresser/test/dresser_0248_normalised
+dresser/test/dresser_0212_normalised
+dresser/test/dresser_0236_normalised
+dresser/test/dresser_0204_normalised
+dresser/test/dresser_0282_normalised
+bottle/train/bottle_0087_normalised
+bottle/train/bottle_0033_normalised
+bottle/train/bottle_0028_normalised
+bottle/train/bottle_0111_normalised
+bottle/train/bottle_0173_normalised
+bottle/train/bottle_0012_normalised
+bottle/train/bottle_0051_normalised
+bottle/train/bottle_0309_normalised
+bottle/train/bottle_0026_normalised
+bottle/train/bottle_0302_normalised
+bottle/train/bottle_0113_normalised
+bottle/train/bottle_0213_normalised
+bottle/train/bottle_0005_normalised
+bottle/train/bottle_0042_normalised
+bottle/train/bottle_0194_normalised
+bottle/train/bottle_0011_normalised
+bottle/train/bottle_0314_normalised
+bottle/train/bottle_0178_normalised
+bottle/train/bottle_0245_normalised
+bottle/train/bottle_0299_normalised
+bottle/train/bottle_0333_normalised
+bottle/train/bottle_0235_normalised
+bottle/train/bottle_0332_normalised
+bottle/train/bottle_0120_normalised
+bottle/train/bottle_0256_normalised
+bottle/train/bottle_0331_normalised
+bottle/train/bottle_0166_normalised
+bottle/train/bottle_0134_normalised
+bottle/train/bottle_0253_normalised
+bottle/train/bottle_0203_normalised
+bottle/train/bottle_0096_normalised
+bottle/train/bottle_0043_normalised
+bottle/train/bottle_0079_normalised
+bottle/train/bottle_0013_normalised
+bottle/train/bottle_0295_normalised
+bottle/train/bottle_0287_normalised
+bottle/train/bottle_0177_normalised
+bottle/train/bottle_0219_normalised
+bottle/train/bottle_0264_normalised
+bottle/train/bottle_0266_normalised
+bottle/train/bottle_0310_normalised
+bottle/train/bottle_0183_normalised
+bottle/train/bottle_0214_normalised
+bottle/train/bottle_0229_normalised
+bottle/train/bottle_0007_normalised
+bottle/train/bottle_0273_normalised
+bottle/train/bottle_0180_normalised
+bottle/train/bottle_0189_normalised
+bottle/train/bottle_0095_normalised
+bottle/train/bottle_0207_normalised
+bottle/train/bottle_0278_normalised
+bottle/train/bottle_0123_normalised
+bottle/train/bottle_0085_normalised
+bottle/train/bottle_0170_normalised
+bottle/train/bottle_0242_normalised
+bottle/train/bottle_0237_normalised
+bottle/train/bottle_0092_normalised
+bottle/train/bottle_0251_normalised
+bottle/train/bottle_0246_normalised
+bottle/train/bottle_0330_normalised
+bottle/train/bottle_0027_normalised
+bottle/train/bottle_0152_normalised
+bottle/train/bottle_0212_normalised
+bottle/train/bottle_0014_normalised
+bottle/train/bottle_0115_normalised
+bottle/train/bottle_0088_normalised
+bottle/train/bottle_0058_normalised
+bottle/train/bottle_0291_normalised
+bottle/train/bottle_0265_normalised
+bottle/train/bottle_0296_normalised
+bottle/train/bottle_0281_normalised
+bottle/train/bottle_0097_normalised
+bottle/train/bottle_0103_normalised
+bottle/train/bottle_0046_normalised
+bottle/train/bottle_0305_normalised
+bottle/train/bottle_0271_normalised
+bottle/train/bottle_0009_normalised
+bottle/train/bottle_0304_normalised
+bottle/train/bottle_0121_normalised
+bottle/train/bottle_0303_normalised
+bottle/train/bottle_0044_normalised
+bottle/train/bottle_0108_normalised
+bottle/train/bottle_0163_normalised
+bottle/train/bottle_0241_normalised
+bottle/train/bottle_0148_normalised
+bottle/train/bottle_0149_normalised
+bottle/train/bottle_0032_normalised
+bottle/train/bottle_0293_normalised
+bottle/train/bottle_0069_normalised
+bottle/train/bottle_0105_normalised
+bottle/train/bottle_0258_normalised
+bottle/train/bottle_0100_normalised
+bottle/train/bottle_0322_normalised
+bottle/train/bottle_0062_normalised
+bottle/train/bottle_0277_normalised
+bottle/train/bottle_0209_normalised
+bottle/train/bottle_0231_normalised
+bottle/train/bottle_0182_normalised
+bottle/train/bottle_0201_normalised
+bottle/train/bottle_0018_normalised
+bottle/train/bottle_0107_normalised
+bottle/train/bottle_0323_normalised
+bottle/train/bottle_0071_normalised
+bottle/train/bottle_0004_normalised
+bottle/train/bottle_0167_normalised
+bottle/train/bottle_0228_normalised
+bottle/train/bottle_0057_normalised
+bottle/train/bottle_0116_normalised
+bottle/train/bottle_0035_normalised
+bottle/train/bottle_0118_normalised
+bottle/train/bottle_0131_normalised
+bottle/train/bottle_0024_normalised
+bottle/train/bottle_0283_normalised
+bottle/train/bottle_0133_normalised
+bottle/train/bottle_0335_normalised
+bottle/train/bottle_0084_normalised
+bottle/train/bottle_0260_normalised
+bottle/train/bottle_0060_normalised
+bottle/train/bottle_0065_normalised
+bottle/train/bottle_0284_normalised
+bottle/train/bottle_0155_normalised
+bottle/train/bottle_0110_normalised
+bottle/train/bottle_0248_normalised
+bottle/train/bottle_0244_normalised
+bottle/train/bottle_0226_normalised
+bottle/train/bottle_0143_normalised
+bottle/train/bottle_0222_normalised
+bottle/train/bottle_0139_normalised
+bottle/train/bottle_0176_normalised
+bottle/train/bottle_0301_normalised
+bottle/train/bottle_0070_normalised
+bottle/train/bottle_0206_normalised
+bottle/train/bottle_0068_normalised
+bottle/train/bottle_0257_normalised
+bottle/train/bottle_0015_normalised
+bottle/train/bottle_0250_normalised
+bottle/train/bottle_0261_normalised
+bottle/train/bottle_0225_normalised
+bottle/train/bottle_0112_normalised
+bottle/train/bottle_0267_normalised
+bottle/train/bottle_0199_normalised
+bottle/train/bottle_0077_normalised
+bottle/train/bottle_0288_normalised
+bottle/train/bottle_0262_normalised
+bottle/train/bottle_0168_normalised
+bottle/train/bottle_0270_normalised
+bottle/train/bottle_0200_normalised
+bottle/train/bottle_0252_normalised
+bottle/train/bottle_0001_normalised
+bottle/train/bottle_0243_normalised
+bottle/train/bottle_0127_normalised
+bottle/train/bottle_0236_normalised
+bottle/train/bottle_0210_normalised
+bottle/train/bottle_0169_normalised
+bottle/train/bottle_0268_normalised
+bottle/train/bottle_0072_normalised
+bottle/train/bottle_0274_normalised
+bottle/train/bottle_0151_normalised
+bottle/train/bottle_0320_normalised
+bottle/train/bottle_0285_normalised
+bottle/train/bottle_0145_normalised
+bottle/train/bottle_0093_normalised
+bottle/train/bottle_0003_normalised
+bottle/train/bottle_0146_normalised
+bottle/train/bottle_0117_normalised
+bottle/train/bottle_0179_normalised
+bottle/train/bottle_0317_normalised
+bottle/train/bottle_0061_normalised
+bottle/train/bottle_0185_normalised
+bottle/train/bottle_0075_normalised
+bottle/train/bottle_0308_normalised
+bottle/train/bottle_0083_normalised
+bottle/train/bottle_0175_normalised
+bottle/train/bottle_0129_normalised
+bottle/train/bottle_0205_normalised
+bottle/train/bottle_0220_normalised
+bottle/train/bottle_0196_normalised
+bottle/train/bottle_0276_normalised
+bottle/train/bottle_0188_normalised
+bottle/train/bottle_0049_normalised
+bottle/train/bottle_0021_normalised
+bottle/train/bottle_0130_normalised
+bottle/train/bottle_0202_normalised
+bottle/train/bottle_0315_normalised
+bottle/train/bottle_0132_normalised
+bottle/train/bottle_0050_normalised
+bottle/train/bottle_0198_normalised
+bottle/train/bottle_0081_normalised
+bottle/train/bottle_0156_normalised
+bottle/train/bottle_0221_normalised
+bottle/train/bottle_0190_normalised
+bottle/train/bottle_0089_normalised
+bottle/train/bottle_0269_normalised
+bottle/train/bottle_0334_normalised
+bottle/train/bottle_0114_normalised
+bottle/train/bottle_0106_normalised
+bottle/train/bottle_0158_normalised
+bottle/train/bottle_0254_normalised
+bottle/train/bottle_0307_normalised
+bottle/train/bottle_0160_normalised
+bottle/train/bottle_0161_normalised
+bottle/train/bottle_0030_normalised
+bottle/train/bottle_0138_normalised
+bottle/train/bottle_0064_normalised
+bottle/train/bottle_0224_normalised
+bottle/train/bottle_0038_normalised
+bottle/train/bottle_0211_normalised
+bottle/train/bottle_0172_normalised
+bottle/train/bottle_0047_normalised
+bottle/train/bottle_0321_normalised
+bottle/train/bottle_0263_normalised
+bottle/train/bottle_0017_normalised
+bottle/train/bottle_0311_normalised
+bottle/train/bottle_0197_normalised
+bottle/train/bottle_0094_normalised
+bottle/train/bottle_0039_normalised
+bottle/train/bottle_0022_normalised
+bottle/train/bottle_0019_normalised
+bottle/train/bottle_0150_normalised
+bottle/train/bottle_0109_normalised
+bottle/train/bottle_0157_normalised
+bottle/train/bottle_0099_normalised
+bottle/train/bottle_0141_normalised
+bottle/train/bottle_0204_normalised
+bottle/train/bottle_0192_normalised
+bottle/train/bottle_0029_normalised
+bottle/train/bottle_0128_normalised
+bottle/train/bottle_0074_normalised
+bottle/train/bottle_0230_normalised
+bottle/train/bottle_0045_normalised
+bottle/train/bottle_0327_normalised
+bottle/train/bottle_0329_normalised
+bottle/train/bottle_0319_normalised
+bottle/train/bottle_0286_normalised
+bottle/train/bottle_0147_normalised
+bottle/train/bottle_0054_normalised
+bottle/train/bottle_0318_normalised
+bottle/train/bottle_0249_normalised
+bottle/train/bottle_0008_normalised
+bottle/train/bottle_0137_normalised
+bottle/train/bottle_0037_normalised
+bottle/train/bottle_0233_normalised
+bottle/train/bottle_0259_normalised
+bottle/train/bottle_0324_normalised
+bottle/train/bottle_0053_normalised
+bottle/train/bottle_0016_normalised
+bottle/train/bottle_0078_normalised
+bottle/train/bottle_0215_normalised
+bottle/train/bottle_0010_normalised
+bottle/train/bottle_0119_normalised
+bottle/train/bottle_0297_normalised
+bottle/train/bottle_0162_normalised
+bottle/train/bottle_0184_normalised
+bottle/train/bottle_0208_normalised
+bottle/train/bottle_0006_normalised
+bottle/train/bottle_0063_normalised
+bottle/train/bottle_0195_normalised
+bottle/train/bottle_0290_normalised
+bottle/train/bottle_0082_normalised
+bottle/train/bottle_0181_normalised
+bottle/train/bottle_0056_normalised
+bottle/train/bottle_0048_normalised
+bottle/train/bottle_0275_normalised
+bottle/train/bottle_0313_normalised
+bottle/train/bottle_0055_normalised
+bottle/train/bottle_0040_normalised
+bottle/train/bottle_0240_normalised
+bottle/train/bottle_0191_normalised
+bottle/train/bottle_0300_normalised
+bottle/train/bottle_0316_normalised
+bottle/train/bottle_0217_normalised
+bottle/train/bottle_0328_normalised
+bottle/train/bottle_0090_normalised
+bottle/train/bottle_0073_normalised
+bottle/train/bottle_0218_normalised
+bottle/train/bottle_0159_normalised
+bottle/train/bottle_0154_normalised
+bottle/train/bottle_0247_normalised
+bottle/train/bottle_0140_normalised
+bottle/train/bottle_0174_normalised
+bottle/train/bottle_0238_normalised
+bottle/train/bottle_0216_normalised
+bottle/train/bottle_0234_normalised
+bottle/train/bottle_0144_normalised
+bottle/train/bottle_0282_normalised
+bottle/train/bottle_0002_normalised
+bottle/train/bottle_0086_normalised
+bottle/train/bottle_0066_normalised
+bottle/train/bottle_0326_normalised
+bottle/train/bottle_0023_normalised
+bottle/train/bottle_0153_normalised
+bottle/train/bottle_0041_normalised
+bottle/train/bottle_0165_normalised
+bottle/train/bottle_0135_normalised
+bottle/train/bottle_0052_normalised
+bottle/train/bottle_0076_normalised
+bottle/train/bottle_0239_normalised
+bottle/train/bottle_0312_normalised
+bottle/train/bottle_0122_normalised
+bottle/train/bottle_0292_normalised
+bottle/train/bottle_0227_normalised
+bottle/train/bottle_0091_normalised
+bottle/train/bottle_0325_normalised
+bottle/train/bottle_0124_normalised
+bottle/train/bottle_0101_normalised
+bottle/train/bottle_0125_normalised
+bottle/train/bottle_0034_normalised
+bottle/train/bottle_0136_normalised
+bottle/train/bottle_0164_normalised
+bottle/train/bottle_0171_normalised
+bottle/train/bottle_0031_normalised
+bottle/train/bottle_0298_normalised
+bottle/train/bottle_0067_normalised
+bottle/train/bottle_0025_normalised
+bottle/train/bottle_0098_normalised
+bottle/train/bottle_0193_normalised
+bottle/train/bottle_0059_normalised
+bottle/train/bottle_0272_normalised
+bottle/train/bottle_0232_normalised
+bottle/train/bottle_0279_normalised
+bottle/train/bottle_0126_normalised
+bottle/train/bottle_0102_normalised
+bottle/train/bottle_0294_normalised
+bottle/train/bottle_0142_normalised
+bottle/train/bottle_0223_normalised
+bottle/train/bottle_0289_normalised
+bottle/train/bottle_0280_normalised
+bottle/train/bottle_0104_normalised
+bottle/train/bottle_0186_normalised
+bottle/train/bottle_0255_normalised
+bottle/train/bottle_0306_normalised
+bottle/train/bottle_0020_normalised
+bottle/train/bottle_0036_normalised
+bottle/train/bottle_0187_normalised
+bottle/train/bottle_0080_normalised
+bottle/test/bottle_0409_normalised
+bottle/test/bottle_0370_normalised
+bottle/test/bottle_0417_normalised
+bottle/test/bottle_0419_normalised
+bottle/test/bottle_0429_normalised
+bottle/test/bottle_0416_normalised
+bottle/test/bottle_0403_normalised
+bottle/test/bottle_0354_normalised
+bottle/test/bottle_0352_normalised
+bottle/test/bottle_0382_normalised
+bottle/test/bottle_0345_normalised
+bottle/test/bottle_0394_normalised
+bottle/test/bottle_0427_normalised
+bottle/test/bottle_0435_normalised
+bottle/test/bottle_0405_normalised
+bottle/test/bottle_0393_normalised
+bottle/test/bottle_0366_normalised
+bottle/test/bottle_0359_normalised
+bottle/test/bottle_0428_normalised
+bottle/test/bottle_0399_normalised
+bottle/test/bottle_0385_normalised
+bottle/test/bottle_0411_normalised
+bottle/test/bottle_0367_normalised
+bottle/test/bottle_0364_normalised
+bottle/test/bottle_0406_normalised
+bottle/test/bottle_0357_normalised
+bottle/test/bottle_0356_normalised
+bottle/test/bottle_0338_normalised
+bottle/test/bottle_0358_normalised
+bottle/test/bottle_0362_normalised
+bottle/test/bottle_0424_normalised
+bottle/test/bottle_0368_normalised
+bottle/test/bottle_0422_normalised
+bottle/test/bottle_0426_normalised
+bottle/test/bottle_0342_normalised
+bottle/test/bottle_0408_normalised
+bottle/test/bottle_0337_normalised
+bottle/test/bottle_0412_normalised
+bottle/test/bottle_0355_normalised
+bottle/test/bottle_0353_normalised
+bottle/test/bottle_0363_normalised
+bottle/test/bottle_0400_normalised
+bottle/test/bottle_0420_normalised
+bottle/test/bottle_0395_normalised
+bottle/test/bottle_0388_normalised
+bottle/test/bottle_0351_normalised
+bottle/test/bottle_0340_normalised
+bottle/test/bottle_0365_normalised
+bottle/test/bottle_0361_normalised
+bottle/test/bottle_0407_normalised
+bottle/test/bottle_0423_normalised
+bottle/test/bottle_0344_normalised
+bottle/test/bottle_0346_normalised
+bottle/test/bottle_0383_normalised
+bottle/test/bottle_0425_normalised
+bottle/test/bottle_0339_normalised
+bottle/test/bottle_0386_normalised
+bottle/test/bottle_0415_normalised
+bottle/test/bottle_0433_normalised
+bottle/test/bottle_0392_normalised
+bottle/test/bottle_0432_normalised
+bottle/test/bottle_0414_normalised
+bottle/test/bottle_0397_normalised
+bottle/test/bottle_0396_normalised
+bottle/test/bottle_0373_normalised
+bottle/test/bottle_0343_normalised
+bottle/test/bottle_0379_normalised
+bottle/test/bottle_0350_normalised
+bottle/test/bottle_0401_normalised
+bottle/test/bottle_0336_normalised
+bottle/test/bottle_0390_normalised
+bottle/test/bottle_0347_normalised
+bottle/test/bottle_0374_normalised
+bottle/test/bottle_0349_normalised
+bottle/test/bottle_0387_normalised
+bottle/test/bottle_0421_normalised
+bottle/test/bottle_0380_normalised
+bottle/test/bottle_0398_normalised
+bottle/test/bottle_0375_normalised
+bottle/test/bottle_0434_normalised
+bottle/test/bottle_0391_normalised
+bottle/test/bottle_0372_normalised
+bottle/test/bottle_0384_normalised
+bottle/test/bottle_0341_normalised
+bottle/test/bottle_0404_normalised
+bottle/test/bottle_0389_normalised
+bottle/test/bottle_0378_normalised
+bottle/test/bottle_0376_normalised
+bottle/test/bottle_0369_normalised
+bottle/test/bottle_0371_normalised
+bottle/test/bottle_0431_normalised
+bottle/test/bottle_0410_normalised
+bottle/test/bottle_0360_normalised
+bottle/test/bottle_0418_normalised
+bottle/test/bottle_0377_normalised
+bottle/test/bottle_0381_normalised
+bottle/test/bottle_0413_normalised
+bottle/test/bottle_0402_normalised
+bottle/test/bottle_0348_normalised
+bottle/test/bottle_0430_normalised
+tv_stand/train/tv_stand_0004_normalised
+tv_stand/train/tv_stand_0041_normalised
+tv_stand/train/tv_stand_0038_normalised
+tv_stand/train/tv_stand_0243_normalised
+tv_stand/train/tv_stand_0034_normalised
+tv_stand/train/tv_stand_0024_normalised
+tv_stand/train/tv_stand_0126_normalised
+tv_stand/train/tv_stand_0199_normalised
+tv_stand/train/tv_stand_0085_normalised
+tv_stand/train/tv_stand_0147_normalised
+tv_stand/train/tv_stand_0229_normalised
+tv_stand/train/tv_stand_0037_normalised
+tv_stand/train/tv_stand_0011_normalised
+tv_stand/train/tv_stand_0091_normalised
+tv_stand/train/tv_stand_0042_normalised
+tv_stand/train/tv_stand_0093_normalised
+tv_stand/train/tv_stand_0063_normalised
+tv_stand/train/tv_stand_0036_normalised
+tv_stand/train/tv_stand_0226_normalised
+tv_stand/train/tv_stand_0010_normalised
+tv_stand/train/tv_stand_0066_normalised
+tv_stand/train/tv_stand_0206_normalised
+tv_stand/train/tv_stand_0221_normalised
+tv_stand/train/tv_stand_0022_normalised
+tv_stand/train/tv_stand_0201_normalised
+tv_stand/train/tv_stand_0258_normalised
+tv_stand/train/tv_stand_0084_normalised
+tv_stand/train/tv_stand_0152_normalised
+tv_stand/train/tv_stand_0111_normalised
+tv_stand/train/tv_stand_0160_normalised
+tv_stand/train/tv_stand_0252_normalised
+tv_stand/train/tv_stand_0067_normalised
+tv_stand/train/tv_stand_0033_normalised
+tv_stand/train/tv_stand_0029_normalised
+tv_stand/train/tv_stand_0154_normalised
+tv_stand/train/tv_stand_0060_normalised
+tv_stand/train/tv_stand_0241_normalised
+tv_stand/train/tv_stand_0122_normalised
+tv_stand/train/tv_stand_0035_normalised
+tv_stand/train/tv_stand_0108_normalised
+tv_stand/train/tv_stand_0074_normalised
+tv_stand/train/tv_stand_0103_normalised
+tv_stand/train/tv_stand_0005_normalised
+tv_stand/train/tv_stand_0148_normalised
+tv_stand/train/tv_stand_0064_normalised
+tv_stand/train/tv_stand_0247_normalised
+tv_stand/train/tv_stand_0145_normalised
+tv_stand/train/tv_stand_0259_normalised
+tv_stand/train/tv_stand_0039_normalised
+tv_stand/train/tv_stand_0129_normalised
+tv_stand/train/tv_stand_0032_normalised
+tv_stand/train/tv_stand_0150_normalised
+tv_stand/train/tv_stand_0204_normalised
+tv_stand/train/tv_stand_0052_normalised
+tv_stand/train/tv_stand_0089_normalised
+tv_stand/train/tv_stand_0244_normalised
+tv_stand/train/tv_stand_0055_normalised
+tv_stand/train/tv_stand_0139_normalised
+tv_stand/train/tv_stand_0138_normalised
+tv_stand/train/tv_stand_0128_normalised
+tv_stand/train/tv_stand_0133_normalised
+tv_stand/train/tv_stand_0257_normalised
+tv_stand/train/tv_stand_0070_normalised
+tv_stand/train/tv_stand_0162_normalised
+tv_stand/train/tv_stand_0188_normalised
+tv_stand/train/tv_stand_0230_normalised
+tv_stand/train/tv_stand_0105_normalised
+tv_stand/train/tv_stand_0179_normalised
+tv_stand/train/tv_stand_0249_normalised
+tv_stand/train/tv_stand_0140_normalised
+tv_stand/train/tv_stand_0009_normalised
+tv_stand/train/tv_stand_0015_normalised
+tv_stand/train/tv_stand_0116_normalised
+tv_stand/train/tv_stand_0196_normalised
+tv_stand/train/tv_stand_0159_normalised
+tv_stand/train/tv_stand_0131_normalised
+tv_stand/train/tv_stand_0118_normalised
+tv_stand/train/tv_stand_0180_normalised
+tv_stand/train/tv_stand_0231_normalised
+tv_stand/train/tv_stand_0027_normalised
+tv_stand/train/tv_stand_0068_normalised
+tv_stand/train/tv_stand_0113_normalised
+tv_stand/train/tv_stand_0242_normalised
+tv_stand/train/tv_stand_0237_normalised
+tv_stand/train/tv_stand_0053_normalised
+tv_stand/train/tv_stand_0031_normalised
+tv_stand/train/tv_stand_0001_normalised
+tv_stand/train/tv_stand_0267_normalised
+tv_stand/train/tv_stand_0182_normalised
+tv_stand/train/tv_stand_0219_normalised
+tv_stand/train/tv_stand_0026_normalised
+tv_stand/train/tv_stand_0143_normalised
+tv_stand/train/tv_stand_0209_normalised
+tv_stand/train/tv_stand_0216_normalised
+tv_stand/train/tv_stand_0239_normalised
+tv_stand/train/tv_stand_0245_normalised
+tv_stand/train/tv_stand_0260_normalised
+tv_stand/train/tv_stand_0210_normalised
+tv_stand/train/tv_stand_0048_normalised
+tv_stand/train/tv_stand_0059_normalised
+tv_stand/train/tv_stand_0264_normalised
+tv_stand/train/tv_stand_0213_normalised
+tv_stand/train/tv_stand_0170_normalised
+tv_stand/train/tv_stand_0106_normalised
+tv_stand/train/tv_stand_0175_normalised
+tv_stand/train/tv_stand_0082_normalised
+tv_stand/train/tv_stand_0049_normalised
+tv_stand/train/tv_stand_0194_normalised
+tv_stand/train/tv_stand_0200_normalised
+tv_stand/train/tv_stand_0023_normalised
+tv_stand/train/tv_stand_0110_normalised
+tv_stand/train/tv_stand_0078_normalised
+tv_stand/train/tv_stand_0090_normalised
+tv_stand/train/tv_stand_0232_normalised
+tv_stand/train/tv_stand_0030_normalised
+tv_stand/train/tv_stand_0142_normalised
+tv_stand/train/tv_stand_0255_normalised
+tv_stand/train/tv_stand_0212_normalised
+tv_stand/train/tv_stand_0061_normalised
+tv_stand/train/tv_stand_0007_normalised
+tv_stand/train/tv_stand_0050_normalised
+tv_stand/train/tv_stand_0130_normalised
+tv_stand/train/tv_stand_0065_normalised
+tv_stand/train/tv_stand_0207_normalised
+tv_stand/train/tv_stand_0202_normalised
+tv_stand/train/tv_stand_0087_normalised
+tv_stand/train/tv_stand_0197_normalised
+tv_stand/train/tv_stand_0043_normalised
+tv_stand/train/tv_stand_0236_normalised
+tv_stand/train/tv_stand_0171_normalised
+tv_stand/train/tv_stand_0102_normalised
+tv_stand/train/tv_stand_0114_normalised
+tv_stand/train/tv_stand_0190_normalised
+tv_stand/train/tv_stand_0261_normalised
+tv_stand/train/tv_stand_0168_normalised
+tv_stand/train/tv_stand_0228_normalised
+tv_stand/train/tv_stand_0079_normalised
+tv_stand/train/tv_stand_0136_normalised
+tv_stand/train/tv_stand_0018_normalised
+tv_stand/train/tv_stand_0176_normalised
+tv_stand/train/tv_stand_0156_normalised
+tv_stand/train/tv_stand_0020_normalised
+tv_stand/train/tv_stand_0092_normalised
+tv_stand/train/tv_stand_0189_normalised
+tv_stand/train/tv_stand_0246_normalised
+tv_stand/train/tv_stand_0017_normalised
+tv_stand/train/tv_stand_0262_normalised
+tv_stand/train/tv_stand_0137_normalised
+tv_stand/train/tv_stand_0238_normalised
+tv_stand/train/tv_stand_0161_normalised
+tv_stand/train/tv_stand_0123_normalised
+tv_stand/train/tv_stand_0251_normalised
+tv_stand/train/tv_stand_0191_normalised
+tv_stand/train/tv_stand_0071_normalised
+tv_stand/train/tv_stand_0253_normalised
+tv_stand/train/tv_stand_0040_normalised
+tv_stand/train/tv_stand_0134_normalised
+tv_stand/train/tv_stand_0235_normalised
+tv_stand/train/tv_stand_0220_normalised
+tv_stand/train/tv_stand_0028_normalised
+tv_stand/train/tv_stand_0127_normalised
+tv_stand/train/tv_stand_0164_normalised
+tv_stand/train/tv_stand_0240_normalised
+tv_stand/train/tv_stand_0178_normalised
+tv_stand/train/tv_stand_0121_normalised
+tv_stand/train/tv_stand_0076_normalised
+tv_stand/train/tv_stand_0119_normalised
+tv_stand/train/tv_stand_0124_normalised
+tv_stand/train/tv_stand_0144_normalised
+tv_stand/train/tv_stand_0073_normalised
+tv_stand/train/tv_stand_0167_normalised
+tv_stand/train/tv_stand_0157_normalised
+tv_stand/train/tv_stand_0205_normalised
+tv_stand/train/tv_stand_0222_normalised
+tv_stand/train/tv_stand_0198_normalised
+tv_stand/train/tv_stand_0115_normalised
+tv_stand/train/tv_stand_0155_normalised
+tv_stand/train/tv_stand_0225_normalised
+tv_stand/train/tv_stand_0094_normalised
+tv_stand/train/tv_stand_0072_normalised
+tv_stand/train/tv_stand_0254_normalised
+tv_stand/train/tv_stand_0265_normalised
+tv_stand/train/tv_stand_0256_normalised
+tv_stand/train/tv_stand_0016_normalised
+tv_stand/train/tv_stand_0069_normalised
+tv_stand/train/tv_stand_0051_normalised
+tv_stand/train/tv_stand_0013_normalised
+tv_stand/train/tv_stand_0096_normalised
+tv_stand/train/tv_stand_0135_normalised
+tv_stand/train/tv_stand_0203_normalised
+tv_stand/train/tv_stand_0169_normalised
+tv_stand/train/tv_stand_0233_normalised
+tv_stand/train/tv_stand_0104_normalised
+tv_stand/train/tv_stand_0006_normalised
+tv_stand/train/tv_stand_0248_normalised
+tv_stand/train/tv_stand_0056_normalised
+tv_stand/train/tv_stand_0218_normalised
+tv_stand/train/tv_stand_0003_normalised
+tv_stand/train/tv_stand_0014_normalised
+tv_stand/train/tv_stand_0158_normalised
+tv_stand/train/tv_stand_0083_normalised
+tv_stand/train/tv_stand_0058_normalised
+tv_stand/train/tv_stand_0095_normalised
+tv_stand/train/tv_stand_0062_normalised
+tv_stand/train/tv_stand_0099_normalised
+tv_stand/train/tv_stand_0012_normalised
+tv_stand/train/tv_stand_0263_normalised
+tv_stand/train/tv_stand_0174_normalised
+tv_stand/train/tv_stand_0166_normalised
+tv_stand/train/tv_stand_0223_normalised
+tv_stand/train/tv_stand_0224_normalised
+tv_stand/train/tv_stand_0192_normalised
+tv_stand/train/tv_stand_0109_normalised
+tv_stand/train/tv_stand_0193_normalised
+tv_stand/train/tv_stand_0184_normalised
+tv_stand/train/tv_stand_0044_normalised
+tv_stand/train/tv_stand_0021_normalised
+tv_stand/train/tv_stand_0151_normalised
+tv_stand/train/tv_stand_0195_normalised
+tv_stand/train/tv_stand_0165_normalised
+tv_stand/train/tv_stand_0107_normalised
+tv_stand/train/tv_stand_0057_normalised
+tv_stand/train/tv_stand_0177_normalised
+tv_stand/train/tv_stand_0217_normalised
+tv_stand/train/tv_stand_0208_normalised
+tv_stand/train/tv_stand_0046_normalised
+tv_stand/train/tv_stand_0101_normalised
+tv_stand/train/tv_stand_0153_normalised
+tv_stand/train/tv_stand_0081_normalised
+tv_stand/train/tv_stand_0146_normalised
+tv_stand/train/tv_stand_0149_normalised
+tv_stand/train/tv_stand_0132_normalised
+tv_stand/train/tv_stand_0214_normalised
+tv_stand/train/tv_stand_0266_normalised
+tv_stand/train/tv_stand_0097_normalised
+tv_stand/train/tv_stand_0112_normalised
+tv_stand/train/tv_stand_0002_normalised
+tv_stand/train/tv_stand_0120_normalised
+tv_stand/train/tv_stand_0054_normalised
+tv_stand/train/tv_stand_0047_normalised
+tv_stand/train/tv_stand_0125_normalised
+tv_stand/train/tv_stand_0187_normalised
+tv_stand/train/tv_stand_0185_normalised
+tv_stand/train/tv_stand_0025_normalised
+tv_stand/train/tv_stand_0186_normalised
+tv_stand/train/tv_stand_0098_normalised
+tv_stand/train/tv_stand_0172_normalised
+tv_stand/train/tv_stand_0234_normalised
+tv_stand/train/tv_stand_0019_normalised
+tv_stand/train/tv_stand_0075_normalised
+tv_stand/train/tv_stand_0045_normalised
+tv_stand/train/tv_stand_0141_normalised
+tv_stand/train/tv_stand_0183_normalised
+tv_stand/train/tv_stand_0080_normalised
+tv_stand/train/tv_stand_0117_normalised
+tv_stand/train/tv_stand_0211_normalised
+tv_stand/train/tv_stand_0215_normalised
+tv_stand/train/tv_stand_0008_normalised
+tv_stand/train/tv_stand_0100_normalised
+tv_stand/train/tv_stand_0250_normalised
+tv_stand/train/tv_stand_0181_normalised
+tv_stand/train/tv_stand_0086_normalised
+tv_stand/train/tv_stand_0173_normalised
+tv_stand/train/tv_stand_0077_normalised
+tv_stand/train/tv_stand_0163_normalised
+tv_stand/train/tv_stand_0227_normalised
+tv_stand/train/tv_stand_0088_normalised
+tv_stand/test/tv_stand_0319_normalised
+tv_stand/test/tv_stand_0356_normalised
+tv_stand/test/tv_stand_0367_normalised
+tv_stand/test/tv_stand_0332_normalised
+tv_stand/test/tv_stand_0365_normalised
+tv_stand/test/tv_stand_0311_normalised
+tv_stand/test/tv_stand_0285_normalised
+tv_stand/test/tv_stand_0361_normalised
+tv_stand/test/tv_stand_0289_normalised
+tv_stand/test/tv_stand_0271_normalised
+tv_stand/test/tv_stand_0312_normalised
+tv_stand/test/tv_stand_0278_normalised
+tv_stand/test/tv_stand_0355_normalised
+tv_stand/test/tv_stand_0317_normalised
+tv_stand/test/tv_stand_0338_normalised
+tv_stand/test/tv_stand_0287_normalised
+tv_stand/test/tv_stand_0321_normalised
+tv_stand/test/tv_stand_0346_normalised
+tv_stand/test/tv_stand_0349_normalised
+tv_stand/test/tv_stand_0337_normalised
+tv_stand/test/tv_stand_0300_normalised
+tv_stand/test/tv_stand_0353_normalised
+tv_stand/test/tv_stand_0327_normalised
+tv_stand/test/tv_stand_0292_normalised
+tv_stand/test/tv_stand_0291_normalised
+tv_stand/test/tv_stand_0324_normalised
+tv_stand/test/tv_stand_0308_normalised
+tv_stand/test/tv_stand_0340_normalised
+tv_stand/test/tv_stand_0273_normalised
+tv_stand/test/tv_stand_0315_normalised
+tv_stand/test/tv_stand_0279_normalised
+tv_stand/test/tv_stand_0360_normalised
+tv_stand/test/tv_stand_0296_normalised
+tv_stand/test/tv_stand_0283_normalised
+tv_stand/test/tv_stand_0364_normalised
+tv_stand/test/tv_stand_0299_normalised
+tv_stand/test/tv_stand_0334_normalised
+tv_stand/test/tv_stand_0347_normalised
+tv_stand/test/tv_stand_0363_normalised
+tv_stand/test/tv_stand_0366_normalised
+tv_stand/test/tv_stand_0352_normalised
+tv_stand/test/tv_stand_0343_normalised
+tv_stand/test/tv_stand_0294_normalised
+tv_stand/test/tv_stand_0303_normalised
+tv_stand/test/tv_stand_0330_normalised
+tv_stand/test/tv_stand_0286_normalised
+tv_stand/test/tv_stand_0357_normalised
+tv_stand/test/tv_stand_0301_normalised
+tv_stand/test/tv_stand_0351_normalised
+tv_stand/test/tv_stand_0276_normalised
+tv_stand/test/tv_stand_0280_normalised
+tv_stand/test/tv_stand_0302_normalised
+tv_stand/test/tv_stand_0322_normalised
+tv_stand/test/tv_stand_0341_normalised
+tv_stand/test/tv_stand_0306_normalised
+tv_stand/test/tv_stand_0270_normalised
+tv_stand/test/tv_stand_0359_normalised
+tv_stand/test/tv_stand_0333_normalised
+tv_stand/test/tv_stand_0342_normalised
+tv_stand/test/tv_stand_0336_normalised
+tv_stand/test/tv_stand_0358_normalised
+tv_stand/test/tv_stand_0295_normalised
+tv_stand/test/tv_stand_0326_normalised
+tv_stand/test/tv_stand_0268_normalised
+tv_stand/test/tv_stand_0329_normalised
+tv_stand/test/tv_stand_0284_normalised
+tv_stand/test/tv_stand_0335_normalised
+tv_stand/test/tv_stand_0328_normalised
+tv_stand/test/tv_stand_0277_normalised
+tv_stand/test/tv_stand_0309_normalised
+tv_stand/test/tv_stand_0293_normalised
+tv_stand/test/tv_stand_0275_normalised
+tv_stand/test/tv_stand_0290_normalised
+tv_stand/test/tv_stand_0344_normalised
+tv_stand/test/tv_stand_0331_normalised
+tv_stand/test/tv_stand_0350_normalised
+tv_stand/test/tv_stand_0320_normalised
+tv_stand/test/tv_stand_0310_normalised
+tv_stand/test/tv_stand_0282_normalised
+tv_stand/test/tv_stand_0304_normalised
+tv_stand/test/tv_stand_0325_normalised
+tv_stand/test/tv_stand_0348_normalised
+tv_stand/test/tv_stand_0269_normalised
+tv_stand/test/tv_stand_0314_normalised
+tv_stand/test/tv_stand_0272_normalised
+tv_stand/test/tv_stand_0339_normalised
+tv_stand/test/tv_stand_0345_normalised
+tv_stand/test/tv_stand_0281_normalised
+tv_stand/test/tv_stand_0316_normalised
+tv_stand/test/tv_stand_0298_normalised
+tv_stand/test/tv_stand_0297_normalised
+tv_stand/test/tv_stand_0362_normalised
+tv_stand/test/tv_stand_0318_normalised
+tv_stand/test/tv_stand_0313_normalised
+tv_stand/test/tv_stand_0354_normalised
+tv_stand/test/tv_stand_0288_normalised
+tv_stand/test/tv_stand_0274_normalised
+tv_stand/test/tv_stand_0305_normalised
+tv_stand/test/tv_stand_0323_normalised
+tv_stand/test/tv_stand_0307_normalised
+table/train/table_0066_normalised
+table/train/table_0153_normalised
+table/train/table_0025_normalised
+table/train/table_0036_normalised
+table/train/table_0352_normalised
+table/train/table_0108_normalised
+table/train/table_0079_normalised
+table/train/table_0237_normalised
+table/train/table_0317_normalised
+table/train/table_0004_normalised
+table/train/table_0194_normalised
+table/train/table_0034_normalised
+table/train/table_0249_normalised
+table/train/table_0076_normalised
+table/train/table_0099_normalised
+table/train/table_0125_normalised
+table/train/table_0152_normalised
+table/train/table_0176_normalised
+table/train/table_0113_normalised
+table/train/table_0013_normalised
+table/train/table_0014_normalised
+table/train/table_0118_normalised
+table/train/table_0286_normalised
+table/train/table_0244_normalised
+table/train/table_0021_normalised
+table/train/table_0010_normalised
+table/train/table_0180_normalised
+table/train/table_0229_normalised
+table/train/table_0327_normalised
+table/train/table_0151_normalised
+table/train/table_0082_normalised
+table/train/table_0379_normalised
+table/train/table_0220_normalised
+table/train/table_0306_normalised
+table/train/table_0044_normalised
+table/train/table_0215_normalised
+table/train/table_0030_normalised
+table/train/table_0336_normalised
+table/train/table_0052_normalised
+table/train/table_0050_normalised
+table/train/table_0310_normalised
+table/train/table_0123_normalised
+table/train/table_0390_normalised
+table/train/table_0294_normalised
+table/train/table_0247_normalised
+table/train/table_0209_normalised
+table/train/table_0345_normalised
+table/train/table_0100_normalised
+table/train/table_0109_normalised
+table/train/table_0027_normalised
+table/train/table_0155_normalised
+table/train/table_0264_normalised
+table/train/table_0245_normalised
+table/train/table_0190_normalised
+table/train/table_0283_normalised
+table/train/table_0383_normalised
+table/train/table_0232_normalised
+table/train/table_0046_normalised
+table/train/table_0159_normalised
+table/train/table_0362_normalised
+table/train/table_0234_normalised
+table/train/table_0095_normalised
+table/train/table_0150_normalised
+table/train/table_0199_normalised
+table/train/table_0041_normalised
+table/train/table_0083_normalised
+table/train/table_0131_normalised
+table/train/table_0260_normalised
+table/train/table_0226_normalised
+table/train/table_0331_normalised
+table/train/table_0035_normalised
+table/train/table_0056_normalised
+table/train/table_0334_normalised
+table/train/table_0037_normalised
+table/train/table_0333_normalised
+table/train/table_0356_normalised
+table/train/table_0387_normalised
+table/train/table_0110_normalised
+table/train/table_0015_normalised
+table/train/table_0078_normalised
+table/train/table_0179_normalised
+table/train/table_0139_normalised
+table/train/table_0224_normalised
+table/train/table_0240_normalised
+table/train/table_0307_normalised
+table/train/table_0341_normalised
+table/train/table_0028_normalised
+table/train/table_0295_normalised
+table/train/table_0376_normalised
+table/train/table_0068_normalised
+table/train/table_0329_normalised
+table/train/table_0289_normalised
+table/train/table_0111_normalised
+table/train/table_0385_normalised
+table/train/table_0342_normalised
+table/train/table_0162_normalised
+table/train/table_0276_normalised
+table/train/table_0177_normalised
+table/train/table_0026_normalised
+table/train/table_0322_normalised
+table/train/table_0257_normalised
+table/train/table_0060_normalised
+table/train/table_0184_normalised
+table/train/table_0114_normalised
+table/train/table_0018_normalised
+table/train/table_0236_normalised
+table/train/table_0091_normalised
+table/train/table_0282_normalised
+table/train/table_0221_normalised
+table/train/table_0338_normalised
+table/train/table_0122_normalised
+table/train/table_0092_normalised
+table/train/table_0389_normalised
+table/train/table_0364_normalised
+table/train/table_0381_normalised
+table/train/table_0116_normalised
+table/train/table_0169_normalised
+table/train/table_0273_normalised
+table/train/table_0168_normalised
+table/train/table_0378_normalised
+table/train/table_0085_normalised
+table/train/table_0185_normalised
+table/train/table_0272_normalised
+table/train/table_0073_normalised
+table/train/table_0243_normalised
+table/train/table_0228_normalised
+table/train/table_0373_normalised
+table/train/table_0261_normalised
+table/train/table_0370_normalised
+table/train/table_0170_normalised
+table/train/table_0024_normalised
+table/train/table_0368_normalised
+table/train/table_0391_normalised
+table/train/table_0204_normalised
+table/train/table_0182_normalised
+table/train/table_0011_normalised
+table/train/table_0192_normalised
+table/train/table_0112_normalised
+table/train/table_0313_normalised
+table/train/table_0163_normalised
+table/train/table_0344_normalised
+table/train/table_0297_normalised
+table/train/table_0369_normalised
+table/train/table_0157_normalised
+table/train/table_0323_normalised
+table/train/table_0262_normalised
+table/train/table_0256_normalised
+table/train/table_0315_normalised
+table/train/table_0360_normalised
+table/train/table_0254_normalised
+table/train/table_0102_normalised
+table/train/table_0316_normalised
+table/train/table_0203_normalised
+table/train/table_0219_normalised
+table/train/table_0175_normalised
+table/train/table_0324_normalised
+table/train/table_0202_normalised
+table/train/table_0055_normalised
+table/train/table_0218_normalised
+table/train/table_0259_normalised
+table/train/table_0075_normalised
+table/train/table_0339_normalised
+table/train/table_0002_normalised
+table/train/table_0127_normalised
+table/train/table_0107_normalised
+table/train/table_0140_normalised
+table/train/table_0012_normalised
+table/train/table_0216_normalised
+table/train/table_0119_normalised
+table/train/table_0263_normalised
+table/train/table_0097_normalised
+table/train/table_0222_normalised
+table/train/table_0070_normalised
+table/train/table_0019_normalised
+table/train/table_0089_normalised
+table/train/table_0359_normalised
+table/train/table_0049_normalised
+table/train/table_0255_normalised
+table/train/table_0128_normalised
+table/train/table_0217_normalised
+table/train/table_0388_normalised
+table/train/table_0296_normalised
+table/train/table_0250_normalised
+table/train/table_0009_normalised
+table/train/table_0207_normalised
+table/train/table_0214_normalised
+table/train/table_0136_normalised
+table/train/table_0308_normalised
+table/train/table_0382_normalised
+table/train/table_0268_normalised
+table/train/table_0074_normalised
+table/train/table_0016_normalised
+table/train/table_0129_normalised
+table/train/table_0158_normalised
+table/train/table_0267_normalised
+table/train/table_0300_normalised
+table/train/table_0156_normalised
+table/train/table_0281_normalised
+table/train/table_0301_normalised
+table/train/table_0183_normalised
+table/train/table_0366_normalised
+table/train/table_0134_normalised
+table/train/table_0374_normalised
+table/train/table_0290_normalised
+table/train/table_0274_normalised
+table/train/table_0246_normalised
+table/train/table_0059_normalised
+table/train/table_0380_normalised
+table/train/table_0251_normalised
+table/train/table_0332_normalised
+table/train/table_0293_normalised
+table/train/table_0130_normalised
+table/train/table_0042_normalised
+table/train/table_0285_normalised
+table/train/table_0354_normalised
+table/train/table_0053_normalised
+table/train/table_0233_normalised
+table/train/table_0124_normalised
+table/train/table_0343_normalised
+table/train/table_0069_normalised
+table/train/table_0080_normalised
+table/train/table_0271_normalised
+table/train/table_0086_normalised
+table/train/table_0349_normalised
+table/train/table_0277_normalised
+table/train/table_0003_normalised
+table/train/table_0126_normalised
+table/train/table_0094_normalised
+table/train/table_0238_normalised
+table/train/table_0326_normalised
+table/train/table_0072_normalised
+table/train/table_0230_normalised
+table/train/table_0005_normalised
+table/train/table_0357_normalised
+table/train/table_0121_normalised
+table/train/table_0064_normalised
+table/train/table_0143_normalised
+table/train/table_0033_normalised
+table/train/table_0031_normalised
+table/train/table_0231_normalised
+table/train/table_0208_normalised
+table/train/table_0265_normalised
+table/train/table_0105_normalised
+table/train/table_0258_normalised
+table/train/table_0142_normalised
+table/train/table_0051_normalised
+table/train/table_0133_normalised
+table/train/table_0137_normalised
+table/train/table_0103_normalised
+table/train/table_0386_normalised
+table/train/table_0269_normalised
+table/train/table_0171_normalised
+table/train/table_0384_normalised
+table/train/table_0166_normalised
+table/train/table_0302_normalised
+table/train/table_0298_normalised
+table/train/table_0022_normalised
+table/train/table_0191_normalised
+table/train/table_0205_normalised
+table/train/table_0047_normalised
+table/train/table_0029_normalised
+table/train/table_0291_normalised
+table/train/table_0299_normalised
+table/train/table_0305_normalised
+table/train/table_0145_normalised
+table/train/table_0188_normalised
+table/train/table_0213_normalised
+table/train/table_0189_normalised
+table/train/table_0101_normalised
+table/train/table_0304_normalised
+table/train/table_0165_normalised
+table/train/table_0098_normalised
+table/train/table_0061_normalised
+table/train/table_0227_normalised
+table/train/table_0330_normalised
+table/train/table_0032_normalised
+table/train/table_0063_normalised
+table/train/table_0148_normalised
+table/train/table_0358_normalised
+table/train/table_0211_normalised
+table/train/table_0174_normalised
+table/train/table_0007_normalised
+table/train/table_0303_normalised
+table/train/table_0200_normalised
+table/train/table_0346_normalised
+table/train/table_0351_normalised
+table/train/table_0377_normalised
+table/train/table_0320_normalised
+table/train/table_0340_normalised
+table/train/table_0161_normalised
+table/train/table_0178_normalised
+table/train/table_0275_normalised
+table/train/table_0337_normalised
+table/train/table_0008_normalised
+table/train/table_0045_normalised
+table/train/table_0325_normalised
+table/train/table_0196_normalised
+table/train/table_0160_normalised
+table/train/table_0173_normalised
+table/train/table_0135_normalised
+table/train/table_0017_normalised
+table/train/table_0292_normalised
+table/train/table_0039_normalised
+table/train/table_0318_normalised
+table/train/table_0077_normalised
+table/train/table_0248_normalised
+table/train/table_0225_normalised
+table/train/table_0319_normalised
+table/train/table_0372_normalised
+table/train/table_0154_normalised
+table/train/table_0106_normalised
+table/train/table_0048_normalised
+table/train/table_0288_normalised
+table/train/table_0186_normalised
+table/train/table_0057_normalised
+table/train/table_0355_normalised
+table/train/table_0363_normalised
+table/train/table_0164_normalised
+table/train/table_0104_normalised
+table/train/table_0353_normalised
+table/train/table_0347_normalised
+table/train/table_0198_normalised
+table/train/table_0193_normalised
+table/train/table_0348_normalised
+table/train/table_0210_normalised
+table/train/table_0023_normalised
+table/train/table_0006_normalised
+table/train/table_0043_normalised
+table/train/table_0081_normalised
+table/train/table_0038_normalised
+table/train/table_0090_normalised
+table/train/table_0242_normalised
+table/train/table_0172_normalised
+table/train/table_0146_normalised
+table/train/table_0084_normalised
+table/train/table_0167_normalised
+table/train/table_0040_normalised
+table/train/table_0309_normalised
+table/train/table_0253_normalised
+table/train/table_0149_normalised
+table/train/table_0392_normalised
+table/train/table_0350_normalised
+table/train/table_0311_normalised
+table/train/table_0279_normalised
+table/train/table_0001_normalised
+table/train/table_0138_normalised
+table/train/table_0120_normalised
+table/train/table_0314_normalised
+table/train/table_0020_normalised
+table/train/table_0067_normalised
+table/train/table_0088_normalised
+table/train/table_0241_normalised
+table/train/table_0141_normalised
+table/train/table_0266_normalised
+table/train/table_0147_normalised
+table/train/table_0197_normalised
+table/train/table_0117_normalised
+table/train/table_0371_normalised
+table/train/table_0321_normalised
+table/train/table_0287_normalised
+table/train/table_0132_normalised
+table/train/table_0181_normalised
+table/train/table_0096_normalised
+table/train/table_0239_normalised
+table/train/table_0071_normalised
+table/train/table_0335_normalised
+table/train/table_0058_normalised
+table/train/table_0278_normalised
+table/train/table_0223_normalised
+table/train/table_0361_normalised
+table/train/table_0270_normalised
+table/train/table_0065_normalised
+table/train/table_0367_normalised
+table/train/table_0062_normalised
+table/train/table_0144_normalised
+table/train/table_0365_normalised
+table/train/table_0212_normalised
+table/train/table_0252_normalised
+table/train/table_0280_normalised
+table/train/table_0054_normalised
+table/train/table_0201_normalised
+table/train/table_0115_normalised
+table/train/table_0284_normalised
+table/train/table_0087_normalised
+table/train/table_0328_normalised
+table/train/table_0195_normalised
+table/train/table_0206_normalised
+table/train/table_0312_normalised
+table/train/table_0235_normalised
+table/train/table_0375_normalised
+table/train/table_0093_normalised
+table/train/table_0187_normalised
+table/test/table_0487_normalised
+table/test/table_0476_normalised
+table/test/table_0471_normalised
+table/test/table_0434_normalised
+table/test/table_0459_normalised
+table/test/table_0449_normalised
+table/test/table_0477_normalised
+table/test/table_0443_normalised
+table/test/table_0492_normalised
+table/test/table_0437_normalised
+table/test/table_0422_normalised
+table/test/table_0468_normalised
+table/test/table_0483_normalised
+table/test/table_0441_normalised
+table/test/table_0438_normalised
+table/test/table_0419_normalised
+table/test/table_0467_normalised
+table/test/table_0474_normalised
+table/test/table_0399_normalised
+table/test/table_0420_normalised
+table/test/table_0489_normalised
+table/test/table_0396_normalised
+table/test/table_0430_normalised
+table/test/table_0446_normalised
+table/test/table_0415_normalised
+table/test/table_0488_normalised
+table/test/table_0393_normalised
+table/test/table_0421_normalised
+table/test/table_0416_normalised
+table/test/table_0448_normalised
+table/test/table_0482_normalised
+table/test/table_0394_normalised
+table/test/table_0465_normalised
+table/test/table_0417_normalised
+table/test/table_0432_normalised
+table/test/table_0484_normalised
+table/test/table_0479_normalised
+table/test/table_0455_normalised
+table/test/table_0480_normalised
+table/test/table_0463_normalised
+table/test/table_0408_normalised
+table/test/table_0426_normalised
+table/test/table_0444_normalised
+table/test/table_0466_normalised
+table/test/table_0411_normalised
+table/test/table_0460_normalised
+table/test/table_0407_normalised
+table/test/table_0404_normalised
+table/test/table_0486_normalised
+table/test/table_0427_normalised
+table/test/table_0406_normalised
+table/test/table_0464_normalised
+table/test/table_0447_normalised
+table/test/table_0429_normalised
+table/test/table_0414_normalised
+table/test/table_0451_normalised
+table/test/table_0461_normalised
+table/test/table_0481_normalised
+table/test/table_0398_normalised
+table/test/table_0439_normalised
+table/test/table_0412_normalised
+table/test/table_0431_normalised
+table/test/table_0395_normalised
+table/test/table_0445_normalised
+table/test/table_0440_normalised
+table/test/table_0454_normalised
+table/test/table_0433_normalised
+table/test/table_0453_normalised
+table/test/table_0462_normalised
+table/test/table_0470_normalised
+table/test/table_0402_normalised
+table/test/table_0452_normalised
+table/test/table_0473_normalised
+table/test/table_0428_normalised
+table/test/table_0490_normalised
+table/test/table_0436_normalised
+table/test/table_0405_normalised
+table/test/table_0423_normalised
+table/test/table_0400_normalised
+table/test/table_0424_normalised
+table/test/table_0435_normalised
+table/test/table_0485_normalised
+table/test/table_0472_normalised
+table/test/table_0491_normalised
+table/test/table_0413_normalised
+table/test/table_0457_normalised
+table/test/table_0475_normalised
+table/test/table_0458_normalised
+table/test/table_0397_normalised
+table/test/table_0401_normalised
+table/test/table_0478_normalised
+table/test/table_0410_normalised
+table/test/table_0469_normalised
+table/test/table_0442_normalised
+table/test/table_0450_normalised
+table/test/table_0409_normalised
+table/test/table_0403_normalised
+table/test/table_0425_normalised
+table/test/table_0418_normalised
+table/test/table_0456_normalised
+door/train/door_0091_normalised
+door/train/door_0074_normalised
+door/train/door_0056_normalised
+door/train/door_0102_normalised
+door/train/door_0012_normalised
+door/train/door_0079_normalised
+door/train/door_0023_normalised
+door/train/door_0032_normalised
+door/train/door_0059_normalised
+door/train/door_0064_normalised
+door/train/door_0003_normalised
+door/train/door_0109_normalised
+door/train/door_0036_normalised
+door/train/door_0068_normalised
+door/train/door_0019_normalised
+door/train/door_0037_normalised
+door/train/door_0047_normalised
+door/train/door_0100_normalised
+door/train/door_0062_normalised
+door/train/door_0098_normalised
+door/train/door_0106_normalised
+door/train/door_0055_normalised
+door/train/door_0014_normalised
+door/train/door_0051_normalised
+door/train/door_0021_normalised
+door/train/door_0029_normalised
+door/train/door_0025_normalised
+door/train/door_0066_normalised
+door/train/door_0085_normalised
+door/train/door_0052_normalised
+door/train/door_0015_normalised
+door/train/door_0050_normalised
+door/train/door_0080_normalised
+door/train/door_0099_normalised
+door/train/door_0078_normalised
+door/train/door_0013_normalised
+door/train/door_0087_normalised
+door/train/door_0028_normalised
+door/train/door_0081_normalised
+door/train/door_0006_normalised
+door/train/door_0016_normalised
+door/train/door_0076_normalised
+door/train/door_0017_normalised
+door/train/door_0065_normalised
+door/train/door_0020_normalised
+door/train/door_0070_normalised
+door/train/door_0095_normalised
+door/train/door_0018_normalised
+door/train/door_0105_normalised
+door/train/door_0008_normalised
+door/train/door_0043_normalised
+door/train/door_0088_normalised
+door/train/door_0007_normalised
+door/train/door_0046_normalised
+door/train/door_0061_normalised
+door/train/door_0101_normalised
+door/train/door_0041_normalised
+door/train/door_0075_normalised
+door/train/door_0083_normalised
+door/train/door_0104_normalised
+door/train/door_0094_normalised
+door/train/door_0108_normalised
+door/train/door_0084_normalised
+door/train/door_0001_normalised
+door/train/door_0071_normalised
+door/train/door_0053_normalised
+door/train/door_0033_normalised
+door/train/door_0034_normalised
+door/train/door_0027_normalised
+door/train/door_0093_normalised
+door/train/door_0063_normalised
+door/train/door_0058_normalised
+door/train/door_0038_normalised
+door/train/door_0039_normalised
+door/train/door_0103_normalised
+door/train/door_0089_normalised
+door/train/door_0026_normalised
+door/train/door_0067_normalised
+door/train/door_0096_normalised
+door/train/door_0004_normalised
+door/train/door_0022_normalised
+door/train/door_0044_normalised
+door/train/door_0040_normalised
+door/train/door_0024_normalised
+door/train/door_0090_normalised
+door/train/door_0011_normalised
+door/train/door_0077_normalised
+door/train/door_0086_normalised
+door/train/door_0045_normalised
+door/train/door_0097_normalised
+door/train/door_0107_normalised
+door/train/door_0082_normalised
+door/train/door_0048_normalised
+door/train/door_0005_normalised
+door/train/door_0009_normalised
+door/train/door_0002_normalised
+door/train/door_0057_normalised
+door/train/door_0054_normalised
+door/train/door_0049_normalised
+door/train/door_0031_normalised
+door/train/door_0092_normalised
+door/train/door_0042_normalised
+door/train/door_0035_normalised
+door/train/door_0069_normalised
+door/train/door_0060_normalised
+door/train/door_0010_normalised
+door/train/door_0072_normalised
+door/train/door_0073_normalised
+door/train/door_0030_normalised
+door/test/door_0118_normalised
+door/test/door_0127_normalised
+door/test/door_0126_normalised
+door/test/door_0120_normalised
+door/test/door_0113_normalised
+door/test/door_0123_normalised
+door/test/door_0121_normalised
+door/test/door_0128_normalised
+door/test/door_0115_normalised
+door/test/door_0114_normalised
+door/test/door_0122_normalised
+door/test/door_0111_normalised
+door/test/door_0112_normalised
+door/test/door_0117_normalised
+door/test/door_0129_normalised
+door/test/door_0110_normalised
+door/test/door_0124_normalised
+door/test/door_0116_normalised
+door/test/door_0119_normalised
+door/test/door_0125_normalised
+sink/train/sink_0123_normalised
+sink/train/sink_0069_normalised
+sink/train/sink_0067_normalised
+sink/train/sink_0066_normalised
+sink/train/sink_0120_normalised
+sink/train/sink_0092_normalised
+sink/train/sink_0048_normalised
+sink/train/sink_0023_normalised
+sink/train/sink_0094_normalised
+sink/train/sink_0002_normalised
+sink/train/sink_0103_normalised
+sink/train/sink_0118_normalised
+sink/train/sink_0059_normalised
+sink/train/sink_0122_normalised
+sink/train/sink_0029_normalised
+sink/train/sink_0003_normalised
+sink/train/sink_0009_normalised
+sink/train/sink_0040_normalised
+sink/train/sink_0056_normalised
+sink/train/sink_0017_normalised
+sink/train/sink_0076_normalised
+sink/train/sink_0098_normalised
+sink/train/sink_0038_normalised
+sink/train/sink_0093_normalised
+sink/train/sink_0063_normalised
+sink/train/sink_0062_normalised
+sink/train/sink_0045_normalised
+sink/train/sink_0099_normalised
+sink/train/sink_0078_normalised
+sink/train/sink_0102_normalised
+sink/train/sink_0020_normalised
+sink/train/sink_0112_normalised
+sink/train/sink_0026_normalised
+sink/train/sink_0064_normalised
+sink/train/sink_0001_normalised
+sink/train/sink_0024_normalised
+sink/train/sink_0071_normalised
+sink/train/sink_0007_normalised
+sink/train/sink_0049_normalised
+sink/train/sink_0060_normalised
+sink/train/sink_0041_normalised
+sink/train/sink_0008_normalised
+sink/train/sink_0019_normalised
+sink/train/sink_0035_normalised
+sink/train/sink_0033_normalised
+sink/train/sink_0014_normalised
+sink/train/sink_0039_normalised
+sink/train/sink_0013_normalised
+sink/train/sink_0113_normalised
+sink/train/sink_0051_normalised
+sink/train/sink_0104_normalised
+sink/train/sink_0089_normalised
+sink/train/sink_0101_normalised
+sink/train/sink_0090_normalised
+sink/train/sink_0125_normalised
+sink/train/sink_0107_normalised
+sink/train/sink_0083_normalised
+sink/train/sink_0119_normalised
+sink/train/sink_0096_normalised
+sink/train/sink_0055_normalised
+sink/train/sink_0121_normalised
+sink/train/sink_0097_normalised
+sink/train/sink_0085_normalised
+sink/train/sink_0005_normalised
+sink/train/sink_0022_normalised
+sink/train/sink_0079_normalised
+sink/train/sink_0070_normalised
+sink/train/sink_0047_normalised
+sink/train/sink_0031_normalised
+sink/train/sink_0010_normalised
+sink/train/sink_0015_normalised
+sink/train/sink_0106_normalised
+sink/train/sink_0117_normalised
+sink/train/sink_0028_normalised
+sink/train/sink_0065_normalised
+sink/train/sink_0128_normalised
+sink/train/sink_0077_normalised
+sink/train/sink_0036_normalised
+sink/train/sink_0086_normalised
+sink/train/sink_0072_normalised
+sink/train/sink_0124_normalised
+sink/train/sink_0084_normalised
+sink/train/sink_0030_normalised
+sink/train/sink_0091_normalised
+sink/train/sink_0114_normalised
+sink/train/sink_0074_normalised
+sink/train/sink_0046_normalised
+sink/train/sink_0087_normalised
+sink/train/sink_0012_normalised
+sink/train/sink_0068_normalised
+sink/train/sink_0016_normalised
+sink/train/sink_0050_normalised
+sink/train/sink_0006_normalised
+sink/train/sink_0054_normalised
+sink/train/sink_0105_normalised
+sink/train/sink_0111_normalised
+sink/train/sink_0053_normalised
+sink/train/sink_0115_normalised
+sink/train/sink_0075_normalised
+sink/train/sink_0032_normalised
+sink/train/sink_0021_normalised
+sink/train/sink_0058_normalised
+sink/train/sink_0127_normalised
+sink/train/sink_0037_normalised
+sink/train/sink_0057_normalised
+sink/train/sink_0043_normalised
+sink/train/sink_0126_normalised
+sink/train/sink_0088_normalised
+sink/train/sink_0110_normalised
+sink/train/sink_0095_normalised
+sink/train/sink_0080_normalised
+sink/train/sink_0061_normalised
+sink/train/sink_0100_normalised
+sink/train/sink_0109_normalised
+sink/train/sink_0108_normalised
+sink/train/sink_0025_normalised
+sink/train/sink_0004_normalised
+sink/train/sink_0011_normalised
+sink/train/sink_0044_normalised
+sink/train/sink_0042_normalised
+sink/train/sink_0027_normalised
+sink/train/sink_0116_normalised
+sink/train/sink_0052_normalised
+sink/train/sink_0034_normalised
+sink/train/sink_0073_normalised
+sink/train/sink_0081_normalised
+sink/train/sink_0018_normalised
+sink/train/sink_0082_normalised
+sink/test/sink_0147_normalised
+sink/test/sink_0145_normalised
+sink/test/sink_0138_normalised
+sink/test/sink_0141_normalised
+sink/test/sink_0143_normalised
+sink/test/sink_0142_normalised
+sink/test/sink_0135_normalised
+sink/test/sink_0148_normalised
+sink/test/sink_0132_normalised
+sink/test/sink_0137_normalised
+sink/test/sink_0139_normalised
+sink/test/sink_0131_normalised
+sink/test/sink_0129_normalised
+sink/test/sink_0144_normalised
+sink/test/sink_0134_normalised
+sink/test/sink_0136_normalised
+sink/test/sink_0146_normalised
+sink/test/sink_0130_normalised
+sink/test/sink_0133_normalised
+sink/test/sink_0140_normalised
+car/train/car_0076_normalised
+car/train/car_0069_normalised
+car/train/car_0140_normalised
+car/train/car_0132_normalised
+car/train/car_0033_normalised
+car/train/car_0094_normalised
+car/train/car_0099_normalised
+car/train/car_0191_normalised
+car/train/car_0117_normalised
+car/train/car_0034_normalised
+car/train/car_0122_normalised
+car/train/car_0045_normalised
+car/train/car_0022_normalised
+car/train/car_0058_normalised
+car/train/car_0181_normalised
+car/train/car_0164_normalised
+car/train/car_0037_normalised
+car/train/car_0060_normalised
+car/train/car_0116_normalised
+car/train/car_0068_normalised
+car/train/car_0012_normalised
+car/train/car_0088_normalised
+car/train/car_0137_normalised
+car/train/car_0196_normalised
+car/train/car_0013_normalised
+car/train/car_0010_normalised
+car/train/car_0129_normalised
+car/train/car_0080_normalised
+car/train/car_0001_normalised
+car/train/car_0153_normalised
+car/train/car_0113_normalised
+car/train/car_0028_normalised
+car/train/car_0084_normalised
+car/train/car_0158_normalised
+car/train/car_0110_normalised
+car/train/car_0051_normalised
+car/train/car_0149_normalised
+car/train/car_0077_normalised
+car/train/car_0097_normalised
+car/train/car_0050_normalised
+car/train/car_0102_normalised
+car/train/car_0124_normalised
+car/train/car_0105_normalised
+car/train/car_0166_normalised
+car/train/car_0019_normalised
+car/train/car_0123_normalised
+car/train/car_0165_normalised
+car/train/car_0091_normalised
+car/train/car_0154_normalised
+car/train/car_0145_normalised
+car/train/car_0152_normalised
+car/train/car_0187_normalised
+car/train/car_0133_normalised
+car/train/car_0176_normalised
+car/train/car_0173_normalised
+car/train/car_0115_normalised
+car/train/car_0017_normalised
+car/train/car_0189_normalised
+car/train/car_0042_normalised
+car/train/car_0139_normalised
+car/train/car_0066_normalised
+car/train/car_0182_normalised
+car/train/car_0190_normalised
+car/train/car_0086_normalised
+car/train/car_0043_normalised
+car/train/car_0141_normalised
+car/train/car_0138_normalised
+car/train/car_0036_normalised
+car/train/car_0135_normalised
+car/train/car_0089_normalised
+car/train/car_0160_normalised
+car/train/car_0159_normalised
+car/train/car_0194_normalised
+car/train/car_0188_normalised
+car/train/car_0039_normalised
+car/train/car_0067_normalised
+car/train/car_0179_normalised
+car/train/car_0057_normalised
+car/train/car_0040_normalised
+car/train/car_0192_normalised
+car/train/car_0061_normalised
+car/train/car_0035_normalised
+car/train/car_0169_normalised
+car/train/car_0063_normalised
+car/train/car_0163_normalised
+car/train/car_0156_normalised
+car/train/car_0171_normalised
+car/train/car_0082_normalised
+car/train/car_0093_normalised
+car/train/car_0120_normalised
+car/train/car_0087_normalised
+car/train/car_0044_normalised
+car/train/car_0026_normalised
+car/train/car_0119_normalised
+car/train/car_0178_normalised
+car/train/car_0128_normalised
+car/train/car_0193_normalised
+car/train/car_0056_normalised
+car/train/car_0162_normalised
+car/train/car_0146_normalised
+car/train/car_0168_normalised
+car/train/car_0020_normalised
+car/train/car_0070_normalised
+car/train/car_0073_normalised
+car/train/car_0024_normalised
+car/train/car_0114_normalised
+car/train/car_0142_normalised
+car/train/car_0007_normalised
+car/train/car_0046_normalised
+car/train/car_0111_normalised
+car/train/car_0130_normalised
+car/train/car_0100_normalised
+car/train/car_0150_normalised
+car/train/car_0072_normalised
+car/train/car_0112_normalised
+car/train/car_0003_normalised
+car/train/car_0006_normalised
+car/train/car_0195_normalised
+car/train/car_0055_normalised
+car/train/car_0186_normalised
+car/train/car_0108_normalised
+car/train/car_0155_normalised
+car/train/car_0126_normalised
+car/train/car_0021_normalised
+car/train/car_0032_normalised
+car/train/car_0157_normalised
+car/train/car_0098_normalised
+car/train/car_0104_normalised
+car/train/car_0075_normalised
+car/train/car_0004_normalised
+car/train/car_0136_normalised
+car/train/car_0177_normalised
+car/train/car_0151_normalised
+car/train/car_0078_normalised
+car/train/car_0049_normalised
+car/train/car_0125_normalised
+car/train/car_0197_normalised
+car/train/car_0071_normalised
+car/train/car_0059_normalised
+car/train/car_0062_normalised
+car/train/car_0118_normalised
+car/train/car_0095_normalised
+car/train/car_0183_normalised
+car/train/car_0134_normalised
+car/train/car_0018_normalised
+car/train/car_0016_normalised
+car/train/car_0053_normalised
+car/train/car_0096_normalised
+car/train/car_0131_normalised
+car/train/car_0170_normalised
+car/train/car_0106_normalised
+car/train/car_0064_normalised
+car/train/car_0011_normalised
+car/train/car_0009_normalised
+car/train/car_0002_normalised
+car/train/car_0143_normalised
+car/train/car_0175_normalised
+car/train/car_0174_normalised
+car/train/car_0005_normalised
+car/train/car_0109_normalised
+car/train/car_0008_normalised
+car/train/car_0144_normalised
+car/train/car_0083_normalised
+car/train/car_0014_normalised
+car/train/car_0180_normalised
+car/train/car_0081_normalised
+car/train/car_0127_normalised
+car/train/car_0092_normalised
+car/train/car_0147_normalised
+car/train/car_0172_normalised
+car/train/car_0029_normalised
+car/train/car_0185_normalised
+car/train/car_0052_normalised
+car/train/car_0025_normalised
+car/train/car_0079_normalised
+car/train/car_0107_normalised
+car/train/car_0090_normalised
+car/train/car_0038_normalised
+car/train/car_0031_normalised
+car/train/car_0065_normalised
+car/train/car_0023_normalised
+car/train/car_0167_normalised
+car/train/car_0030_normalised
+car/train/car_0103_normalised
+car/train/car_0041_normalised
+car/train/car_0101_normalised
+car/train/car_0085_normalised
+car/train/car_0148_normalised
+car/train/car_0047_normalised
+car/train/car_0054_normalised
+car/train/car_0184_normalised
+car/train/car_0015_normalised
+car/train/car_0027_normalised
+car/train/car_0161_normalised
+car/train/car_0048_normalised
+car/train/car_0121_normalised
+car/train/car_0074_normalised
+car/test/car_0269_normalised
+car/test/car_0207_normalised
+car/test/car_0243_normalised
+car/test/car_0232_normalised
+car/test/car_0231_normalised
+car/test/car_0266_normalised
+car/test/car_0270_normalised
+car/test/car_0295_normalised
+car/test/car_0200_normalised
+car/test/car_0285_normalised
+car/test/car_0248_normalised
+car/test/car_0249_normalised
+car/test/car_0225_normalised
+car/test/car_0224_normalised
+car/test/car_0283_normalised
+car/test/car_0241_normalised
+car/test/car_0260_normalised
+car/test/car_0234_normalised
+car/test/car_0219_normalised
+car/test/car_0272_normalised
+car/test/car_0263_normalised
+car/test/car_0282_normalised
+car/test/car_0216_normalised
+car/test/car_0256_normalised
+car/test/car_0281_normalised
+car/test/car_0247_normalised
+car/test/car_0250_normalised
+car/test/car_0228_normalised
+car/test/car_0218_normalised
+car/test/car_0252_normalised
+car/test/car_0259_normalised
+car/test/car_0267_normalised
+car/test/car_0235_normalised
+car/test/car_0291_normalised
+car/test/car_0239_normalised
+car/test/car_0233_normalised
+car/test/car_0222_normalised
+car/test/car_0206_normalised
+car/test/car_0230_normalised
+car/test/car_0253_normalised
+car/test/car_0268_normalised
+car/test/car_0278_normalised
+car/test/car_0258_normalised
+car/test/car_0226_normalised
+car/test/car_0290_normalised
+car/test/car_0276_normalised
+car/test/car_0203_normalised
+car/test/car_0229_normalised
+car/test/car_0257_normalised
+car/test/car_0213_normalised
+car/test/car_0205_normalised
+car/test/car_0210_normalised
+car/test/car_0292_normalised
+car/test/car_0215_normalised
+car/test/car_0261_normalised
+car/test/car_0221_normalised
+car/test/car_0262_normalised
+car/test/car_0297_normalised
+car/test/car_0240_normalised
+car/test/car_0204_normalised
+car/test/car_0286_normalised
+car/test/car_0296_normalised
+car/test/car_0264_normalised
+car/test/car_0244_normalised
+car/test/car_0198_normalised
+car/test/car_0199_normalised
+car/test/car_0254_normalised
+car/test/car_0273_normalised
+car/test/car_0271_normalised
+car/test/car_0211_normalised
+car/test/car_0201_normalised
+car/test/car_0242_normalised
+car/test/car_0246_normalised
+car/test/car_0227_normalised
+car/test/car_0208_normalised
+car/test/car_0220_normalised
+car/test/car_0202_normalised
+car/test/car_0289_normalised
+car/test/car_0214_normalised
+car/test/car_0209_normalised
+car/test/car_0288_normalised
+car/test/car_0251_normalised
+car/test/car_0217_normalised
+car/test/car_0294_normalised
+car/test/car_0255_normalised
+car/test/car_0245_normalised
+car/test/car_0223_normalised
+car/test/car_0238_normalised
+car/test/car_0279_normalised
+car/test/car_0237_normalised
+car/test/car_0236_normalised
+car/test/car_0265_normalised
+car/test/car_0284_normalised
+car/test/car_0274_normalised
+car/test/car_0277_normalised
+car/test/car_0280_normalised
+car/test/car_0287_normalised
+car/test/car_0275_normalised
+car/test/car_0212_normalised
+car/test/car_0293_normalised
+cup/train/cup_0075_normalised
+cup/train/cup_0033_normalised
+cup/train/cup_0044_normalised
+cup/train/cup_0060_normalised
+cup/train/cup_0009_normalised
+cup/train/cup_0008_normalised
+cup/train/cup_0079_normalised
+cup/train/cup_0051_normalised
+cup/train/cup_0029_normalised
+cup/train/cup_0045_normalised
+cup/train/cup_0052_normalised
+cup/train/cup_0066_normalised
+cup/train/cup_0006_normalised
+cup/train/cup_0028_normalised
+cup/train/cup_0022_normalised
+cup/train/cup_0021_normalised
+cup/train/cup_0034_normalised
+cup/train/cup_0037_normalised
+cup/train/cup_0077_normalised
+cup/train/cup_0043_normalised
+cup/train/cup_0030_normalised
+cup/train/cup_0035_normalised
+cup/train/cup_0036_normalised
+cup/train/cup_0076_normalised
+cup/train/cup_0001_normalised
+cup/train/cup_0003_normalised
+cup/train/cup_0049_normalised
+cup/train/cup_0073_normalised
+cup/train/cup_0017_normalised
+cup/train/cup_0070_normalised
+cup/train/cup_0072_normalised
+cup/train/cup_0032_normalised
+cup/train/cup_0007_normalised
+cup/train/cup_0038_normalised
+cup/train/cup_0015_normalised
+cup/train/cup_0064_normalised
+cup/train/cup_0039_normalised
+cup/train/cup_0071_normalised
+cup/train/cup_0025_normalised
+cup/train/cup_0014_normalised
+cup/train/cup_0012_normalised
+cup/train/cup_0042_normalised
+cup/train/cup_0040_normalised
+cup/train/cup_0062_normalised
+cup/train/cup_0004_normalised
+cup/train/cup_0048_normalised
+cup/train/cup_0050_normalised
+cup/train/cup_0074_normalised
+cup/train/cup_0019_normalised
+cup/train/cup_0023_normalised
+cup/train/cup_0061_normalised
+cup/train/cup_0068_normalised
+cup/train/cup_0069_normalised
+cup/train/cup_0056_normalised
+cup/train/cup_0002_normalised
+cup/train/cup_0005_normalised
+cup/train/cup_0031_normalised
+cup/train/cup_0020_normalised
+cup/train/cup_0013_normalised
+cup/train/cup_0041_normalised
+cup/train/cup_0018_normalised
+cup/train/cup_0046_normalised
+cup/train/cup_0063_normalised
+cup/train/cup_0016_normalised
+cup/train/cup_0010_normalised
+cup/train/cup_0011_normalised
+cup/train/cup_0055_normalised
+cup/train/cup_0065_normalised
+cup/train/cup_0024_normalised
+cup/train/cup_0057_normalised
+cup/train/cup_0027_normalised
+cup/train/cup_0058_normalised
+cup/train/cup_0026_normalised
+cup/train/cup_0078_normalised
+cup/train/cup_0054_normalised
+cup/train/cup_0053_normalised
+cup/train/cup_0067_normalised
+cup/train/cup_0059_normalised
+cup/train/cup_0047_normalised
+cup/test/cup_0083_normalised
+cup/test/cup_0097_normalised
+cup/test/cup_0098_normalised
+cup/test/cup_0091_normalised
+cup/test/cup_0080_normalised
+cup/test/cup_0090_normalised
+cup/test/cup_0095_normalised
+cup/test/cup_0088_normalised
+cup/test/cup_0094_normalised
+cup/test/cup_0084_normalised
+cup/test/cup_0087_normalised
+cup/test/cup_0092_normalised
+cup/test/cup_0099_normalised
+cup/test/cup_0082_normalised
+cup/test/cup_0086_normalised
+cup/test/cup_0089_normalised
+cup/test/cup_0096_normalised
+cup/test/cup_0085_normalised
+cup/test/cup_0093_normalised
+cup/test/cup_0081_normalised
+airplane/train/airplane_0486_normalised
+airplane/train/airplane_0374_normalised
+airplane/train/airplane_0316_normalised
+airplane/train/airplane_0537_normalised
+airplane/train/airplane_0284_normalised
+airplane/train/airplane_0609_normalised
+airplane/train/airplane_0086_normalised
+airplane/train/airplane_0554_normalised
+airplane/train/airplane_0307_normalised
+airplane/train/airplane_0015_normalised
+airplane/train/airplane_0567_normalised
+airplane/train/airplane_0082_normalised
+airplane/train/airplane_0150_normalised
+airplane/train/airplane_0415_normalised
+airplane/train/airplane_0289_normalised
+airplane/train/airplane_0594_normalised
+airplane/train/airplane_0579_normalised
+airplane/train/airplane_0279_normalised
+airplane/train/airplane_0060_normalised
+airplane/train/airplane_0499_normalised
+airplane/train/airplane_0165_normalised
+airplane/train/airplane_0555_normalised
+airplane/train/airplane_0389_normalised
+airplane/train/airplane_0049_normalised
+airplane/train/airplane_0067_normalised
+airplane/train/airplane_0286_normalised
+airplane/train/airplane_0238_normalised
+airplane/train/airplane_0035_normalised
+airplane/train/airplane_0129_normalised
+airplane/train/airplane_0128_normalised
+airplane/train/airplane_0033_normalised
+airplane/train/airplane_0283_normalised
+airplane/train/airplane_0355_normalised
+airplane/train/airplane_0502_normalised
+airplane/train/airplane_0148_normalised
+airplane/train/airplane_0158_normalised
+airplane/train/airplane_0477_normalised
+airplane/train/airplane_0130_normalised
+airplane/train/airplane_0169_normalised
+airplane/train/airplane_0300_normalised
+airplane/train/airplane_0242_normalised
+airplane/train/airplane_0348_normalised
+airplane/train/airplane_0222_normalised
+airplane/train/airplane_0253_normalised
+airplane/train/airplane_0402_normalised
+airplane/train/airplane_0589_normalised
+airplane/train/airplane_0187_normalised
+airplane/train/airplane_0014_normalised
+airplane/train/airplane_0503_normalised
+airplane/train/airplane_0351_normalised
+airplane/train/airplane_0443_normalised
+airplane/train/airplane_0505_normalised
+airplane/train/airplane_0020_normalised
+airplane/train/airplane_0543_normalised
+airplane/train/airplane_0101_normalised
+airplane/train/airplane_0298_normalised
+airplane/train/airplane_0041_normalised
+airplane/train/airplane_0133_normalised
+airplane/train/airplane_0516_normalised
+airplane/train/airplane_0079_normalised
+airplane/train/airplane_0484_normalised
+airplane/train/airplane_0444_normalised
+airplane/train/airplane_0264_normalised
+airplane/train/airplane_0353_normalised
+airplane/train/airplane_0310_normalised
+airplane/train/airplane_0291_normalised
+airplane/train/airplane_0449_normalised
+airplane/train/airplane_0439_normalised
+airplane/train/airplane_0448_normalised
+airplane/train/airplane_0593_normalised
+airplane/train/airplane_0229_normalised
+airplane/train/airplane_0483_normalised
+airplane/train/airplane_0110_normalised
+airplane/train/airplane_0456_normalised
+airplane/train/airplane_0492_normalised
+airplane/train/airplane_0285_normalised
+airplane/train/airplane_0622_normalised
+airplane/train/airplane_0474_normalised
+airplane/train/airplane_0387_normalised
+airplane/train/airplane_0200_normalised
+airplane/train/airplane_0277_normalised
+airplane/train/airplane_0297_normalised
+airplane/train/airplane_0190_normalised
+airplane/train/airplane_0199_normalised
+airplane/train/airplane_0454_normalised
+airplane/train/airplane_0495_normalised
+airplane/train/airplane_0215_normalised
+airplane/train/airplane_0604_normalised
+airplane/train/airplane_0100_normalised
+airplane/train/airplane_0162_normalised
+airplane/train/airplane_0152_normalised
+airplane/train/airplane_0026_normalised
+airplane/train/airplane_0626_normalised
+airplane/train/airplane_0466_normalised
+airplane/train/airplane_0207_normalised
+airplane/train/airplane_0252_normalised
+airplane/train/airplane_0008_normalised
+airplane/train/airplane_0075_normalised
+airplane/train/airplane_0544_normalised
+airplane/train/airplane_0420_normalised
+airplane/train/airplane_0102_normalised
+airplane/train/airplane_0388_normalised
+airplane/train/airplane_0142_normalised
+airplane/train/airplane_0408_normalised
+airplane/train/airplane_0401_normalised
+airplane/train/airplane_0417_normalised
+airplane/train/airplane_0216_normalised
+airplane/train/airplane_0381_normalised
+airplane/train/airplane_0550_normalised
+airplane/train/airplane_0112_normalised
+airplane/train/airplane_0360_normalised
+airplane/train/airplane_0053_normalised
+airplane/train/airplane_0571_normalised
+airplane/train/airplane_0313_normalised
+airplane/train/airplane_0205_normalised
+airplane/train/airplane_0214_normalised
+airplane/train/airplane_0052_normalised
+airplane/train/airplane_0168_normalised
+airplane/train/airplane_0188_normalised
+airplane/train/airplane_0421_normalised
+airplane/train/airplane_0383_normalised
+airplane/train/airplane_0469_normalised
+airplane/train/airplane_0156_normalised
+airplane/train/airplane_0009_normalised
+airplane/train/airplane_0467_normalised
+airplane/train/airplane_0329_normalised
+airplane/train/airplane_0559_normalised
+airplane/train/airplane_0221_normalised
+airplane/train/airplane_0029_normalised
+airplane/train/airplane_0451_normalised
+airplane/train/airplane_0465_normalised
+airplane/train/airplane_0294_normalised
+airplane/train/airplane_0006_normalised
+airplane/train/airplane_0603_normalised
+airplane/train/airplane_0511_normalised
+airplane/train/airplane_0426_normalised
+airplane/train/airplane_0149_normalised
+airplane/train/airplane_0226_normalised
+airplane/train/airplane_0445_normalised
+airplane/train/airplane_0440_normalised
+airplane/train/airplane_0532_normalised
+airplane/train/airplane_0109_normalised
+airplane/train/airplane_0614_normalised
+airplane/train/airplane_0262_normalised
+airplane/train/airplane_0295_normalised
+airplane/train/airplane_0083_normalised
+airplane/train/airplane_0527_normalised
+airplane/train/airplane_0090_normalised
+airplane/train/airplane_0338_normalised
+airplane/train/airplane_0007_normalised
+airplane/train/airplane_0078_normalised
+airplane/train/airplane_0280_normalised
+airplane/train/airplane_0612_normalised
+airplane/train/airplane_0021_normalised
+airplane/train/airplane_0494_normalised
+airplane/train/airplane_0120_normalised
+airplane/train/airplane_0065_normalised
+airplane/train/airplane_0303_normalised
+airplane/train/airplane_0380_normalised
+airplane/train/airplane_0151_normalised
+airplane/train/airplane_0260_normalised
+airplane/train/airplane_0163_normalised
+airplane/train/airplane_0347_normalised
+airplane/train/airplane_0175_normalised
+airplane/train/airplane_0245_normalised
+airplane/train/airplane_0411_normalised
+airplane/train/airplane_0025_normalised
+airplane/train/airplane_0159_normalised
+airplane/train/airplane_0208_normalised
+airplane/train/airplane_0452_normalised
+airplane/train/airplane_0403_normalised
+airplane/train/airplane_0121_normalised
+airplane/train/airplane_0111_normalised
+airplane/train/airplane_0089_normalised
+airplane/train/airplane_0613_normalised
+airplane/train/airplane_0258_normalised
+airplane/train/airplane_0181_normalised
+airplane/train/airplane_0034_normalised
+airplane/train/airplane_0396_normalised
+airplane/train/airplane_0219_normalised
+airplane/train/airplane_0261_normalised
+airplane/train/airplane_0455_normalised
+airplane/train/airplane_0522_normalised
+airplane/train/airplane_0024_normalised
+airplane/train/airplane_0328_normalised
+airplane/train/airplane_0178_normalised
+airplane/train/airplane_0526_normalised
+airplane/train/airplane_0048_normalised
+airplane/train/airplane_0545_normalised
+airplane/train/airplane_0070_normalised
+airplane/train/airplane_0281_normalised
+airplane/train/airplane_0249_normalised
+airplane/train/airplane_0539_normalised
+airplane/train/airplane_0095_normalised
+airplane/train/airplane_0384_normalised
+airplane/train/airplane_0377_normalised
+airplane/train/airplane_0489_normalised
+airplane/train/airplane_0257_normalised
+airplane/train/airplane_0105_normalised
+airplane/train/airplane_0259_normalised
+airplane/train/airplane_0345_normalised
+airplane/train/airplane_0224_normalised
+airplane/train/airplane_0496_normalised
+airplane/train/airplane_0096_normalised
+airplane/train/airplane_0002_normalised
+airplane/train/airplane_0318_normalised
+airplane/train/airplane_0534_normalised
+airplane/train/airplane_0441_normalised
+airplane/train/airplane_0363_normalised
+airplane/train/airplane_0457_normalised
+airplane/train/airplane_0620_normalised
+airplane/train/airplane_0473_normalised
+airplane/train/airplane_0097_normalised
+airplane/train/airplane_0045_normalised
+airplane/train/airplane_0055_normalised
+airplane/train/airplane_0362_normalised
+airplane/train/airplane_0016_normalised
+airplane/train/airplane_0576_normalised
+airplane/train/airplane_0227_normalised
+airplane/train/airplane_0343_normalised
+airplane/train/airplane_0047_normalised
+airplane/train/airplane_0618_normalised
+airplane/train/airplane_0427_normalised
+airplane/train/airplane_0485_normalised
+airplane/train/airplane_0273_normalised
+airplane/train/airplane_0164_normalised
+airplane/train/airplane_0367_normalised
+airplane/train/airplane_0606_normalised
+airplane/train/airplane_0562_normalised
+airplane/train/airplane_0414_normalised
+airplane/train/airplane_0438_normalised
+airplane/train/airplane_0074_normalised
+airplane/train/airplane_0573_normalised
+airplane/train/airplane_0423_normalised
+airplane/train/airplane_0087_normalised
+airplane/train/airplane_0607_normalised
+airplane/train/airplane_0081_normalised
+airplane/train/airplane_0192_normalised
+airplane/train/airplane_0320_normalised
+airplane/train/airplane_0069_normalised
+airplane/train/airplane_0617_normalised
+airplane/train/airplane_0217_normalised
+airplane/train/airplane_0019_normalised
+airplane/train/airplane_0137_normalised
+airplane/train/airplane_0203_normalised
+airplane/train/airplane_0621_normalised
+airplane/train/airplane_0251_normalised
+airplane/train/airplane_0405_normalised
+airplane/train/airplane_0372_normalised
+airplane/train/airplane_0275_normalised
+airplane/train/airplane_0154_normalised
+airplane/train/airplane_0565_normalised
+airplane/train/airplane_0376_normalised
+airplane/train/airplane_0356_normalised
+airplane/train/airplane_0039_normalised
+airplane/train/airplane_0068_normalised
+airplane/train/airplane_0066_normalised
+airplane/train/airplane_0470_normalised
+airplane/train/airplane_0412_normalised
+airplane/train/airplane_0135_normalised
+airplane/train/airplane_0157_normalised
+airplane/train/airplane_0243_normalised
+airplane/train/airplane_0223_normalised
+airplane/train/airplane_0580_normalised
+airplane/train/airplane_0332_normalised
+airplane/train/airplane_0071_normalised
+airplane/train/airplane_0064_normalised
+airplane/train/airplane_0538_normalised
+airplane/train/airplane_0179_normalised
+airplane/train/airplane_0480_normalised
+airplane/train/airplane_0098_normalised
+airplane/train/airplane_0574_normalised
+airplane/train/airplane_0293_normalised
+airplane/train/airplane_0225_normalised
+airplane/train/airplane_0488_normalised
+airplane/train/airplane_0266_normalised
+airplane/train/airplane_0305_normalised
+airplane/train/airplane_0568_normalised
+airplane/train/airplane_0575_normalised
+airplane/train/airplane_0072_normalised
+airplane/train/airplane_0309_normalised
+airplane/train/airplane_0529_normalised
+airplane/train/airplane_0147_normalised
+airplane/train/airplane_0198_normalised
+airplane/train/airplane_0051_normalised
+airplane/train/airplane_0062_normalised
+airplane/train/airplane_0352_normalised
+airplane/train/airplane_0043_normalised
+airplane/train/airplane_0600_normalised
+airplane/train/airplane_0171_normalised
+airplane/train/airplane_0616_normalised
+airplane/train/airplane_0610_normalised
+airplane/train/airplane_0602_normalised
+airplane/train/airplane_0334_normalised
+airplane/train/airplane_0202_normalised
+airplane/train/airplane_0131_normalised
+airplane/train/airplane_0431_normalised
+airplane/train/airplane_0533_normalised
+airplane/train/airplane_0450_normalised
+airplane/train/airplane_0570_normalised
+airplane/train/airplane_0321_normalised
+airplane/train/airplane_0001_normalised
+airplane/train/airplane_0231_normalised
+airplane/train/airplane_0138_normalised
+airplane/train/airplane_0369_normalised
+airplane/train/airplane_0551_normalised
+airplane/train/airplane_0141_normalised
+airplane/train/airplane_0270_normalised
+airplane/train/airplane_0524_normalised
+airplane/train/airplane_0185_normalised
+airplane/train/airplane_0212_normalised
+airplane/train/airplane_0349_normalised
+airplane/train/airplane_0422_normalised
+airplane/train/airplane_0512_normalised
+airplane/train/airplane_0459_normalised
+airplane/train/airplane_0122_normalised
+airplane/train/airplane_0429_normalised
+airplane/train/airplane_0256_normalised
+airplane/train/airplane_0136_normalised
+airplane/train/airplane_0337_normalised
+airplane/train/airplane_0010_normalised
+airplane/train/airplane_0176_normalised
+airplane/train/airplane_0556_normalised
+airplane/train/airplane_0508_normalised
+airplane/train/airplane_0561_normalised
+airplane/train/airplane_0146_normalised
+airplane/train/airplane_0288_normalised
+airplane/train/airplane_0425_normalised
+airplane/train/airplane_0235_normalised
+airplane/train/airplane_0501_normalised
+airplane/train/airplane_0447_normalised
+airplane/train/airplane_0437_normalised
+airplane/train/airplane_0308_normalised
+airplane/train/airplane_0615_normalised
+airplane/train/airplane_0042_normalised
+airplane/train/airplane_0336_normalised
+airplane/train/airplane_0124_normalised
+airplane/train/airplane_0481_normalised
+airplane/train/airplane_0322_normalised
+airplane/train/airplane_0399_normalised
+airplane/train/airplane_0201_normalised
+airplane/train/airplane_0601_normalised
+airplane/train/airplane_0233_normalised
+airplane/train/airplane_0560_normalised
+airplane/train/airplane_0274_normalised
+airplane/train/airplane_0031_normalised
+airplane/train/airplane_0244_normalised
+airplane/train/airplane_0193_normalised
+airplane/train/airplane_0563_normalised
+airplane/train/airplane_0404_normalised
+airplane/train/airplane_0530_normalised
+airplane/train/airplane_0599_normalised
+airplane/train/airplane_0419_normalised
+airplane/train/airplane_0350_normalised
+airplane/train/airplane_0378_normalised
+airplane/train/airplane_0506_normalised
+airplane/train/airplane_0536_normalised
+airplane/train/airplane_0598_normalised
+airplane/train/airplane_0472_normalised
+airplane/train/airplane_0194_normalised
+airplane/train/airplane_0390_normalised
+airplane/train/airplane_0061_normalised
+airplane/train/airplane_0166_normalised
+airplane/train/airplane_0237_normalised
+airplane/train/airplane_0395_normalised
+airplane/train/airplane_0453_normalised
+airplane/train/airplane_0424_normalised
+airplane/train/airplane_0155_normalised
+airplane/train/airplane_0359_normalised
+airplane/train/airplane_0398_normalised
+airplane/train/airplane_0497_normalised
+airplane/train/airplane_0493_normalised
+airplane/train/airplane_0108_normalised
+airplane/train/airplane_0513_normalised
+airplane/train/airplane_0365_normalised
+airplane/train/airplane_0301_normalised
+airplane/train/airplane_0432_normalised
+airplane/train/airplane_0306_normalised
+airplane/train/airplane_0468_normalised
+airplane/train/airplane_0044_normalised
+airplane/train/airplane_0531_normalised
+airplane/train/airplane_0167_normalised
+airplane/train/airplane_0552_normalised
+airplane/train/airplane_0514_normalised
+airplane/train/airplane_0004_normalised
+airplane/train/airplane_0542_normalised
+airplane/train/airplane_0106_normalised
+airplane/train/airplane_0269_normalised
+airplane/train/airplane_0255_normalised
+airplane/train/airplane_0479_normalised
+airplane/train/airplane_0267_normalised
+airplane/train/airplane_0027_normalised
+airplane/train/airplane_0442_normalised
+airplane/train/airplane_0490_normalised
+airplane/train/airplane_0099_normalised
+airplane/train/airplane_0458_normalised
+airplane/train/airplane_0504_normalised
+airplane/train/airplane_0413_normalised
+airplane/train/airplane_0435_normalised
+airplane/train/airplane_0271_normalised
+airplane/train/airplane_0236_normalised
+airplane/train/airplane_0410_normalised
+airplane/train/airplane_0030_normalised
+airplane/train/airplane_0370_normalised
+airplane/train/airplane_0379_normalised
+airplane/train/airplane_0371_normalised
+airplane/train/airplane_0191_normalised
+airplane/train/airplane_0213_normalised
+airplane/train/airplane_0385_normalised
+airplane/train/airplane_0091_normalised
+airplane/train/airplane_0040_normalised
+airplane/train/airplane_0177_normalised
+airplane/train/airplane_0140_normalised
+airplane/train/airplane_0241_normalised
+airplane/train/airplane_0290_normalised
+airplane/train/airplane_0548_normalised
+airplane/train/airplane_0471_normalised
+airplane/train/airplane_0546_normalised
+airplane/train/airplane_0218_normalised
+airplane/train/airplane_0333_normalised
+airplane/train/airplane_0547_normalised
+airplane/train/airplane_0523_normalised
+airplane/train/airplane_0272_normalised
+airplane/train/airplane_0611_normalised
+airplane/train/airplane_0373_normalised
+airplane/train/airplane_0234_normalised
+airplane/train/airplane_0302_normalised
+airplane/train/airplane_0263_normalised
+airplane/train/airplane_0344_normalised
+airplane/train/airplane_0566_normalised
+airplane/train/airplane_0314_normalised
+airplane/train/airplane_0022_normalised
+airplane/train/airplane_0409_normalised
+airplane/train/airplane_0500_normalised
+airplane/train/airplane_0446_normalised
+airplane/train/airplane_0339_normalised
+airplane/train/airplane_0582_normalised
+airplane/train/airplane_0572_normalised
+airplane/train/airplane_0265_normalised
+airplane/train/airplane_0248_normalised
+airplane/train/airplane_0361_normalised
+airplane/train/airplane_0145_normalised
+airplane/train/airplane_0211_normalised
+airplane/train/airplane_0509_normalised
+airplane/train/airplane_0118_normalised
+airplane/train/airplane_0358_normalised
+airplane/train/airplane_0324_normalised
+airplane/train/airplane_0107_normalised
+airplane/train/airplane_0114_normalised
+airplane/train/airplane_0393_normalised
+airplane/train/airplane_0518_normalised
+airplane/train/airplane_0278_normalised
+airplane/train/airplane_0063_normalised
+airplane/train/airplane_0183_normalised
+airplane/train/airplane_0364_normalised
+airplane/train/airplane_0595_normalised
+airplane/train/airplane_0368_normalised
+airplane/train/airplane_0553_normalised
+airplane/train/airplane_0342_normalised
+airplane/train/airplane_0113_normalised
+airplane/train/airplane_0046_normalised
+airplane/train/airplane_0386_normalised
+airplane/train/airplane_0125_normalised
+airplane/train/airplane_0005_normalised
+airplane/train/airplane_0104_normalised
+airplane/train/airplane_0340_normalised
+airplane/train/airplane_0357_normalised
+airplane/train/airplane_0624_normalised
+airplane/train/airplane_0323_normalised
+airplane/train/airplane_0038_normalised
+airplane/train/airplane_0239_normalised
+airplane/train/airplane_0232_normalised
+airplane/train/airplane_0160_normalised
+airplane/train/airplane_0569_normalised
+airplane/train/airplane_0119_normalised
+airplane/train/airplane_0311_normalised
+airplane/train/airplane_0525_normalised
+airplane/train/airplane_0476_normalised
+airplane/train/airplane_0549_normalised
+airplane/train/airplane_0541_normalised
+airplane/train/airplane_0299_normalised
+airplane/train/airplane_0478_normalised
+airplane/train/airplane_0254_normalised
+airplane/train/airplane_0461_normalised
+airplane/train/airplane_0584_normalised
+airplane/train/airplane_0123_normalised
+airplane/train/airplane_0073_normalised
+airplane/train/airplane_0434_normalised
+airplane/train/airplane_0116_normalised
+airplane/train/airplane_0057_normalised
+airplane/train/airplane_0331_normalised
+airplane/train/airplane_0304_normalised
+airplane/train/airplane_0058_normalised
+airplane/train/airplane_0354_normalised
+airplane/train/airplane_0491_normalised
+airplane/train/airplane_0268_normalised
+airplane/train/airplane_0619_normalised
+airplane/train/airplane_0056_normalised
+airplane/train/airplane_0346_normalised
+airplane/train/airplane_0436_normalised
+airplane/train/airplane_0134_normalised
+airplane/train/airplane_0464_normalised
+airplane/train/airplane_0319_normalised
+airplane/train/airplane_0228_normalised
+airplane/train/airplane_0287_normalised
+airplane/train/airplane_0335_normalised
+airplane/train/airplane_0625_normalised
+airplane/train/airplane_0250_normalised
+airplane/train/airplane_0032_normalised
+airplane/train/airplane_0596_normalised
+airplane/train/airplane_0080_normalised
+airplane/train/airplane_0220_normalised
+airplane/train/airplane_0189_normalised
+airplane/train/airplane_0180_normalised
+airplane/train/airplane_0587_normalised
+airplane/train/airplane_0296_normalised
+airplane/train/airplane_0317_normalised
+airplane/train/airplane_0059_normalised
+airplane/train/airplane_0623_normalised
+airplane/train/airplane_0153_normalised
+airplane/train/airplane_0173_normalised
+airplane/train/airplane_0475_normalised
+airplane/train/airplane_0103_normalised
+airplane/train/airplane_0460_normalised
+airplane/train/airplane_0382_normalised
+airplane/train/airplane_0406_normalised
+airplane/train/airplane_0084_normalised
+airplane/train/airplane_0366_normalised
+airplane/train/airplane_0416_normalised
+airplane/train/airplane_0117_normalised
+airplane/train/airplane_0330_normalised
+airplane/train/airplane_0246_normalised
+airplane/train/airplane_0161_normalised
+airplane/train/airplane_0012_normalised
+airplane/train/airplane_0327_normalised
+airplane/train/airplane_0487_normalised
+airplane/train/airplane_0407_normalised
+airplane/train/airplane_0037_normalised
+airplane/train/airplane_0463_normalised
+airplane/train/airplane_0588_normalised
+airplane/train/airplane_0170_normalised
+airplane/train/airplane_0174_normalised
+airplane/train/airplane_0605_normalised
+airplane/train/airplane_0528_normalised
+airplane/train/airplane_0028_normalised
+airplane/train/airplane_0172_normalised
+airplane/train/airplane_0510_normalised
+airplane/train/airplane_0482_normalised
+airplane/train/airplane_0326_normalised
+airplane/train/airplane_0132_normalised
+airplane/train/airplane_0182_normalised
+airplane/train/airplane_0209_normalised
+airplane/train/airplane_0076_normalised
+airplane/train/airplane_0517_normalised
+airplane/train/airplane_0126_normalised
+airplane/train/airplane_0430_normalised
+airplane/train/airplane_0054_normalised
+airplane/train/airplane_0018_normalised
+airplane/train/airplane_0590_normalised
+airplane/train/airplane_0036_normalised
+airplane/train/airplane_0519_normalised
+airplane/train/airplane_0577_normalised
+airplane/train/airplane_0292_normalised
+airplane/train/airplane_0282_normalised
+airplane/train/airplane_0397_normalised
+airplane/train/airplane_0507_normalised
+airplane/train/airplane_0315_normalised
+airplane/train/airplane_0592_normalised
+airplane/train/airplane_0092_normalised
+airplane/train/airplane_0186_normalised
+airplane/train/airplane_0375_normalised
+airplane/train/airplane_0085_normalised
+airplane/train/airplane_0418_normalised
+airplane/train/airplane_0094_normalised
+airplane/train/airplane_0557_normalised
+airplane/train/airplane_0564_normalised
+airplane/train/airplane_0013_normalised
+airplane/train/airplane_0093_normalised
+airplane/train/airplane_0184_normalised
+airplane/train/airplane_0535_normalised
+airplane/train/airplane_0597_normalised
+airplane/train/airplane_0204_normalised
+airplane/train/airplane_0581_normalised
+airplane/train/airplane_0608_normalised
+airplane/train/airplane_0127_normalised
+airplane/train/airplane_0088_normalised
+airplane/train/airplane_0017_normalised
+airplane/train/airplane_0394_normalised
+airplane/train/airplane_0276_normalised
+airplane/train/airplane_0540_normalised
+airplane/train/airplane_0011_normalised
+airplane/train/airplane_0050_normalised
+airplane/train/airplane_0341_normalised
+airplane/train/airplane_0325_normalised
+airplane/train/airplane_0003_normalised
+airplane/train/airplane_0515_normalised
+airplane/train/airplane_0583_normalised
+airplane/train/airplane_0230_normalised
+airplane/train/airplane_0197_normalised
+airplane/train/airplane_0586_normalised
+airplane/train/airplane_0023_normalised
+airplane/train/airplane_0210_normalised
+airplane/train/airplane_0462_normalised
+airplane/train/airplane_0240_normalised
+airplane/train/airplane_0077_normalised
+airplane/train/airplane_0428_normalised
+airplane/train/airplane_0558_normalised
+airplane/train/airplane_0144_normalised
+airplane/train/airplane_0206_normalised
+airplane/train/airplane_0585_normalised
+airplane/train/airplane_0115_normalised
+airplane/train/airplane_0521_normalised
+airplane/train/airplane_0400_normalised
+airplane/train/airplane_0520_normalised
+airplane/train/airplane_0195_normalised
+airplane/train/airplane_0433_normalised
+airplane/train/airplane_0391_normalised
+airplane/train/airplane_0196_normalised
+airplane/train/airplane_0312_normalised
+airplane/train/airplane_0591_normalised
+airplane/train/airplane_0578_normalised
+airplane/train/airplane_0139_normalised
+airplane/train/airplane_0392_normalised
+airplane/train/airplane_0143_normalised
+airplane/train/airplane_0247_normalised
+airplane/train/airplane_0498_normalised
+airplane/test/airplane_0663_normalised
+airplane/test/airplane_0679_normalised
+airplane/test/airplane_0715_normalised
+airplane/test/airplane_0714_normalised
+airplane/test/airplane_0685_normalised
+airplane/test/airplane_0712_normalised
+airplane/test/airplane_0655_normalised
+airplane/test/airplane_0632_normalised
+airplane/test/airplane_0656_normalised
+airplane/test/airplane_0661_normalised
+airplane/test/airplane_0660_normalised
+airplane/test/airplane_0726_normalised
+airplane/test/airplane_0689_normalised
+airplane/test/airplane_0676_normalised
+airplane/test/airplane_0673_normalised
+airplane/test/airplane_0635_normalised
+airplane/test/airplane_0659_normalised
+airplane/test/airplane_0641_normalised
+airplane/test/airplane_0684_normalised
+airplane/test/airplane_0664_normalised
+airplane/test/airplane_0696_normalised
+airplane/test/airplane_0719_normalised
+airplane/test/airplane_0693_normalised
+airplane/test/airplane_0674_normalised
+airplane/test/airplane_0683_normalised
+airplane/test/airplane_0705_normalised
+airplane/test/airplane_0720_normalised
+airplane/test/airplane_0721_normalised
+airplane/test/airplane_0682_normalised
+airplane/test/airplane_0718_normalised
+airplane/test/airplane_0638_normalised
+airplane/test/airplane_0692_normalised
+airplane/test/airplane_0636_normalised
+airplane/test/airplane_0643_normalised
+airplane/test/airplane_0688_normalised
+airplane/test/airplane_0686_normalised
+airplane/test/airplane_0699_normalised
+airplane/test/airplane_0653_normalised
+airplane/test/airplane_0627_normalised
+airplane/test/airplane_0701_normalised
+airplane/test/airplane_0666_normalised
+airplane/test/airplane_0667_normalised
+airplane/test/airplane_0698_normalised
+airplane/test/airplane_0680_normalised
+airplane/test/airplane_0713_normalised
+airplane/test/airplane_0703_normalised
+airplane/test/airplane_0690_normalised
+airplane/test/airplane_0651_normalised
+airplane/test/airplane_0675_normalised
+airplane/test/airplane_0725_normalised
+airplane/test/airplane_0630_normalised
+airplane/test/airplane_0707_normalised
+airplane/test/airplane_0700_normalised
+airplane/test/airplane_0649_normalised
+airplane/test/airplane_0710_normalised
+airplane/test/airplane_0704_normalised
+airplane/test/airplane_0662_normalised
+airplane/test/airplane_0717_normalised
+airplane/test/airplane_0631_normalised
+airplane/test/airplane_0670_normalised
+airplane/test/airplane_0629_normalised
+airplane/test/airplane_0716_normalised
+airplane/test/airplane_0711_normalised
+airplane/test/airplane_0654_normalised
+airplane/test/airplane_0648_normalised
+airplane/test/airplane_0702_normalised
+airplane/test/airplane_0646_normalised
+airplane/test/airplane_0665_normalised
+airplane/test/airplane_0723_normalised
+airplane/test/airplane_0671_normalised
+airplane/test/airplane_0658_normalised
+airplane/test/airplane_0669_normalised
+airplane/test/airplane_0722_normalised
+airplane/test/airplane_0647_normalised
+airplane/test/airplane_0634_normalised
+airplane/test/airplane_0695_normalised
+airplane/test/airplane_0709_normalised
+airplane/test/airplane_0681_normalised
+airplane/test/airplane_0642_normalised
+airplane/test/airplane_0637_normalised
+airplane/test/airplane_0628_normalised
+airplane/test/airplane_0645_normalised
+airplane/test/airplane_0691_normalised
+airplane/test/airplane_0639_normalised
+airplane/test/airplane_0644_normalised
+airplane/test/airplane_0640_normalised
+airplane/test/airplane_0694_normalised
+airplane/test/airplane_0677_normalised
+airplane/test/airplane_0633_normalised
+airplane/test/airplane_0724_normalised
+airplane/test/airplane_0708_normalised
+airplane/test/airplane_0706_normalised
+airplane/test/airplane_0652_normalised
+airplane/test/airplane_0697_normalised
+airplane/test/airplane_0657_normalised
+airplane/test/airplane_0650_normalised
+airplane/test/airplane_0672_normalised
+airplane/test/airplane_0668_normalised
+airplane/test/airplane_0687_normalised
+airplane/test/airplane_0678_normalised
+bed/train/bed_0256_normalised
+bed/train/bed_0024_normalised
+bed/train/bed_0231_normalised
+bed/train/bed_0274_normalised
+bed/train/bed_0019_normalised
+bed/train/bed_0473_normalised
+bed/train/bed_0479_normalised
+bed/train/bed_0459_normalised
+bed/train/bed_0226_normalised
+bed/train/bed_0510_normalised
+bed/train/bed_0380_normalised
+bed/train/bed_0210_normalised
+bed/train/bed_0330_normalised
+bed/train/bed_0406_normalised
+bed/train/bed_0152_normalised
+bed/train/bed_0247_normalised
+bed/train/bed_0417_normalised
+bed/train/bed_0269_normalised
+bed/train/bed_0183_normalised
+bed/train/bed_0034_normalised
+bed/train/bed_0466_normalised
+bed/train/bed_0098_normalised
+bed/train/bed_0214_normalised
+bed/train/bed_0112_normalised
+bed/train/bed_0312_normalised
+bed/train/bed_0414_normalised
+bed/train/bed_0444_normalised
+bed/train/bed_0289_normalised
+bed/train/bed_0482_normalised
+bed/train/bed_0388_normalised
+bed/train/bed_0039_normalised
+bed/train/bed_0403_normalised
+bed/train/bed_0091_normalised
+bed/train/bed_0254_normalised
+bed/train/bed_0217_normalised
+bed/train/bed_0199_normalised
+bed/train/bed_0151_normalised
+bed/train/bed_0179_normalised
+bed/train/bed_0263_normalised
+bed/train/bed_0347_normalised
+bed/train/bed_0423_normalised
+bed/train/bed_0047_normalised
+bed/train/bed_0198_normalised
+bed/train/bed_0412_normalised
+bed/train/bed_0360_normalised
+bed/train/bed_0215_normalised
+bed/train/bed_0232_normalised
+bed/train/bed_0086_normalised
+bed/train/bed_0244_normalised
+bed/train/bed_0176_normalised
+bed/train/bed_0202_normalised
+bed/train/bed_0381_normalised
+bed/train/bed_0040_normalised
+bed/train/bed_0208_normalised
+bed/train/bed_0372_normalised
+bed/train/bed_0105_normalised
+bed/train/bed_0213_normalised
+bed/train/bed_0509_normalised
+bed/train/bed_0087_normalised
+bed/train/bed_0261_normalised
+bed/train/bed_0113_normalised
+bed/train/bed_0322_normalised
+bed/train/bed_0084_normalised
+bed/train/bed_0300_normalised
+bed/train/bed_0346_normalised
+bed/train/bed_0278_normalised
+bed/train/bed_0445_normalised
+bed/train/bed_0480_normalised
+bed/train/bed_0442_normalised
+bed/train/bed_0374_normalised
+bed/train/bed_0186_normalised
+bed/train/bed_0069_normalised
+bed/train/bed_0295_normalised
+bed/train/bed_0108_normalised
+bed/train/bed_0398_normalised
+bed/train/bed_0235_normalised
+bed/train/bed_0262_normalised
+bed/train/bed_0121_normalised
+bed/train/bed_0138_normalised
+bed/train/bed_0123_normalised
+bed/train/bed_0264_normalised
+bed/train/bed_0006_normalised
+bed/train/bed_0207_normalised
+bed/train/bed_0157_normalised
+bed/train/bed_0177_normalised
+bed/train/bed_0290_normalised
+bed/train/bed_0227_normalised
+bed/train/bed_0204_normalised
+bed/train/bed_0369_normalised
+bed/train/bed_0255_normalised
+bed/train/bed_0106_normalised
+bed/train/bed_0464_normalised
+bed/train/bed_0441_normalised
+bed/train/bed_0476_normalised
+bed/train/bed_0494_normalised
+bed/train/bed_0275_normalised
+bed/train/bed_0209_normalised
+bed/train/bed_0085_normalised
+bed/train/bed_0250_normalised
+bed/train/bed_0059_normalised
+bed/train/bed_0092_normalised
+bed/train/bed_0484_normalised
+bed/train/bed_0161_normalised
+bed/train/bed_0148_normalised
+bed/train/bed_0338_normalised
+bed/train/bed_0373_normalised
+bed/train/bed_0429_normalised
+bed/train/bed_0143_normalised
+bed/train/bed_0259_normalised
+bed/train/bed_0072_normalised
+bed/train/bed_0238_normalised
+bed/train/bed_0499_normalised
+bed/train/bed_0125_normalised
+bed/train/bed_0243_normalised
+bed/train/bed_0014_normalised
+bed/train/bed_0437_normalised
+bed/train/bed_0501_normalised
+bed/train/bed_0149_normalised
+bed/train/bed_0028_normalised
+bed/train/bed_0508_normalised
+bed/train/bed_0058_normalised
+bed/train/bed_0063_normalised
+bed/train/bed_0361_normalised
+bed/train/bed_0162_normalised
+bed/train/bed_0382_normalised
+bed/train/bed_0164_normalised
+bed/train/bed_0422_normalised
+bed/train/bed_0399_normalised
+bed/train/bed_0007_normalised
+bed/train/bed_0200_normalised
+bed/train/bed_0296_normalised
+bed/train/bed_0206_normalised
+bed/train/bed_0513_normalised
+bed/train/bed_0142_normalised
+bed/train/bed_0397_normalised
+bed/train/bed_0158_normalised
+bed/train/bed_0356_normalised
+bed/train/bed_0135_normalised
+bed/train/bed_0428_normalised
+bed/train/bed_0321_normalised
+bed/train/bed_0310_normalised
+bed/train/bed_0337_normalised
+bed/train/bed_0229_normalised
+bed/train/bed_0225_normalised
+bed/train/bed_0396_normalised
+bed/train/bed_0435_normalised
+bed/train/bed_0065_normalised
+bed/train/bed_0379_normalised
+bed/train/bed_0454_normalised
+bed/train/bed_0401_normalised
+bed/train/bed_0491_normalised
+bed/train/bed_0391_normalised
+bed/train/bed_0050_normalised
+bed/train/bed_0432_normalised
+bed/train/bed_0070_normalised
+bed/train/bed_0088_normalised
+bed/train/bed_0390_normalised
+bed/train/bed_0316_normalised
+bed/train/bed_0309_normalised
+bed/train/bed_0068_normalised
+bed/train/bed_0468_normalised
+bed/train/bed_0487_normalised
+bed/train/bed_0377_normalised
+bed/train/bed_0251_normalised
+bed/train/bed_0234_normalised
+bed/train/bed_0122_normalised
+bed/train/bed_0236_normalised
+bed/train/bed_0449_normalised
+bed/train/bed_0450_normalised
+bed/train/bed_0237_normalised
+bed/train/bed_0409_normalised
+bed/train/bed_0258_normalised
+bed/train/bed_0026_normalised
+bed/train/bed_0418_normalised
+bed/train/bed_0440_normalised
+bed/train/bed_0083_normalised
+bed/train/bed_0497_normalised
+bed/train/bed_0283_normalised
+bed/train/bed_0228_normalised
+bed/train/bed_0097_normalised
+bed/train/bed_0172_normalised
+bed/train/bed_0293_normalised
+bed/train/bed_0090_normalised
+bed/train/bed_0498_normalised
+bed/train/bed_0191_normalised
+bed/train/bed_0302_normalised
+bed/train/bed_0016_normalised
+bed/train/bed_0153_normalised
+bed/train/bed_0425_normalised
+bed/train/bed_0294_normalised
+bed/train/bed_0452_normalised
+bed/train/bed_0292_normalised
+bed/train/bed_0064_normalised
+bed/train/bed_0168_normalised
+bed/train/bed_0018_normalised
+bed/train/bed_0169_normalised
+bed/train/bed_0431_normalised
+bed/train/bed_0095_normalised
+bed/train/bed_0467_normalised
+bed/train/bed_0471_normalised
+bed/train/bed_0324_normalised
+bed/train/bed_0277_normalised
+bed/train/bed_0465_normalised
+bed/train/bed_0368_normalised
+bed/train/bed_0266_normalised
+bed/train/bed_0001_normalised
+bed/train/bed_0375_normalised
+bed/train/bed_0419_normalised
+bed/train/bed_0311_normalised
+bed/train/bed_0053_normalised
+bed/train/bed_0504_normalised
+bed/train/bed_0370_normalised
+bed/train/bed_0359_normalised
+bed/train/bed_0140_normalised
+bed/train/bed_0196_normalised
+bed/train/bed_0076_normalised
+bed/train/bed_0486_normalised
+bed/train/bed_0233_normalised
+bed/train/bed_0462_normalised
+bed/train/bed_0389_normalised
+bed/train/bed_0130_normalised
+bed/train/bed_0015_normalised
+bed/train/bed_0071_normalised
+bed/train/bed_0514_normalised
+bed/train/bed_0230_normalised
+bed/train/bed_0308_normalised
+bed/train/bed_0416_normalised
+bed/train/bed_0474_normalised
+bed/train/bed_0045_normalised
+bed/train/bed_0009_normalised
+bed/train/bed_0220_normalised
+bed/train/bed_0386_normalised
+bed/train/bed_0333_normalised
+bed/train/bed_0271_normalised
+bed/train/bed_0502_normalised
+bed/train/bed_0279_normalised
+bed/train/bed_0049_normalised
+bed/train/bed_0469_normalised
+bed/train/bed_0156_normalised
+bed/train/bed_0107_normalised
+bed/train/bed_0131_normalised
+bed/train/bed_0089_normalised
+bed/train/bed_0348_normalised
+bed/train/bed_0031_normalised
+bed/train/bed_0297_normalised
+bed/train/bed_0257_normalised
+bed/train/bed_0137_normalised
+bed/train/bed_0248_normalised
+bed/train/bed_0002_normalised
+bed/train/bed_0500_normalised
+bed/train/bed_0030_normalised
+bed/train/bed_0342_normalised
+bed/train/bed_0493_normalised
+bed/train/bed_0180_normalised
+bed/train/bed_0411_normalised
+bed/train/bed_0400_normalised
+bed/train/bed_0323_normalised
+bed/train/bed_0212_normalised
+bed/train/bed_0287_normalised
+bed/train/bed_0003_normalised
+bed/train/bed_0241_normalised
+bed/train/bed_0453_normalised
+bed/train/bed_0426_normalised
+bed/train/bed_0366_normalised
+bed/train/bed_0410_normalised
+bed/train/bed_0495_normalised
+bed/train/bed_0221_normalised
+bed/train/bed_0011_normalised
+bed/train/bed_0004_normalised
+bed/train/bed_0515_normalised
+bed/train/bed_0281_normalised
+bed/train/bed_0189_normalised
+bed/train/bed_0080_normalised
+bed/train/bed_0242_normalised
+bed/train/bed_0349_normalised
+bed/train/bed_0298_normalised
+bed/train/bed_0027_normalised
+bed/train/bed_0332_normalised
+bed/train/bed_0331_normalised
+bed/train/bed_0132_normalised
+bed/train/bed_0363_normalised
+bed/train/bed_0352_normalised
+bed/train/bed_0096_normalised
+bed/train/bed_0329_normalised
+bed/train/bed_0334_normalised
+bed/train/bed_0194_normalised
+bed/train/bed_0421_normalised
+bed/train/bed_0415_normalised
+bed/train/bed_0430_normalised
+bed/train/bed_0507_normalised
+bed/train/bed_0218_normalised
+bed/train/bed_0012_normalised
+bed/train/bed_0351_normalised
+bed/train/bed_0029_normalised
+bed/train/bed_0365_normalised
+bed/train/bed_0187_normalised
+bed/train/bed_0489_normalised
+bed/train/bed_0060_normalised
+bed/train/bed_0032_normalised
+bed/train/bed_0117_normalised
+bed/train/bed_0075_normalised
+bed/train/bed_0420_normalised
+bed/train/bed_0376_normalised
+bed/train/bed_0239_normalised
+bed/train/bed_0394_normalised
+bed/train/bed_0163_normalised
+bed/train/bed_0013_normalised
+bed/train/bed_0224_normalised
+bed/train/bed_0339_normalised
+bed/train/bed_0078_normalised
+bed/train/bed_0171_normalised
+bed/train/bed_0166_normalised
+bed/train/bed_0114_normalised
+bed/train/bed_0195_normalised
+bed/train/bed_0506_normalised
+bed/train/bed_0355_normalised
+bed/train/bed_0273_normalised
+bed/train/bed_0318_normalised
+bed/train/bed_0126_normalised
+bed/train/bed_0350_normalised
+bed/train/bed_0124_normalised
+bed/train/bed_0427_normalised
+bed/train/bed_0035_normalised
+bed/train/bed_0245_normalised
+bed/train/bed_0005_normalised
+bed/train/bed_0101_normalised
+bed/train/bed_0038_normalised
+bed/train/bed_0182_normalised
+bed/train/bed_0285_normalised
+bed/train/bed_0477_normalised
+bed/train/bed_0336_normalised
+bed/train/bed_0488_normalised
+bed/train/bed_0110_normalised
+bed/train/bed_0252_normalised
+bed/train/bed_0472_normalised
+bed/train/bed_0343_normalised
+bed/train/bed_0364_normalised
+bed/train/bed_0223_normalised
+bed/train/bed_0133_normalised
+bed/train/bed_0461_normalised
+bed/train/bed_0240_normalised
+bed/train/bed_0136_normalised
+bed/train/bed_0127_normalised
+bed/train/bed_0433_normalised
+bed/train/bed_0128_normalised
+bed/train/bed_0211_normalised
+bed/train/bed_0503_normalised
+bed/train/bed_0178_normalised
+bed/train/bed_0276_normalised
+bed/train/bed_0115_normalised
+bed/train/bed_0044_normalised
+bed/train/bed_0328_normalised
+bed/train/bed_0492_normalised
+bed/train/bed_0505_normalised
+bed/train/bed_0301_normalised
+bed/train/bed_0150_normalised
+bed/train/bed_0061_normalised
+bed/train/bed_0042_normalised
+bed/train/bed_0190_normalised
+bed/train/bed_0094_normalised
+bed/train/bed_0129_normalised
+bed/train/bed_0057_normalised
+bed/train/bed_0303_normalised
+bed/train/bed_0458_normalised
+bed/train/bed_0272_normalised
+bed/train/bed_0483_normalised
+bed/train/bed_0307_normalised
+bed/train/bed_0134_normalised
+bed/train/bed_0043_normalised
+bed/train/bed_0017_normalised
+bed/train/bed_0268_normalised
+bed/train/bed_0260_normalised
+bed/train/bed_0008_normalised
+bed/train/bed_0313_normalised
+bed/train/bed_0052_normalised
+bed/train/bed_0056_normalised
+bed/train/bed_0041_normalised
+bed/train/bed_0284_normalised
+bed/train/bed_0304_normalised
+bed/train/bed_0066_normalised
+bed/train/bed_0253_normalised
+bed/train/bed_0446_normalised
+bed/train/bed_0104_normalised
+bed/train/bed_0345_normalised
+bed/train/bed_0246_normalised
+bed/train/bed_0413_normalised
+bed/train/bed_0340_normalised
+bed/train/bed_0305_normalised
+bed/train/bed_0358_normalised
+bed/train/bed_0378_normalised
+bed/train/bed_0512_normalised
+bed/train/bed_0270_normalised
+bed/train/bed_0451_normalised
+bed/train/bed_0455_normalised
+bed/train/bed_0288_normalised
+bed/train/bed_0371_normalised
+bed/train/bed_0282_normalised
+bed/train/bed_0341_normalised
+bed/train/bed_0315_normalised
+bed/train/bed_0188_normalised
+bed/train/bed_0170_normalised
+bed/train/bed_0073_normalised
+bed/train/bed_0319_normalised
+bed/train/bed_0408_normalised
+bed/train/bed_0120_normalised
+bed/train/bed_0033_normalised
+bed/train/bed_0299_normalised
+bed/train/bed_0103_normalised
+bed/train/bed_0010_normalised
+bed/train/bed_0280_normalised
+bed/train/bed_0286_normalised
+bed/train/bed_0141_normalised
+bed/train/bed_0249_normalised
+bed/train/bed_0353_normalised
+bed/train/bed_0357_normalised
+bed/train/bed_0203_normalised
+bed/train/bed_0438_normalised
+bed/train/bed_0478_normalised
+bed/train/bed_0174_normalised
+bed/train/bed_0184_normalised
+bed/train/bed_0100_normalised
+bed/train/bed_0448_normalised
+bed/train/bed_0165_normalised
+bed/train/bed_0384_normalised
+bed/train/bed_0062_normalised
+bed/train/bed_0205_normalised
+bed/train/bed_0109_normalised
+bed/train/bed_0079_normalised
+bed/train/bed_0470_normalised
+bed/train/bed_0155_normalised
+bed/train/bed_0402_normalised
+bed/train/bed_0393_normalised
+bed/train/bed_0222_normalised
+bed/train/bed_0081_normalised
+bed/train/bed_0023_normalised
+bed/train/bed_0463_normalised
+bed/train/bed_0119_normalised
+bed/train/bed_0102_normalised
+bed/train/bed_0496_normalised
+bed/train/bed_0192_normalised
+bed/train/bed_0082_normalised
+bed/train/bed_0116_normalised
+bed/train/bed_0362_normalised
+bed/train/bed_0077_normalised
+bed/train/bed_0439_normalised
+bed/train/bed_0424_normalised
+bed/train/bed_0118_normalised
+bed/train/bed_0485_normalised
+bed/train/bed_0048_normalised
+bed/train/bed_0074_normalised
+bed/train/bed_0146_normalised
+bed/train/bed_0219_normalised
+bed/train/bed_0326_normalised
+bed/train/bed_0093_normalised
+bed/train/bed_0265_normalised
+bed/train/bed_0436_normalised
+bed/train/bed_0020_normalised
+bed/train/bed_0022_normalised
+bed/train/bed_0325_normalised
+bed/train/bed_0291_normalised
+bed/train/bed_0159_normalised
+bed/train/bed_0111_normalised
+bed/train/bed_0144_normalised
+bed/train/bed_0335_normalised
+bed/train/bed_0267_normalised
+bed/train/bed_0317_normalised
+bed/train/bed_0395_normalised
+bed/train/bed_0054_normalised
+bed/train/bed_0456_normalised
+bed/train/bed_0185_normalised
+bed/train/bed_0197_normalised
+bed/train/bed_0099_normalised
+bed/train/bed_0036_normalised
+bed/train/bed_0037_normalised
+bed/train/bed_0481_normalised
+bed/train/bed_0404_normalised
+bed/train/bed_0306_normalised
+bed/train/bed_0443_normalised
+bed/train/bed_0167_normalised
+bed/train/bed_0147_normalised
+bed/train/bed_0490_normalised
+bed/train/bed_0139_normalised
+bed/train/bed_0216_normalised
+bed/train/bed_0320_normalised
+bed/train/bed_0055_normalised
+bed/train/bed_0344_normalised
+bed/train/bed_0145_normalised
+bed/train/bed_0511_normalised
+bed/train/bed_0327_normalised
+bed/train/bed_0407_normalised
+bed/train/bed_0025_normalised
+bed/train/bed_0021_normalised
+bed/train/bed_0367_normalised
+bed/train/bed_0460_normalised
+bed/train/bed_0434_normalised
+bed/train/bed_0173_normalised
+bed/train/bed_0046_normalised
+bed/train/bed_0392_normalised
+bed/train/bed_0154_normalised
+bed/train/bed_0201_normalised
+bed/train/bed_0193_normalised
+bed/train/bed_0447_normalised
+bed/train/bed_0457_normalised
+bed/train/bed_0354_normalised
+bed/train/bed_0314_normalised
+bed/train/bed_0387_normalised
+bed/train/bed_0405_normalised
+bed/train/bed_0067_normalised
+bed/train/bed_0475_normalised
+bed/train/bed_0383_normalised
+bed/train/bed_0175_normalised
+bed/train/bed_0385_normalised
+bed/train/bed_0160_normalised
+bed/train/bed_0181_normalised
+bed/train/bed_0051_normalised
+bed/test/bed_0542_normalised
+bed/test/bed_0517_normalised
+bed/test/bed_0599_normalised
+bed/test/bed_0596_normalised
+bed/test/bed_0606_normalised
+bed/test/bed_0546_normalised
+bed/test/bed_0593_normalised
+bed/test/bed_0550_normalised
+bed/test/bed_0547_normalised
+bed/test/bed_0579_normalised
+bed/test/bed_0565_normalised
+bed/test/bed_0545_normalised
+bed/test/bed_0595_normalised
+bed/test/bed_0532_normalised
+bed/test/bed_0609_normalised
+bed/test/bed_0584_normalised
+bed/test/bed_0533_normalised
+bed/test/bed_0571_normalised
+bed/test/bed_0585_normalised
+bed/test/bed_0568_normalised
+bed/test/bed_0572_normalised
+bed/test/bed_0578_normalised
+bed/test/bed_0588_normalised
+bed/test/bed_0520_normalised
+bed/test/bed_0583_normalised
+bed/test/bed_0541_normalised
+bed/test/bed_0592_normalised
+bed/test/bed_0523_normalised
+bed/test/bed_0516_normalised
+bed/test/bed_0582_normalised
+bed/test/bed_0563_normalised
+bed/test/bed_0519_normalised
+bed/test/bed_0554_normalised
+bed/test/bed_0581_normalised
+bed/test/bed_0525_normalised
+bed/test/bed_0536_normalised
+bed/test/bed_0586_normalised
+bed/test/bed_0527_normalised
+bed/test/bed_0573_normalised
+bed/test/bed_0567_normalised
+bed/test/bed_0543_normalised
+bed/test/bed_0587_normalised
+bed/test/bed_0603_normalised
+bed/test/bed_0549_normalised
+bed/test/bed_0524_normalised
+bed/test/bed_0570_normalised
+bed/test/bed_0521_normalised
+bed/test/bed_0594_normalised
+bed/test/bed_0522_normalised
+bed/test/bed_0589_normalised
+bed/test/bed_0598_normalised
+bed/test/bed_0531_normalised
+bed/test/bed_0566_normalised
+bed/test/bed_0601_normalised
+bed/test/bed_0551_normalised
+bed/test/bed_0530_normalised
+bed/test/bed_0559_normalised
+bed/test/bed_0560_normalised
+bed/test/bed_0615_normalised
+bed/test/bed_0544_normalised
+bed/test/bed_0610_normalised
+bed/test/bed_0518_normalised
+bed/test/bed_0539_normalised
+bed/test/bed_0607_normalised
+bed/test/bed_0535_normalised
+bed/test/bed_0611_normalised
+bed/test/bed_0577_normalised
+bed/test/bed_0602_normalised
+bed/test/bed_0576_normalised
+bed/test/bed_0534_normalised
+bed/test/bed_0538_normalised
+bed/test/bed_0528_normalised
+bed/test/bed_0604_normalised
+bed/test/bed_0562_normalised
+bed/test/bed_0612_normalised
+bed/test/bed_0591_normalised
+bed/test/bed_0555_normalised
+bed/test/bed_0597_normalised
+bed/test/bed_0553_normalised
+bed/test/bed_0558_normalised
+bed/test/bed_0574_normalised
+bed/test/bed_0526_normalised
+bed/test/bed_0556_normalised
+bed/test/bed_0540_normalised
+bed/test/bed_0575_normalised
+bed/test/bed_0580_normalised
+bed/test/bed_0608_normalised
+bed/test/bed_0552_normalised
+bed/test/bed_0600_normalised
+bed/test/bed_0564_normalised
+bed/test/bed_0605_normalised
+bed/test/bed_0613_normalised
+bed/test/bed_0590_normalised
+bed/test/bed_0614_normalised
+bed/test/bed_0561_normalised
+bed/test/bed_0569_normalised
+bed/test/bed_0548_normalised
+bed/test/bed_0557_normalised
+bed/test/bed_0537_normalised
+bed/test/bed_0529_normalised
+stool/train/stool_0068_normalised
+stool/train/stool_0085_normalised
+stool/train/stool_0086_normalised
+stool/train/stool_0012_normalised
+stool/train/stool_0021_normalised
+stool/train/stool_0080_normalised
+stool/train/stool_0053_normalised
+stool/train/stool_0054_normalised
+stool/train/stool_0088_normalised
+stool/train/stool_0056_normalised
+stool/train/stool_0070_normalised
+stool/train/stool_0028_normalised
+stool/train/stool_0030_normalised
+stool/train/stool_0079_normalised
+stool/train/stool_0007_normalised
+stool/train/stool_0033_normalised
+stool/train/stool_0062_normalised
+stool/train/stool_0066_normalised
+stool/train/stool_0071_normalised
+stool/train/stool_0002_normalised
+stool/train/stool_0069_normalised
+stool/train/stool_0084_normalised
+stool/train/stool_0075_normalised
+stool/train/stool_0036_normalised
+stool/train/stool_0045_normalised
+stool/train/stool_0087_normalised
+stool/train/stool_0063_normalised
+stool/train/stool_0017_normalised
+stool/train/stool_0003_normalised
+stool/train/stool_0044_normalised
+stool/train/stool_0061_normalised
+stool/train/stool_0041_normalised
+stool/train/stool_0077_normalised
+stool/train/stool_0010_normalised
+stool/train/stool_0076_normalised
+stool/train/stool_0046_normalised
+stool/train/stool_0055_normalised
+stool/train/stool_0051_normalised
+stool/train/stool_0005_normalised
+stool/train/stool_0043_normalised
+stool/train/stool_0027_normalised
+stool/train/stool_0038_normalised
+stool/train/stool_0073_normalised
+stool/train/stool_0034_normalised
+stool/train/stool_0047_normalised
+stool/train/stool_0048_normalised
+stool/train/stool_0018_normalised
+stool/train/stool_0065_normalised
+stool/train/stool_0014_normalised
+stool/train/stool_0031_normalised
+stool/train/stool_0004_normalised
+stool/train/stool_0026_normalised
+stool/train/stool_0081_normalised
+stool/train/stool_0032_normalised
+stool/train/stool_0008_normalised
+stool/train/stool_0059_normalised
+stool/train/stool_0052_normalised
+stool/train/stool_0082_normalised
+stool/train/stool_0029_normalised
+stool/train/stool_0013_normalised
+stool/train/stool_0067_normalised
+stool/train/stool_0090_normalised
+stool/train/stool_0024_normalised
+stool/train/stool_0042_normalised
+stool/train/stool_0006_normalised
+stool/train/stool_0022_normalised
+stool/train/stool_0037_normalised
+stool/train/stool_0035_normalised
+stool/train/stool_0083_normalised
+stool/train/stool_0058_normalised
+stool/train/stool_0039_normalised
+stool/train/stool_0060_normalised
+stool/train/stool_0016_normalised
+stool/train/stool_0049_normalised
+stool/train/stool_0025_normalised
+stool/train/stool_0089_normalised
+stool/train/stool_0023_normalised
+stool/train/stool_0009_normalised
+stool/train/stool_0057_normalised
+stool/train/stool_0019_normalised
+stool/train/stool_0001_normalised
+stool/train/stool_0074_normalised
+stool/train/stool_0078_normalised
+stool/train/stool_0020_normalised
+stool/train/stool_0050_normalised
+stool/train/stool_0011_normalised
+stool/train/stool_0040_normalised
+stool/train/stool_0064_normalised
+stool/train/stool_0015_normalised
+stool/train/stool_0072_normalised
+stool/test/stool_0092_normalised
+stool/test/stool_0108_normalised
+stool/test/stool_0100_normalised
+stool/test/stool_0110_normalised
+stool/test/stool_0091_normalised
+stool/test/stool_0097_normalised
+stool/test/stool_0103_normalised
+stool/test/stool_0105_normalised
+stool/test/stool_0109_normalised
+stool/test/stool_0095_normalised
+stool/test/stool_0106_normalised
+stool/test/stool_0101_normalised
+stool/test/stool_0098_normalised
+stool/test/stool_0094_normalised
+stool/test/stool_0093_normalised
+stool/test/stool_0099_normalised
+stool/test/stool_0107_normalised
+stool/test/stool_0096_normalised
+stool/test/stool_0102_normalised
+stool/test/stool_0104_normalised
+person/train/person_0042_normalised
+person/train/person_0064_normalised
+person/train/person_0059_normalised
+person/train/person_0045_normalised
+person/train/person_0014_normalised
+person/train/person_0080_normalised
+person/train/person_0025_normalised
+person/train/person_0049_normalised
+person/train/person_0085_normalised
+person/train/person_0007_normalised
+person/train/person_0058_normalised
+person/train/person_0067_normalised
+person/train/person_0047_normalised
+person/train/person_0086_normalised
+person/train/person_0038_normalised
+person/train/person_0066_normalised
+person/train/person_0037_normalised
+person/train/person_0016_normalised
+person/train/person_0074_normalised
+person/train/person_0062_normalised
+person/train/person_0072_normalised
+person/train/person_0001_normalised
+person/train/person_0075_normalised
+person/train/person_0034_normalised
+person/train/person_0020_normalised
+person/train/person_0004_normalised
+person/train/person_0063_normalised
+person/train/person_0010_normalised
+person/train/person_0008_normalised
+person/train/person_0028_normalised
+person/train/person_0065_normalised
+person/train/person_0012_normalised
+person/train/person_0030_normalised
+person/train/person_0050_normalised
+person/train/person_0056_normalised
+person/train/person_0044_normalised
+person/train/person_0051_normalised
+person/train/person_0031_normalised
+person/train/person_0083_normalised
+person/train/person_0035_normalised
+person/train/person_0057_normalised
+person/train/person_0023_normalised
+person/train/person_0039_normalised
+person/train/person_0077_normalised
+person/train/person_0053_normalised
+person/train/person_0017_normalised
+person/train/person_0032_normalised
+person/train/person_0036_normalised
+person/train/person_0055_normalised
+person/train/person_0076_normalised
+person/train/person_0052_normalised
+person/train/person_0003_normalised
+person/train/person_0005_normalised
+person/train/person_0011_normalised
+person/train/person_0033_normalised
+person/train/person_0068_normalised
+person/train/person_0006_normalised
+person/train/person_0013_normalised
+person/train/person_0040_normalised
+person/train/person_0027_normalised
+person/train/person_0070_normalised
+person/train/person_0078_normalised
+person/train/person_0084_normalised
+person/train/person_0054_normalised
+person/train/person_0079_normalised
+person/train/person_0019_normalised
+person/train/person_0043_normalised
+person/train/person_0026_normalised
+person/train/person_0081_normalised
+person/train/person_0024_normalised
+person/train/person_0069_normalised
+person/train/person_0046_normalised
+person/train/person_0022_normalised
+person/train/person_0018_normalised
+person/train/person_0073_normalised
+person/train/person_0088_normalised
+person/train/person_0087_normalised
+person/train/person_0060_normalised
+person/train/person_0015_normalised
+person/train/person_0082_normalised
+person/train/person_0071_normalised
+person/train/person_0061_normalised
+person/train/person_0002_normalised
+person/train/person_0029_normalised
+person/train/person_0041_normalised
+person/train/person_0048_normalised
+person/train/person_0009_normalised
+person/train/person_0021_normalised
+person/test/person_0103_normalised
+person/test/person_0099_normalised
+person/test/person_0094_normalised
+person/test/person_0089_normalised
+person/test/person_0096_normalised
+person/test/person_0092_normalised
+person/test/person_0105_normalised
+person/test/person_0108_normalised
+person/test/person_0097_normalised
+person/test/person_0098_normalised
+person/test/person_0095_normalised
+person/test/person_0101_normalised
+person/test/person_0093_normalised
+person/test/person_0091_normalised
+person/test/person_0100_normalised
+person/test/person_0104_normalised
+person/test/person_0090_normalised
+person/test/person_0102_normalised
+person/test/person_0107_normalised
+person/test/person_0106_normalised
+xbox/train/xbox_0076_normalised
+xbox/train/xbox_0056_normalised
+xbox/train/xbox_0084_normalised
+xbox/train/xbox_0038_normalised
+xbox/train/xbox_0087_normalised
+xbox/train/xbox_0008_normalised
+xbox/train/xbox_0030_normalised
+xbox/train/xbox_0017_normalised
+xbox/train/xbox_0092_normalised
+xbox/train/xbox_0020_normalised
+xbox/train/xbox_0096_normalised
+xbox/train/xbox_0025_normalised
+xbox/train/xbox_0034_normalised
+xbox/train/xbox_0055_normalised
+xbox/train/xbox_0098_normalised
+xbox/train/xbox_0004_normalised
+xbox/train/xbox_0062_normalised
+xbox/train/xbox_0095_normalised
+xbox/train/xbox_0050_normalised
+xbox/train/xbox_0019_normalised
+xbox/train/xbox_0058_normalised
+xbox/train/xbox_0026_normalised
+xbox/train/xbox_0013_normalised
+xbox/train/xbox_0072_normalised
+xbox/train/xbox_0044_normalised
+xbox/train/xbox_0073_normalised
+xbox/train/xbox_0065_normalised
+xbox/train/xbox_0033_normalised
+xbox/train/xbox_0043_normalised
+xbox/train/xbox_0060_normalised
+xbox/train/xbox_0007_normalised
+xbox/train/xbox_0089_normalised
+xbox/train/xbox_0088_normalised
+xbox/train/xbox_0036_normalised
+xbox/train/xbox_0049_normalised
+xbox/train/xbox_0077_normalised
+xbox/train/xbox_0071_normalised
+xbox/train/xbox_0091_normalised
+xbox/train/xbox_0037_normalised
+xbox/train/xbox_0075_normalised
+xbox/train/xbox_0048_normalised
+xbox/train/xbox_0045_normalised
+xbox/train/xbox_0046_normalised
+xbox/train/xbox_0021_normalised
+xbox/train/xbox_0015_normalised
+xbox/train/xbox_0028_normalised
+xbox/train/xbox_0029_normalised
+xbox/train/xbox_0064_normalised
+xbox/train/xbox_0001_normalised
+xbox/train/xbox_0014_normalised
+xbox/train/xbox_0090_normalised
+xbox/train/xbox_0052_normalised
+xbox/train/xbox_0022_normalised
+xbox/train/xbox_0018_normalised
+xbox/train/xbox_0039_normalised
+xbox/train/xbox_0085_normalised
+xbox/train/xbox_0070_normalised
+xbox/train/xbox_0003_normalised
+xbox/train/xbox_0009_normalised
+xbox/train/xbox_0061_normalised
+xbox/train/xbox_0006_normalised
+xbox/train/xbox_0097_normalised
+xbox/train/xbox_0066_normalised
+xbox/train/xbox_0051_normalised
+xbox/train/xbox_0032_normalised
+xbox/train/xbox_0059_normalised
+xbox/train/xbox_0023_normalised
+xbox/train/xbox_0005_normalised
+xbox/train/xbox_0035_normalised
+xbox/train/xbox_0074_normalised
+xbox/train/xbox_0103_normalised
+xbox/train/xbox_0042_normalised
+xbox/train/xbox_0079_normalised
+xbox/train/xbox_0047_normalised
+xbox/train/xbox_0102_normalised
+xbox/train/xbox_0031_normalised
+xbox/train/xbox_0053_normalised
+xbox/train/xbox_0057_normalised
+xbox/train/xbox_0002_normalised
+xbox/train/xbox_0081_normalised
+xbox/train/xbox_0094_normalised
+xbox/train/xbox_0100_normalised
+xbox/train/xbox_0099_normalised
+xbox/train/xbox_0068_normalised
+xbox/train/xbox_0086_normalised
+xbox/train/xbox_0069_normalised
+xbox/train/xbox_0067_normalised
+xbox/train/xbox_0080_normalised
+xbox/train/xbox_0024_normalised
+xbox/train/xbox_0016_normalised
+xbox/train/xbox_0010_normalised
+xbox/train/xbox_0101_normalised
+xbox/train/xbox_0012_normalised
+xbox/train/xbox_0027_normalised
+xbox/train/xbox_0041_normalised
+xbox/train/xbox_0063_normalised
+xbox/train/xbox_0040_normalised
+xbox/train/xbox_0093_normalised
+xbox/train/xbox_0082_normalised
+xbox/train/xbox_0011_normalised
+xbox/train/xbox_0083_normalised
+xbox/train/xbox_0078_normalised
+xbox/train/xbox_0054_normalised
+xbox/test/xbox_0120_normalised
+xbox/test/xbox_0109_normalised
+xbox/test/xbox_0106_normalised
+xbox/test/xbox_0112_normalised
+xbox/test/xbox_0121_normalised
+xbox/test/xbox_0113_normalised
+xbox/test/xbox_0107_normalised
+xbox/test/xbox_0108_normalised
+xbox/test/xbox_0123_normalised
+xbox/test/xbox_0116_normalised
+xbox/test/xbox_0114_normalised
+xbox/test/xbox_0118_normalised
+xbox/test/xbox_0115_normalised
+xbox/test/xbox_0117_normalised
+xbox/test/xbox_0105_normalised
+xbox/test/xbox_0104_normalised
+xbox/test/xbox_0119_normalised
+xbox/test/xbox_0110_normalised
+xbox/test/xbox_0111_normalised
+xbox/test/xbox_0122_normalised
+chair/train/chair_0629_normalised
+chair/train/chair_0099_normalised
+chair/train/chair_0859_normalised
+chair/train/chair_0602_normalised
+chair/train/chair_0362_normalised
+chair/train/chair_0261_normalised
+chair/train/chair_0150_normalised
+chair/train/chair_0651_normalised
+chair/train/chair_0850_normalised
+chair/train/chair_0393_normalised
+chair/train/chair_0579_normalised
+chair/train/chair_0594_normalised
+chair/train/chair_0185_normalised
+chair/train/chair_0761_normalised
+chair/train/chair_0209_normalised
+chair/train/chair_0299_normalised
+chair/train/chair_0210_normalised
+chair/train/chair_0851_normalised
+chair/train/chair_0005_normalised
+chair/train/chair_0522_normalised
+chair/train/chair_0140_normalised
+chair/train/chair_0606_normalised
+chair/train/chair_0499_normalised
+chair/train/chair_0253_normalised
+chair/train/chair_0783_normalised
+chair/train/chair_0401_normalised
+chair/train/chair_0858_normalised
+chair/train/chair_0533_normalised
+chair/train/chair_0790_normalised
+chair/train/chair_0410_normalised
+chair/train/chair_0421_normalised
+chair/train/chair_0620_normalised
+chair/train/chair_0376_normalised
+chair/train/chair_0829_normalised
+chair/train/chair_0574_normalised
+chair/train/chair_0687_normalised
+chair/train/chair_0702_normalised
+chair/train/chair_0876_normalised
+chair/train/chair_0437_normalised
+chair/train/chair_0348_normalised
+chair/train/chair_0338_normalised
+chair/train/chair_0875_normalised
+chair/train/chair_0297_normalised
+chair/train/chair_0818_normalised
+chair/train/chair_0781_normalised
+chair/train/chair_0223_normalised
+chair/train/chair_0354_normalised
+chair/train/chair_0590_normalised
+chair/train/chair_0081_normalised
+chair/train/chair_0663_normalised
+chair/train/chair_0291_normalised
+chair/train/chair_0519_normalised
+chair/train/chair_0105_normalised
+chair/train/chair_0615_normalised
+chair/train/chair_0306_normalised
+chair/train/chair_0383_normalised
+chair/train/chair_0251_normalised
+chair/train/chair_0678_normalised
+chair/train/chair_0597_normalised
+chair/train/chair_0246_normalised
+chair/train/chair_0072_normalised
+chair/train/chair_0144_normalised
+chair/train/chair_0368_normalised
+chair/train/chair_0184_normalised
+chair/train/chair_0637_normalised
+chair/train/chair_0676_normalised
+chair/train/chair_0885_normalised
+chair/train/chair_0459_normalised
+chair/train/chair_0116_normalised
+chair/train/chair_0387_normalised
+chair/train/chair_0043_normalised
+chair/train/chair_0259_normalised
+chair/train/chair_0392_normalised
+chair/train/chair_0016_normalised
+chair/train/chair_0337_normalised
+chair/train/chair_0037_normalised
+chair/train/chair_0252_normalised
+chair/train/chair_0349_normalised
+chair/train/chair_0400_normalised
+chair/train/chair_0248_normalised
+chair/train/chair_0377_normalised
+chair/train/chair_0815_normalised
+chair/train/chair_0616_normalised
+chair/train/chair_0877_normalised
+chair/train/chair_0611_normalised
+chair/train/chair_0386_normalised
+chair/train/chair_0504_normalised
+chair/train/chair_0752_normalised
+chair/train/chair_0588_normalised
+chair/train/chair_0744_normalised
+chair/train/chair_0290_normalised
+chair/train/chair_0225_normalised
+chair/train/chair_0514_normalised
+chair/train/chair_0190_normalised
+chair/train/chair_0142_normalised
+chair/train/chair_0431_normalised
+chair/train/chair_0403_normalised
+chair/train/chair_0260_normalised
+chair/train/chair_0078_normalised
+chair/train/chair_0517_normalised
+chair/train/chair_0090_normalised
+chair/train/chair_0433_normalised
+chair/train/chair_0340_normalised
+chair/train/chair_0494_normalised
+chair/train/chair_0351_normalised
+chair/train/chair_0233_normalised
+chair/train/chair_0496_normalised
+chair/train/chair_0557_normalised
+chair/train/chair_0263_normalised
+chair/train/chair_0776_normalised
+chair/train/chair_0329_normalised
+chair/train/chair_0028_normalised
+chair/train/chair_0469_normalised
+chair/train/chair_0366_normalised
+chair/train/chair_0671_normalised
+chair/train/chair_0326_normalised
+chair/train/chair_0438_normalised
+chair/train/chair_0464_normalised
+chair/train/chair_0228_normalised
+chair/train/chair_0139_normalised
+chair/train/chair_0889_normalised
+chair/train/chair_0278_normalised
+chair/train/chair_0288_normalised
+chair/train/chair_0038_normalised
+chair/train/chair_0415_normalised
+chair/train/chair_0577_normalised
+chair/train/chair_0698_normalised
+chair/train/chair_0235_normalised
+chair/train/chair_0451_normalised
+chair/train/chair_0009_normalised
+chair/train/chair_0457_normalised
+chair/train/chair_0662_normalised
+chair/train/chair_0612_normalised
+chair/train/chair_0175_normalised
+chair/train/chair_0372_normalised
+chair/train/chair_0473_normalised
+chair/train/chair_0644_normalised
+chair/train/chair_0232_normalised
+chair/train/chair_0746_normalised
+chair/train/chair_0632_normalised
+chair/train/chair_0399_normalised
+chair/train/chair_0093_normalised
+chair/train/chair_0520_normalised
+chair/train/chair_0374_normalised
+chair/train/chair_0394_normalised
+chair/train/chair_0855_normalised
+chair/train/chair_0646_normalised
+chair/train/chair_0336_normalised
+chair/train/chair_0388_normalised
+chair/train/chair_0723_normalised
+chair/train/chair_0466_normalised
+chair/train/chair_0019_normalised
+chair/train/chair_0262_normalised
+chair/train/chair_0391_normalised
+chair/train/chair_0532_normalised
+chair/train/chair_0327_normalised
+chair/train/chair_0852_normalised
+chair/train/chair_0529_normalised
+chair/train/chair_0312_normalised
+chair/train/chair_0255_normalised
+chair/train/chair_0243_normalised
+chair/train/chair_0539_normalised
+chair/train/chair_0240_normalised
+chair/train/chair_0808_normalised
+chair/train/chair_0649_normalised
+chair/train/chair_0848_normalised
+chair/train/chair_0020_normalised
+chair/train/chair_0390_normalised
+chair/train/chair_0279_normalised
+chair/train/chair_0046_normalised
+chair/train/chair_0426_normalised
+chair/train/chair_0523_normalised
+chair/train/chair_0180_normalised
+chair/train/chair_0668_normalised
+chair/train/chair_0607_normalised
+chair/train/chair_0430_normalised
+chair/train/chair_0754_normalised
+chair/train/chair_0217_normalised
+chair/train/chair_0788_normalised
+chair/train/chair_0721_normalised
+chair/train/chair_0015_normalised
+chair/train/chair_0817_normalised
+chair/train/chair_0580_normalised
+chair/train/chair_0083_normalised
+chair/train/chair_0186_normalised
+chair/train/chair_0824_normalised
+chair/train/chair_0560_normalised
+chair/train/chair_0044_normalised
+chair/train/chair_0063_normalised
+chair/train/chair_0353_normalised
+chair/train/chair_0481_normalised
+chair/train/chair_0109_normalised
+chair/train/chair_0724_normalised
+chair/train/chair_0095_normalised
+chair/train/chair_0535_normalised
+chair/train/chair_0578_normalised
+chair/train/chair_0027_normalised
+chair/train/chair_0280_normalised
+chair/train/chair_0879_normalised
+chair/train/chair_0358_normalised
+chair/train/chair_0755_normalised
+chair/train/chair_0172_normalised
+chair/train/chair_0042_normalised
+chair/train/chair_0029_normalised
+chair/train/chair_0866_normalised
+chair/train/chair_0684_normalised
+chair/train/chair_0341_normalised
+chair/train/chair_0218_normalised
+chair/train/chair_0103_normalised
+chair/train/chair_0170_normalised
+chair/train/chair_0575_normalised
+chair/train/chair_0156_normalised
+chair/train/chair_0443_normalised
+chair/train/chair_0558_normalised
+chair/train/chair_0622_normalised
+chair/train/chair_0836_normalised
+chair/train/chair_0694_normalised
+chair/train/chair_0826_normalised
+chair/train/chair_0030_normalised
+chair/train/chair_0655_normalised
+chair/train/chair_0604_normalised
+chair/train/chair_0308_normalised
+chair/train/chair_0073_normalised
+chair/train/chair_0205_normalised
+chair/train/chair_0264_normalised
+chair/train/chair_0617_normalised
+chair/train/chair_0465_normalised
+chair/train/chair_0062_normalised
+chair/train/chair_0396_normalised
+chair/train/chair_0061_normalised
+chair/train/chair_0714_normalised
+chair/train/chair_0820_normalised
+chair/train/chair_0477_normalised
+chair/train/chair_0869_normalised
+chair/train/chair_0123_normalised
+chair/train/chair_0508_normalised
+chair/train/chair_0133_normalised
+chair/train/chair_0516_normalised
+chair/train/chair_0641_normalised
+chair/train/chair_0314_normalised
+chair/train/chair_0750_normalised
+chair/train/chair_0511_normalised
+chair/train/chair_0127_normalised
+chair/train/chair_0472_normalised
+chair/train/chair_0730_normalised
+chair/train/chair_0159_normalised
+chair/train/chair_0101_normalised
+chair/train/chair_0441_normalised
+chair/train/chair_0013_normalised
+chair/train/chair_0589_normalised
+chair/train/chair_0685_normalised
+chair/train/chair_0241_normalised
+chair/train/chair_0222_normalised
+chair/train/chair_0849_normalised
+chair/train/chair_0732_normalised
+chair/train/chair_0796_normalised
+chair/train/chair_0108_normalised
+chair/train/chair_0151_normalised
+chair/train/chair_0507_normalised
+chair/train/chair_0488_normalised
+chair/train/chair_0238_normalised
+chair/train/chair_0347_normalised
+chair/train/chair_0741_normalised
+chair/train/chair_0525_normalised
+chair/train/chair_0201_normalised
+chair/train/chair_0160_normalised
+chair/train/chair_0198_normalised
+chair/train/chair_0206_normalised
+chair/train/chair_0084_normalised
+chair/train/chair_0435_normalised
+chair/train/chair_0429_normalised
+chair/train/chair_0302_normalised
+chair/train/chair_0352_normalised
+chair/train/chair_0463_normalised
+chair/train/chair_0113_normalised
+chair/train/chair_0798_normalised
+chair/train/chair_0501_normalised
+chair/train/chair_0273_normalised
+chair/train/chair_0024_normalised
+chair/train/chair_0881_normalised
+chair/train/chair_0018_normalised
+chair/train/chair_0628_normalised
+chair/train/chair_0659_normalised
+chair/train/chair_0689_normalised
+chair/train/chair_0282_normalised
+chair/train/chair_0770_normalised
+chair/train/chair_0461_normalised
+chair/train/chair_0242_normalised
+chair/train/chair_0303_normalised
+chair/train/chair_0624_normalised
+chair/train/chair_0564_normalised
+chair/train/chair_0556_normalised
+chair/train/chair_0106_normalised
+chair/train/chair_0026_normalised
+chair/train/chair_0735_normalised
+chair/train/chair_0695_normalised
+chair/train/chair_0470_normalised
+chair/train/chair_0174_normalised
+chair/train/chair_0865_normalised
+chair/train/chair_0060_normalised
+chair/train/chair_0035_normalised
+chair/train/chair_0234_normalised
+chair/train/chair_0489_normalised
+chair/train/chair_0672_normalised
+chair/train/chair_0313_normalised
+chair/train/chair_0171_normalised
+chair/train/chair_0008_normalised
+chair/train/chair_0661_normalised
+chair/train/chair_0583_normalised
+chair/train/chair_0086_normalised
+chair/train/chair_0295_normalised
+chair/train/chair_0162_normalised
+chair/train/chair_0152_normalised
+chair/train/chair_0271_normalised
+chair/train/chair_0791_normalised
+chair/train/chair_0373_normalised
+chair/train/chair_0397_normalised
+chair/train/chair_0424_normalised
+chair/train/chair_0417_normalised
+chair/train/chair_0854_normalised
+chair/train/chair_0636_normalised
+chair/train/chair_0513_normalised
+chair/train/chair_0074_normalised
+chair/train/chair_0047_normalised
+chair/train/chair_0667_normalised
+chair/train/chair_0014_normalised
+chair/train/chair_0051_normalised
+chair/train/chair_0587_normalised
+chair/train/chair_0811_normalised
+chair/train/chair_0479_normalised
+chair/train/chair_0153_normalised
+chair/train/chair_0549_normalised
+chair/train/chair_0467_normalised
+chair/train/chair_0229_normalised
+chair/train/chair_0143_normalised
+chair/train/chair_0448_normalised
+chair/train/chair_0001_normalised
+chair/train/chair_0169_normalised
+chair/train/chair_0647_normalised
+chair/train/chair_0087_normalised
+chair/train/chair_0454_normalised
+chair/train/chair_0316_normalised
+chair/train/chair_0309_normalised
+chair/train/chair_0654_normalised
+chair/train/chair_0226_normalised
+chair/train/chair_0070_normalised
+chair/train/chair_0056_normalised
+chair/train/chair_0258_normalised
+chair/train/chair_0195_normalised
+chair/train/chair_0591_normalised
+chair/train/chair_0146_normalised
+chair/train/chair_0183_normalised
+chair/train/chair_0318_normalised
+chair/train/chair_0807_normalised
+chair/train/chair_0453_normalised
+chair/train/chair_0091_normalised
+chair/train/chair_0256_normalised
+chair/train/chair_0692_normalised
+chair/train/chair_0526_normalised
+chair/train/chair_0360_normalised
+chair/train/chair_0840_normalised
+chair/train/chair_0269_normalised
+chair/train/chair_0017_normalised
+chair/train/chair_0844_normalised
+chair/train/chair_0482_normalised
+chair/train/chair_0214_normalised
+chair/train/chair_0773_normalised
+chair/train/chair_0179_normalised
+chair/train/chair_0704_normalised
+chair/train/chair_0220_normalised
+chair/train/chair_0131_normalised
+chair/train/chair_0567_normalised
+chair/train/chair_0419_normalised
+chair/train/chair_0398_normalised
+chair/train/chair_0625_normalised
+chair/train/chair_0298_normalised
+chair/train/chair_0384_normalised
+chair/train/chair_0203_normalised
+chair/train/chair_0458_normalised
+chair/train/chair_0089_normalised
+chair/train/chair_0806_normalised
+chair/train/chair_0503_normalised
+chair/train/chair_0274_normalised
+chair/train/chair_0719_normalised
+chair/train/chair_0793_normalised
+chair/train/chair_0289_normalised
+chair/train/chair_0728_normalised
+chair/train/chair_0276_normalised
+chair/train/chair_0837_normalised
+chair/train/chair_0069_normalised
+chair/train/chair_0609_normalised
+chair/train/chair_0452_normalised
+chair/train/chair_0076_normalised
+chair/train/chair_0800_normalised
+chair/train/chair_0192_normalised
+chair/train/chair_0187_normalised
+chair/train/chair_0096_normalised
+chair/train/chair_0592_normalised
+chair/train/chair_0593_normalised
+chair/train/chair_0082_normalised
+chair/train/chair_0231_normalised
+chair/train/chair_0199_normalised
+chair/train/chair_0371_normalised
+chair/train/chair_0792_normalised
+chair/train/chair_0335_normalised
+chair/train/chair_0771_normalised
+chair/train/chair_0285_normalised
+chair/train/chair_0827_normalised
+chair/train/chair_0713_normalised
+chair/train/chair_0102_normalised
+chair/train/chair_0521_normalised
+chair/train/chair_0639_normalised
+chair/train/chair_0870_normalised
+chair/train/chair_0500_normalised
+chair/train/chair_0007_normalised
+chair/train/chair_0816_normalised
+chair/train/chair_0884_normalised
+chair/train/chair_0710_normalised
+chair/train/chair_0370_normalised
+chair/train/chair_0460_normalised
+chair/train/chair_0766_normalised
+chair/train/chair_0789_normalised
+chair/train/chair_0653_normalised
+chair/train/chair_0058_normalised
+chair/train/chair_0708_normalised
+chair/train/chair_0471_normalised
+chair/train/chair_0440_normalised
+chair/train/chair_0334_normalised
+chair/train/chair_0355_normalised
+chair/train/chair_0425_normalised
+chair/train/chair_0427_normalised
+chair/train/chair_0666_normalised
+chair/train/chair_0545_normalised
+chair/train/chair_0733_normalised
+chair/train/chair_0322_normalised
+chair/train/chair_0332_normalised
+chair/train/chair_0534_normalised
+chair/train/chair_0088_normalised
+chair/train/chair_0600_normalised
+chair/train/chair_0033_normalised
+chair/train/chair_0822_normalised
+chair/train/chair_0640_normalised
+chair/train/chair_0565_normalised
+chair/train/chair_0795_normalised
+chair/train/chair_0204_normalised
+chair/train/chair_0227_normalised
+chair/train/chair_0328_normalised
+chair/train/chair_0527_normalised
+chair/train/chair_0167_normalised
+chair/train/chair_0841_normalised
+chair/train/chair_0857_normalised
+chair/train/chair_0320_normalised
+chair/train/chair_0157_normalised
+chair/train/chair_0847_normalised
+chair/train/chair_0310_normalised
+chair/train/chair_0307_normalised
+chair/train/chair_0495_normalised
+chair/train/chair_0883_normalised
+chair/train/chair_0237_normalised
+chair/train/chair_0068_normalised
+chair/train/chair_0748_normalised
+chair/train/chair_0097_normalised
+chair/train/chair_0012_normalised
+chair/train/chair_0550_normalised
+chair/train/chair_0882_normalised
+chair/train/chair_0509_normalised
+chair/train/chair_0054_normalised
+chair/train/chair_0601_normalised
+chair/train/chair_0546_normalised
+chair/train/chair_0486_normalised
+chair/train/chair_0753_normalised
+chair/train/chair_0100_normalised
+chair/train/chair_0701_normalised
+chair/train/chair_0420_normalised
+chair/train/chair_0305_normalised
+chair/train/chair_0809_normalised
+chair/train/chair_0128_normalised
+chair/train/chair_0277_normalised
+chair/train/chair_0480_normalised
+chair/train/chair_0779_normalised
+chair/train/chair_0468_normalised
+chair/train/chair_0518_normalised
+chair/train/chair_0369_normalised
+chair/train/chair_0768_normalised
+chair/train/chair_0738_normalised
+chair/train/chair_0098_normalised
+chair/train/chair_0135_normalised
+chair/train/chair_0691_normalised
+chair/train/chair_0445_normalised
+chair/train/chair_0212_normalised
+chair/train/chair_0561_normalised
+chair/train/chair_0734_normalised
+chair/train/chair_0104_normalised
+chair/train/chair_0404_normalised
+chair/train/chair_0803_normalised
+chair/train/chair_0439_normalised
+chair/train/chair_0812_normalised
+chair/train/chair_0365_normalised
+chair/train/chair_0860_normalised
+chair/train/chair_0774_normalised
+chair/train/chair_0436_normalised
+chair/train/chair_0036_normalised
+chair/train/chair_0618_normalised
+chair/train/chair_0745_normalised
+chair/train/chair_0207_normalised
+chair/train/chair_0638_normalised
+chair/train/chair_0342_normalised
+chair/train/chair_0787_normalised
+chair/train/chair_0887_normalised
+chair/train/chair_0506_normalised
+chair/train/chair_0188_normalised
+chair/train/chair_0447_normalised
+chair/train/chair_0270_normalised
+chair/train/chair_0756_normalised
+chair/train/chair_0677_normalised
+chair/train/chair_0408_normalised
+chair/train/chair_0208_normalised
+chair/train/chair_0442_normalised
+chair/train/chair_0835_normalised
+chair/train/chair_0598_normalised
+chair/train/chair_0130_normalised
+chair/train/chair_0196_normalised
+chair/train/chair_0221_normalised
+chair/train/chair_0155_normalised
+chair/train/chair_0287_normalised
+chair/train/chair_0797_normalised
+chair/train/chair_0572_normalised
+chair/train/chair_0003_normalised
+chair/train/chair_0658_normalised
+chair/train/chair_0711_normalised
+chair/train/chair_0548_normalised
+chair/train/chair_0634_normalised
+chair/train/chair_0483_normalised
+chair/train/chair_0823_normalised
+chair/train/chair_0414_normalised
+chair/train/chair_0552_normalised
+chair/train/chair_0568_normalised
+chair/train/chair_0530_normalised
+chair/train/chair_0541_normalised
+chair/train/chair_0751_normalised
+chair/train/chair_0474_normalised
+chair/train/chair_0832_normalised
+chair/train/chair_0434_normalised
+chair/train/chair_0147_normalised
+chair/train/chair_0720_normalised
+chair/train/chair_0149_normalised
+chair/train/chair_0510_normalised
+chair/train/chair_0177_normalised
+chair/train/chair_0537_normalised
+chair/train/chair_0428_normalised
+chair/train/chair_0121_normalised
+chair/train/chair_0163_normalised
+chair/train/chair_0543_normalised
+chair/train/chair_0886_normalised
+chair/train/chair_0512_normalised
+chair/train/chair_0838_normalised
+chair/train/chair_0758_normalised
+chair/train/chair_0762_normalised
+chair/train/chair_0742_normalised
+chair/train/chair_0048_normalised
+chair/train/chair_0266_normalised
+chair/train/chair_0554_normalised
+chair/train/chair_0178_normalised
+chair/train/chair_0344_normalised
+chair/train/chair_0379_normalised
+chair/train/chair_0164_normalised
+chair/train/chair_0551_normalised
+chair/train/chair_0039_normalised
+chair/train/chair_0491_normalised
+chair/train/chair_0717_normalised
+chair/train/chair_0418_normalised
+chair/train/chair_0834_normalised
+chair/train/chair_0880_normalised
+chair/train/chair_0819_normalised
+chair/train/chair_0378_normalised
+chair/train/chair_0562_normalised
+chair/train/chair_0114_normalised
+chair/train/chair_0784_normalised
+chair/train/chair_0528_normalised
+chair/train/chair_0120_normalised
+chair/train/chair_0767_normalised
+chair/train/chair_0077_normalised
+chair/train/chair_0216_normalised
+chair/train/chair_0006_normalised
+chair/train/chair_0333_normalised
+chair/train/chair_0536_normalised
+chair/train/chair_0524_normalised
+chair/train/chair_0323_normalised
+chair/train/chair_0361_normalised
+chair/train/chair_0193_normalised
+chair/train/chair_0122_normalised
+chair/train/chair_0079_normalised
+chair/train/chair_0065_normalised
+chair/train/chair_0700_normalised
+chair/train/chair_0669_normalised
+chair/train/chair_0760_normalised
+chair/train/chair_0802_normalised
+chair/train/chair_0141_normalised
+chair/train/chair_0389_normalised
+chair/train/chair_0595_normalised
+chair/train/chair_0265_normalised
+chair/train/chair_0555_normalised
+chair/train/chair_0747_normalised
+chair/train/chair_0825_normalised
+chair/train/chair_0423_normalised
+chair/train/chair_0244_normalised
+chair/train/chair_0581_normalised
+chair/train/chair_0416_normalised
+chair/train/chair_0782_normalised
+chair/train/chair_0656_normalised
+chair/train/chair_0004_normalised
+chair/train/chair_0821_normalised
+chair/train/chair_0861_normalised
+chair/train/chair_0189_normalised
+chair/train/chair_0540_normalised
+chair/train/chair_0346_normalised
+chair/train/chair_0045_normalised
+chair/train/chair_0868_normalised
+chair/train/chair_0475_normalised
+chair/train/chair_0213_normalised
+chair/train/chair_0124_normalised
+chair/train/chair_0531_normalised
+chair/train/chair_0080_normalised
+chair/train/chair_0422_normalised
+chair/train/chair_0066_normalised
+chair/train/chair_0191_normalised
+chair/train/chair_0853_normalised
+chair/train/chair_0703_normalised
+chair/train/chair_0722_normalised
+chair/train/chair_0092_normalised
+chair/train/chair_0161_normalised
+chair/train/chair_0117_normalised
+chair/train/chair_0757_normalised
+chair/train/chair_0134_normalised
+chair/train/chair_0635_normalised
+chair/train/chair_0502_normalised
+chair/train/chair_0450_normalised
+chair/train/chair_0680_normalised
+chair/train/chair_0411_normalised
+chair/train/chair_0067_normalised
+chair/train/chair_0584_normalised
+chair/train/chair_0432_normalised
+chair/train/chair_0573_normalised
+chair/train/chair_0094_normalised
+chair/train/chair_0842_normalised
+chair/train/chair_0737_normalised
+chair/train/chair_0367_normalised
+chair/train/chair_0158_normalised
+chair/train/chair_0296_normalised
+chair/train/chair_0786_normalised
+chair/train/chair_0538_normalised
+chair/train/chair_0449_normalised
+chair/train/chair_0219_normalised
+chair/train/chair_0645_normalised
+chair/train/chair_0707_normalised
+chair/train/chair_0743_normalised
+chair/train/chair_0022_normalised
+chair/train/chair_0268_normalised
+chair/train/chair_0686_normalised
+chair/train/chair_0176_normalised
+chair/train/chair_0239_normalised
+chair/train/chair_0110_normalised
+chair/train/chair_0633_normalised
+chair/train/chair_0490_normalised
+chair/train/chair_0126_normalised
+chair/train/chair_0874_normalised
+chair/train/chair_0780_normalised
+chair/train/chair_0785_normalised
+chair/train/chair_0563_normalised
+chair/train/chair_0833_normalised
+chair/train/chair_0614_normalised
+chair/train/chair_0505_normalised
+chair/train/chair_0843_normalised
+chair/train/chair_0648_normalised
+chair/train/chair_0331_normalised
+chair/train/chair_0215_normalised
+chair/train/chair_0544_normalised
+chair/train/chair_0025_normalised
+chair/train/chair_0856_normalised
+chair/train/chair_0049_normalised
+chair/train/chair_0631_normalised
+chair/train/chair_0119_normalised
+chair/train/chair_0888_normalised
+chair/train/chair_0382_normalised
+chair/train/chair_0395_normalised
+chair/train/chair_0317_normalised
+chair/train/chair_0846_normalised
+chair/train/chair_0569_normalised
+chair/train/chair_0726_normalised
+chair/train/chair_0444_normalised
+chair/train/chair_0696_normalised
+chair/train/chair_0727_normalised
+chair/train/chair_0810_normalised
+chair/train/chair_0660_normalised
+chair/train/chair_0381_normalised
+chair/train/chair_0492_normalised
+chair/train/chair_0002_normalised
+chair/train/chair_0799_normalised
+chair/train/chair_0173_normalised
+chair/train/chair_0319_normalised
+chair/train/chair_0409_normalised
+chair/train/chair_0357_normalised
+chair/train/chair_0621_normalised
+chair/train/chair_0830_normalised
+chair/train/chair_0679_normalised
+chair/train/chair_0740_normalised
+chair/train/chair_0202_normalised
+chair/train/chair_0111_normalised
+chair/train/chair_0284_normalised
+chair/train/chair_0129_normalised
+chair/train/chair_0032_normalised
+chair/train/chair_0627_normalised
+chair/train/chair_0498_normalised
+chair/train/chair_0281_normalised
+chair/train/chair_0497_normalised
+chair/train/chair_0630_normalised
+chair/train/chair_0311_normalised
+chair/train/chair_0197_normalised
+chair/train/chair_0878_normalised
+chair/train/chair_0665_normalised
+chair/train/chair_0115_normalised
+chair/train/chair_0688_normalised
+chair/train/chair_0375_normalised
+chair/train/chair_0053_normalised
+chair/train/chair_0055_normalised
+chair/train/chair_0286_normalised
+chair/train/chair_0585_normalised
+chair/train/chair_0455_normalised
+chair/train/chair_0828_normalised
+chair/train/chair_0112_normalised
+chair/train/chair_0716_normalised
+chair/train/chair_0670_normalised
+chair/train/chair_0247_normalised
+chair/train/chair_0254_normalised
+chair/train/chair_0769_normalised
+chair/train/chair_0324_normalised
+chair/train/chair_0873_normalised
+chair/train/chair_0775_normalised
+chair/train/chair_0586_normalised
+chair/train/chair_0619_normalised
+chair/train/chair_0559_normalised
+chair/train/chair_0863_normalised
+chair/train/chair_0293_normalised
+chair/train/chair_0613_normalised
+chair/train/chair_0831_normalised
+chair/train/chair_0057_normalised
+chair/train/chair_0736_normalised
+chair/train/chair_0011_normalised
+chair/train/chair_0759_normalised
+chair/train/chair_0674_normalised
+chair/train/chair_0642_normalised
+chair/train/chair_0813_normalised
+chair/train/chair_0034_normalised
+chair/train/chair_0211_normalised
+chair/train/chair_0570_normalised
+chair/train/chair_0118_normalised
+chair/train/chair_0675_normalised
+chair/train/chair_0683_normalised
+chair/train/chair_0553_normalised
+chair/train/chair_0085_normalised
+chair/train/chair_0462_normalised
+chair/train/chair_0699_normalised
+chair/train/chair_0138_normalised
+chair/train/chair_0245_normalised
+chair/train/chair_0052_normalised
+chair/train/chair_0712_normalised
+chair/train/chair_0697_normalised
+chair/train/chair_0867_normalised
+chair/train/chair_0596_normalised
+chair/train/chair_0485_normalised
+chair/train/chair_0257_normalised
+chair/train/chair_0801_normalised
+chair/train/chair_0693_normalised
+chair/train/chair_0023_normalised
+chair/train/chair_0731_normalised
+chair/train/chair_0566_normalised
+chair/train/chair_0194_normalised
+chair/train/chair_0643_normalised
+chair/train/chair_0325_normalised
+chair/train/chair_0582_normalised
+chair/train/chair_0814_normalised
+chair/train/chair_0657_normalised
+chair/train/chair_0075_normalised
+chair/train/chair_0839_normalised
+chair/train/chair_0652_normalised
+chair/train/chair_0872_normalised
+chair/train/chair_0605_normalised
+chair/train/chair_0706_normalised
+chair/train/chair_0739_normalised
+chair/train/chair_0343_normalised
+chair/train/chair_0542_normalised
+chair/train/chair_0402_normalised
+chair/train/chair_0764_normalised
+chair/train/chair_0339_normalised
+chair/train/chair_0267_normalised
+chair/train/chair_0603_normalised
+chair/train/chair_0547_normalised
+chair/train/chair_0608_normalised
+chair/train/chair_0040_normalised
+chair/train/chair_0690_normalised
+chair/train/chair_0132_normalised
+chair/train/chair_0345_normalised
+chair/train/chair_0107_normalised
+chair/train/chair_0405_normalised
+chair/train/chair_0064_normalised
+chair/train/chair_0673_normalised
+chair/train/chair_0749_normalised
+chair/train/chair_0794_normalised
+chair/train/chair_0871_normalised
+chair/train/chair_0484_normalised
+chair/train/chair_0413_normalised
+chair/train/chair_0778_normalised
+chair/train/chair_0385_normalised
+chair/train/chair_0493_normalised
+chair/train/chair_0571_normalised
+chair/train/chair_0315_normalised
+chair/train/chair_0041_normalised
+chair/train/chair_0446_normalised
+chair/train/chair_0350_normalised
+chair/train/chair_0145_normalised
+chair/train/chair_0705_normalised
+chair/train/chair_0154_normalised
+chair/train/chair_0363_normalised
+chair/train/chair_0845_normalised
+chair/train/chair_0230_normalised
+chair/train/chair_0137_normalised
+chair/train/chair_0200_normalised
+chair/train/chair_0359_normalised
+chair/train/chair_0478_normalised
+chair/train/chair_0456_normalised
+chair/train/chair_0182_normalised
+chair/train/chair_0626_normalised
+chair/train/chair_0576_normalised
+chair/train/chair_0380_normalised
+chair/train/chair_0772_normalised
+chair/train/chair_0250_normalised
+chair/train/chair_0487_normalised
+chair/train/chair_0715_normalised
+chair/train/chair_0224_normalised
+chair/train/chair_0862_normalised
+chair/train/chair_0300_normalised
+chair/train/chair_0071_normalised
+chair/train/chair_0864_normalised
+chair/train/chair_0031_normalised
+chair/train/chair_0599_normalised
+chair/train/chair_0292_normalised
+chair/train/chair_0805_normalised
+chair/train/chair_0729_normalised
+chair/train/chair_0283_normalised
+chair/train/chair_0623_normalised
+chair/train/chair_0166_normalised
+chair/train/chair_0412_normalised
+chair/train/chair_0763_normalised
+chair/train/chair_0515_normalised
+chair/train/chair_0125_normalised
+chair/train/chair_0249_normalised
+chair/train/chair_0709_normalised
+chair/train/chair_0050_normalised
+chair/train/chair_0664_normalised
+chair/train/chair_0168_normalised
+chair/train/chair_0725_normalised
+chair/train/chair_0804_normalised
+chair/train/chair_0364_normalised
+chair/train/chair_0681_normalised
+chair/train/chair_0021_normalised
+chair/train/chair_0010_normalised
+chair/train/chair_0765_normalised
+chair/train/chair_0610_normalised
+chair/train/chair_0165_normalised
+chair/train/chair_0148_normalised
+chair/train/chair_0356_normalised
+chair/train/chair_0181_normalised
+chair/train/chair_0682_normalised
+chair/train/chair_0294_normalised
+chair/train/chair_0059_normalised
+chair/train/chair_0236_normalised
+chair/train/chair_0275_normalised
+chair/train/chair_0272_normalised
+chair/train/chair_0330_normalised
+chair/train/chair_0304_normalised
+chair/train/chair_0718_normalised
+chair/train/chair_0136_normalised
+chair/train/chair_0301_normalised
+chair/train/chair_0407_normalised
+chair/train/chair_0406_normalised
+chair/train/chair_0476_normalised
+chair/train/chair_0650_normalised
+chair/train/chair_0321_normalised
+chair/train/chair_0777_normalised
+chair/test/chair_0916_normalised
+chair/test/chair_0900_normalised
+chair/test/chair_0984_normalised
+chair/test/chair_0983_normalised
+chair/test/chair_0937_normalised
+chair/test/chair_0964_normalised
+chair/test/chair_0914_normalised
+chair/test/chair_0891_normalised
+chair/test/chair_0976_normalised
+chair/test/chair_0910_normalised
+chair/test/chair_0924_normalised
+chair/test/chair_0929_normalised
+chair/test/chair_0930_normalised
+chair/test/chair_0949_normalised
+chair/test/chair_0927_normalised
+chair/test/chair_0911_normalised
+chair/test/chair_0973_normalised
+chair/test/chair_0987_normalised
+chair/test/chair_0915_normalised
+chair/test/chair_0902_normalised
+chair/test/chair_0942_normalised
+chair/test/chair_0953_normalised
+chair/test/chair_0892_normalised
+chair/test/chair_0969_normalised
+chair/test/chair_0962_normalised
+chair/test/chair_0977_normalised
+chair/test/chair_0982_normalised
+chair/test/chair_0901_normalised
+chair/test/chair_0960_normalised
+chair/test/chair_0957_normalised
+chair/test/chair_0978_normalised
+chair/test/chair_0904_normalised
+chair/test/chair_0968_normalised
+chair/test/chair_0913_normalised
+chair/test/chair_0938_normalised
+chair/test/chair_0945_normalised
+chair/test/chair_0925_normalised
+chair/test/chair_0931_normalised
+chair/test/chair_0988_normalised
+chair/test/chair_0932_normalised
+chair/test/chair_0897_normalised
+chair/test/chair_0941_normalised
+chair/test/chair_0975_normalised
+chair/test/chair_0952_normalised
+chair/test/chair_0919_normalised
+chair/test/chair_0922_normalised
+chair/test/chair_0917_normalised
+chair/test/chair_0940_normalised
+chair/test/chair_0943_normalised
+chair/test/chair_0906_normalised
+chair/test/chair_0989_normalised
+chair/test/chair_0944_normalised
+chair/test/chair_0903_normalised
+chair/test/chair_0979_normalised
+chair/test/chair_0939_normalised
+chair/test/chair_0933_normalised
+chair/test/chair_0980_normalised
+chair/test/chair_0967_normalised
+chair/test/chair_0965_normalised
+chair/test/chair_0986_normalised
+chair/test/chair_0934_normalised
+chair/test/chair_0890_normalised
+chair/test/chair_0955_normalised
+chair/test/chair_0966_normalised
+chair/test/chair_0895_normalised
+chair/test/chair_0899_normalised
+chair/test/chair_0896_normalised
+chair/test/chair_0981_normalised
+chair/test/chair_0971_normalised
+chair/test/chair_0926_normalised
+chair/test/chair_0909_normalised
+chair/test/chair_0970_normalised
+chair/test/chair_0920_normalised
+chair/test/chair_0961_normalised
+chair/test/chair_0921_normalised
+chair/test/chair_0948_normalised
+chair/test/chair_0908_normalised
+chair/test/chair_0898_normalised
+chair/test/chair_0905_normalised
+chair/test/chair_0918_normalised
+chair/test/chair_0985_normalised
+chair/test/chair_0972_normalised
+chair/test/chair_0935_normalised
+chair/test/chair_0958_normalised
+chair/test/chair_0946_normalised
+chair/test/chair_0907_normalised
+chair/test/chair_0936_normalised
+chair/test/chair_0923_normalised
+chair/test/chair_0959_normalised
+chair/test/chair_0963_normalised
+chair/test/chair_0893_normalised
+chair/test/chair_0947_normalised
+chair/test/chair_0954_normalised
+chair/test/chair_0956_normalised
+chair/test/chair_0951_normalised
+chair/test/chair_0912_normalised
+chair/test/chair_0950_normalised
+chair/test/chair_0974_normalised
+chair/test/chair_0928_normalised
+chair/test/chair_0894_normalised
+flower_pot/train/flower_pot_0089_normalised
+flower_pot/train/flower_pot_0101_normalised
+flower_pot/train/flower_pot_0030_normalised
+flower_pot/train/flower_pot_0145_normalised
+flower_pot/train/flower_pot_0116_normalised
+flower_pot/train/flower_pot_0091_normalised
+flower_pot/train/flower_pot_0029_normalised
+flower_pot/train/flower_pot_0141_normalised
+flower_pot/train/flower_pot_0121_normalised
+flower_pot/train/flower_pot_0069_normalised
+flower_pot/train/flower_pot_0045_normalised
+flower_pot/train/flower_pot_0126_normalised
+flower_pot/train/flower_pot_0084_normalised
+flower_pot/train/flower_pot_0088_normalised
+flower_pot/train/flower_pot_0100_normalised
+flower_pot/train/flower_pot_0093_normalised
+flower_pot/train/flower_pot_0010_normalised
+flower_pot/train/flower_pot_0044_normalised
+flower_pot/train/flower_pot_0039_normalised
+flower_pot/train/flower_pot_0122_normalised
+flower_pot/train/flower_pot_0078_normalised
+flower_pot/train/flower_pot_0142_normalised
+flower_pot/train/flower_pot_0017_normalised
+flower_pot/train/flower_pot_0066_normalised
+flower_pot/train/flower_pot_0071_normalised
+flower_pot/train/flower_pot_0132_normalised
+flower_pot/train/flower_pot_0027_normalised
+flower_pot/train/flower_pot_0092_normalised
+flower_pot/train/flower_pot_0035_normalised
+flower_pot/train/flower_pot_0009_normalised
+flower_pot/train/flower_pot_0137_normalised
+flower_pot/train/flower_pot_0083_normalised
+flower_pot/train/flower_pot_0001_normalised
+flower_pot/train/flower_pot_0149_normalised
+flower_pot/train/flower_pot_0085_normalised
+flower_pot/train/flower_pot_0086_normalised
+flower_pot/train/flower_pot_0074_normalised
+flower_pot/train/flower_pot_0038_normalised
+flower_pot/train/flower_pot_0081_normalised
+flower_pot/train/flower_pot_0131_normalised
+flower_pot/train/flower_pot_0063_normalised
+flower_pot/train/flower_pot_0095_normalised
+flower_pot/train/flower_pot_0065_normalised
+flower_pot/train/flower_pot_0060_normalised
+flower_pot/train/flower_pot_0013_normalised
+flower_pot/train/flower_pot_0053_normalised
+flower_pot/train/flower_pot_0068_normalised
+flower_pot/train/flower_pot_0124_normalised
+flower_pot/train/flower_pot_0052_normalised
+flower_pot/train/flower_pot_0070_normalised
+flower_pot/train/flower_pot_0006_normalised
+flower_pot/train/flower_pot_0075_normalised
+flower_pot/train/flower_pot_0087_normalised
+flower_pot/train/flower_pot_0096_normalised
+flower_pot/train/flower_pot_0080_normalised
+flower_pot/train/flower_pot_0057_normalised
+flower_pot/train/flower_pot_0012_normalised
+flower_pot/train/flower_pot_0133_normalised
+flower_pot/train/flower_pot_0072_normalised
+flower_pot/train/flower_pot_0011_normalised
+flower_pot/train/flower_pot_0105_normalised
+flower_pot/train/flower_pot_0028_normalised
+flower_pot/train/flower_pot_0008_normalised
+flower_pot/train/flower_pot_0062_normalised
+flower_pot/train/flower_pot_0049_normalised
+flower_pot/train/flower_pot_0021_normalised
+flower_pot/train/flower_pot_0031_normalised
+flower_pot/train/flower_pot_0090_normalised
+flower_pot/train/flower_pot_0067_normalised
+flower_pot/train/flower_pot_0102_normalised
+flower_pot/train/flower_pot_0033_normalised
+flower_pot/train/flower_pot_0016_normalised
+flower_pot/train/flower_pot_0111_normalised
+flower_pot/train/flower_pot_0043_normalised
+flower_pot/train/flower_pot_0004_normalised
+flower_pot/train/flower_pot_0002_normalised
+flower_pot/train/flower_pot_0104_normalised
+flower_pot/train/flower_pot_0019_normalised
+flower_pot/train/flower_pot_0036_normalised
+flower_pot/train/flower_pot_0128_normalised
+flower_pot/train/flower_pot_0056_normalised
+flower_pot/train/flower_pot_0115_normalised
+flower_pot/train/flower_pot_0106_normalised
+flower_pot/train/flower_pot_0134_normalised
+flower_pot/train/flower_pot_0146_normalised
+flower_pot/train/flower_pot_0144_normalised
+flower_pot/train/flower_pot_0003_normalised
+flower_pot/train/flower_pot_0079_normalised
+flower_pot/train/flower_pot_0048_normalised
+flower_pot/train/flower_pot_0034_normalised
+flower_pot/train/flower_pot_0107_normalised
+flower_pot/train/flower_pot_0059_normalised
+flower_pot/train/flower_pot_0007_normalised
+flower_pot/train/flower_pot_0076_normalised
+flower_pot/train/flower_pot_0136_normalised
+flower_pot/train/flower_pot_0051_normalised
+flower_pot/train/flower_pot_0098_normalised
+flower_pot/train/flower_pot_0118_normalised
+flower_pot/train/flower_pot_0073_normalised
+flower_pot/train/flower_pot_0108_normalised
+flower_pot/train/flower_pot_0109_normalised
+flower_pot/train/flower_pot_0129_normalised
+flower_pot/train/flower_pot_0050_normalised
+flower_pot/train/flower_pot_0026_normalised
+flower_pot/train/flower_pot_0112_normalised
+flower_pot/train/flower_pot_0018_normalised
+flower_pot/train/flower_pot_0041_normalised
+flower_pot/train/flower_pot_0140_normalised
+flower_pot/train/flower_pot_0054_normalised
+flower_pot/train/flower_pot_0032_normalised
+flower_pot/train/flower_pot_0061_normalised
+flower_pot/train/flower_pot_0135_normalised
+flower_pot/train/flower_pot_0046_normalised
+flower_pot/train/flower_pot_0103_normalised
+flower_pot/train/flower_pot_0082_normalised
+flower_pot/train/flower_pot_0024_normalised
+flower_pot/train/flower_pot_0025_normalised
+flower_pot/train/flower_pot_0120_normalised
+flower_pot/train/flower_pot_0097_normalised
+flower_pot/train/flower_pot_0014_normalised
+flower_pot/train/flower_pot_0119_normalised
+flower_pot/train/flower_pot_0015_normalised
+flower_pot/train/flower_pot_0147_normalised
+flower_pot/train/flower_pot_0148_normalised
+flower_pot/train/flower_pot_0064_normalised
+flower_pot/train/flower_pot_0055_normalised
+flower_pot/train/flower_pot_0099_normalised
+flower_pot/train/flower_pot_0094_normalised
+flower_pot/train/flower_pot_0127_normalised
+flower_pot/train/flower_pot_0139_normalised
+flower_pot/train/flower_pot_0040_normalised
+flower_pot/train/flower_pot_0138_normalised
+flower_pot/train/flower_pot_0113_normalised
+flower_pot/train/flower_pot_0077_normalised
+flower_pot/train/flower_pot_0058_normalised
+flower_pot/train/flower_pot_0117_normalised
+flower_pot/train/flower_pot_0005_normalised
+flower_pot/train/flower_pot_0037_normalised
+flower_pot/train/flower_pot_0110_normalised
+flower_pot/train/flower_pot_0125_normalised
+flower_pot/train/flower_pot_0020_normalised
+flower_pot/train/flower_pot_0123_normalised
+flower_pot/train/flower_pot_0047_normalised
+flower_pot/train/flower_pot_0143_normalised
+flower_pot/train/flower_pot_0042_normalised
+flower_pot/train/flower_pot_0114_normalised
+flower_pot/train/flower_pot_0023_normalised
+flower_pot/train/flower_pot_0022_normalised
+flower_pot/train/flower_pot_0130_normalised
+flower_pot/test/flower_pot_0156_normalised
+flower_pot/test/flower_pot_0157_normalised
+flower_pot/test/flower_pot_0166_normalised
+flower_pot/test/flower_pot_0160_normalised
+flower_pot/test/flower_pot_0161_normalised
+flower_pot/test/flower_pot_0154_normalised
+flower_pot/test/flower_pot_0151_normalised
+flower_pot/test/flower_pot_0163_normalised
+flower_pot/test/flower_pot_0165_normalised
+flower_pot/test/flower_pot_0164_normalised
+flower_pot/test/flower_pot_0150_normalised
+flower_pot/test/flower_pot_0155_normalised
+flower_pot/test/flower_pot_0168_normalised
+flower_pot/test/flower_pot_0167_normalised
+flower_pot/test/flower_pot_0169_normalised
+flower_pot/test/flower_pot_0153_normalised
+flower_pot/test/flower_pot_0158_normalised
+flower_pot/test/flower_pot_0159_normalised
+flower_pot/test/flower_pot_0162_normalised
+flower_pot/test/flower_pot_0152_normalised
+toilet/train/toilet_0209_normalised
+toilet/train/toilet_0081_normalised
+toilet/train/toilet_0181_normalised
+toilet/train/toilet_0095_normalised
+toilet/train/toilet_0032_normalised
+toilet/train/toilet_0062_normalised
+toilet/train/toilet_0106_normalised
+toilet/train/toilet_0094_normalised
+toilet/train/toilet_0053_normalised
+toilet/train/toilet_0282_normalised
+toilet/train/toilet_0025_normalised
+toilet/train/toilet_0242_normalised
+toilet/train/toilet_0196_normalised
+toilet/train/toilet_0015_normalised
+toilet/train/toilet_0008_normalised
+toilet/train/toilet_0140_normalised
+toilet/train/toilet_0195_normalised
+toilet/train/toilet_0299_normalised
+toilet/train/toilet_0250_normalised
+toilet/train/toilet_0215_normalised
+toilet/train/toilet_0076_normalised
+toilet/train/toilet_0338_normalised
+toilet/train/toilet_0017_normalised
+toilet/train/toilet_0026_normalised
+toilet/train/toilet_0084_normalised
+toilet/train/toilet_0126_normalised
+toilet/train/toilet_0247_normalised
+toilet/train/toilet_0079_normalised
+toilet/train/toilet_0306_normalised
+toilet/train/toilet_0231_normalised
+toilet/train/toilet_0204_normalised
+toilet/train/toilet_0260_normalised
+toilet/train/toilet_0336_normalised
+toilet/train/toilet_0002_normalised
+toilet/train/toilet_0030_normalised
+toilet/train/toilet_0009_normalised
+toilet/train/toilet_0125_normalised
+toilet/train/toilet_0280_normalised
+toilet/train/toilet_0266_normalised
+toilet/train/toilet_0274_normalised
+toilet/train/toilet_0043_normalised
+toilet/train/toilet_0185_normalised
+toilet/train/toilet_0326_normalised
+toilet/train/toilet_0277_normalised
+toilet/train/toilet_0292_normalised
+toilet/train/toilet_0310_normalised
+toilet/train/toilet_0198_normalised
+toilet/train/toilet_0205_normalised
+toilet/train/toilet_0093_normalised
+toilet/train/toilet_0138_normalised
+toilet/train/toilet_0044_normalised
+toilet/train/toilet_0199_normalised
+toilet/train/toilet_0163_normalised
+toilet/train/toilet_0201_normalised
+toilet/train/toilet_0295_normalised
+toilet/train/toilet_0089_normalised
+toilet/train/toilet_0134_normalised
+toilet/train/toilet_0021_normalised
+toilet/train/toilet_0234_normalised
+toilet/train/toilet_0080_normalised
+toilet/train/toilet_0165_normalised
+toilet/train/toilet_0133_normalised
+toilet/train/toilet_0272_normalised
+toilet/train/toilet_0171_normalised
+toilet/train/toilet_0259_normalised
+toilet/train/toilet_0136_normalised
+toilet/train/toilet_0064_normalised
+toilet/train/toilet_0186_normalised
+toilet/train/toilet_0283_normalised
+toilet/train/toilet_0323_normalised
+toilet/train/toilet_0219_normalised
+toilet/train/toilet_0342_normalised
+toilet/train/toilet_0311_normalised
+toilet/train/toilet_0039_normalised
+toilet/train/toilet_0168_normalised
+toilet/train/toilet_0031_normalised
+toilet/train/toilet_0013_normalised
+toilet/train/toilet_0285_normalised
+toilet/train/toilet_0246_normalised
+toilet/train/toilet_0343_normalised
+toilet/train/toilet_0091_normalised
+toilet/train/toilet_0287_normalised
+toilet/train/toilet_0249_normalised
+toilet/train/toilet_0301_normalised
+toilet/train/toilet_0257_normalised
+toilet/train/toilet_0232_normalised
+toilet/train/toilet_0069_normalised
+toilet/train/toilet_0220_normalised
+toilet/train/toilet_0121_normalised
+toilet/train/toilet_0010_normalised
+toilet/train/toilet_0120_normalised
+toilet/train/toilet_0300_normalised
+toilet/train/toilet_0038_normalised
+toilet/train/toilet_0238_normalised
+toilet/train/toilet_0308_normalised
+toilet/train/toilet_0154_normalised
+toilet/train/toilet_0132_normalised
+toilet/train/toilet_0035_normalised
+toilet/train/toilet_0214_normalised
+toilet/train/toilet_0271_normalised
+toilet/train/toilet_0221_normalised
+toilet/train/toilet_0110_normalised
+toilet/train/toilet_0122_normalised
+toilet/train/toilet_0131_normalised
+toilet/train/toilet_0243_normalised
+toilet/train/toilet_0335_normalised
+toilet/train/toilet_0296_normalised
+toilet/train/toilet_0135_normalised
+toilet/train/toilet_0114_normalised
+toilet/train/toilet_0085_normalised
+toilet/train/toilet_0078_normalised
+toilet/train/toilet_0083_normalised
+toilet/train/toilet_0222_normalised
+toilet/train/toilet_0048_normalised
+toilet/train/toilet_0228_normalised
+toilet/train/toilet_0029_normalised
+toilet/train/toilet_0184_normalised
+toilet/train/toilet_0158_normalised
+toilet/train/toilet_0146_normalised
+toilet/train/toilet_0004_normalised
+toilet/train/toilet_0202_normalised
+toilet/train/toilet_0318_normalised
+toilet/train/toilet_0177_normalised
+toilet/train/toilet_0203_normalised
+toilet/train/toilet_0067_normalised
+toilet/train/toilet_0124_normalised
+toilet/train/toilet_0273_normalised
+toilet/train/toilet_0019_normalised
+toilet/train/toilet_0276_normalised
+toilet/train/toilet_0049_normalised
+toilet/train/toilet_0041_normalised
+toilet/train/toilet_0328_normalised
+toilet/train/toilet_0190_normalised
+toilet/train/toilet_0057_normalised
+toilet/train/toilet_0099_normalised
+toilet/train/toilet_0332_normalised
+toilet/train/toilet_0111_normalised
+toilet/train/toilet_0016_normalised
+toilet/train/toilet_0291_normalised
+toilet/train/toilet_0001_normalised
+toilet/train/toilet_0262_normalised
+toilet/train/toilet_0334_normalised
+toilet/train/toilet_0224_normalised
+toilet/train/toilet_0327_normalised
+toilet/train/toilet_0223_normalised
+toilet/train/toilet_0156_normalised
+toilet/train/toilet_0073_normalised
+toilet/train/toilet_0147_normalised
+toilet/train/toilet_0155_normalised
+toilet/train/toilet_0101_normalised
+toilet/train/toilet_0269_normalised
+toilet/train/toilet_0312_normalised
+toilet/train/toilet_0261_normalised
+toilet/train/toilet_0022_normalised
+toilet/train/toilet_0108_normalised
+toilet/train/toilet_0118_normalised
+toilet/train/toilet_0197_normalised
+toilet/train/toilet_0317_normalised
+toilet/train/toilet_0339_normalised
+toilet/train/toilet_0173_normalised
+toilet/train/toilet_0281_normalised
+toilet/train/toilet_0096_normalised
+toilet/train/toilet_0244_normalised
+toilet/train/toilet_0104_normalised
+toilet/train/toilet_0023_normalised
+toilet/train/toilet_0191_normalised
+toilet/train/toilet_0127_normalised
+toilet/train/toilet_0005_normalised
+toilet/train/toilet_0183_normalised
+toilet/train/toilet_0063_normalised
+toilet/train/toilet_0256_normalised
+toilet/train/toilet_0105_normalised
+toilet/train/toilet_0059_normalised
+toilet/train/toilet_0254_normalised
+toilet/train/toilet_0267_normalised
+toilet/train/toilet_0047_normalised
+toilet/train/toilet_0123_normalised
+toilet/train/toilet_0268_normalised
+toilet/train/toilet_0098_normalised
+toilet/train/toilet_0248_normalised
+toilet/train/toilet_0208_normalised
+toilet/train/toilet_0143_normalised
+toilet/train/toilet_0322_normalised
+toilet/train/toilet_0279_normalised
+toilet/train/toilet_0264_normalised
+toilet/train/toilet_0068_normalised
+toilet/train/toilet_0187_normalised
+toilet/train/toilet_0040_normalised
+toilet/train/toilet_0193_normalised
+toilet/train/toilet_0192_normalised
+toilet/train/toilet_0340_normalised
+toilet/train/toilet_0011_normalised
+toilet/train/toilet_0075_normalised
+toilet/train/toilet_0227_normalised
+toilet/train/toilet_0066_normalised
+toilet/train/toilet_0152_normalised
+toilet/train/toilet_0252_normalised
+toilet/train/toilet_0284_normalised
+toilet/train/toilet_0229_normalised
+toilet/train/toilet_0046_normalised
+toilet/train/toilet_0129_normalised
+toilet/train/toilet_0236_normalised
+toilet/train/toilet_0082_normalised
+toilet/train/toilet_0178_normalised
+toilet/train/toilet_0074_normalised
+toilet/train/toilet_0302_normalised
+toilet/train/toilet_0225_normalised
+toilet/train/toilet_0012_normalised
+toilet/train/toilet_0052_normalised
+toilet/train/toilet_0130_normalised
+toilet/train/toilet_0309_normalised
+toilet/train/toilet_0325_normalised
+toilet/train/toilet_0018_normalised
+toilet/train/toilet_0321_normalised
+toilet/train/toilet_0003_normalised
+toilet/train/toilet_0241_normalised
+toilet/train/toilet_0112_normalised
+toilet/train/toilet_0344_normalised
+toilet/train/toilet_0270_normalised
+toilet/train/toilet_0115_normalised
+toilet/train/toilet_0113_normalised
+toilet/train/toilet_0139_normalised
+toilet/train/toilet_0167_normalised
+toilet/train/toilet_0037_normalised
+toilet/train/toilet_0330_normalised
+toilet/train/toilet_0055_normalised
+toilet/train/toilet_0313_normalised
+toilet/train/toilet_0045_normalised
+toilet/train/toilet_0086_normalised
+toilet/train/toilet_0278_normalised
+toilet/train/toilet_0007_normalised
+toilet/train/toilet_0027_normalised
+toilet/train/toilet_0151_normalised
+toilet/train/toilet_0307_normalised
+toilet/train/toilet_0297_normalised
+toilet/train/toilet_0251_normalised
+toilet/train/toilet_0294_normalised
+toilet/train/toilet_0150_normalised
+toilet/train/toilet_0090_normalised
+toilet/train/toilet_0207_normalised
+toilet/train/toilet_0157_normalised
+toilet/train/toilet_0071_normalised
+toilet/train/toilet_0200_normalised
+toilet/train/toilet_0148_normalised
+toilet/train/toilet_0162_normalised
+toilet/train/toilet_0117_normalised
+toilet/train/toilet_0051_normalised
+toilet/train/toilet_0142_normalised
+toilet/train/toilet_0233_normalised
+toilet/train/toilet_0235_normalised
+toilet/train/toilet_0164_normalised
+toilet/train/toilet_0304_normalised
+toilet/train/toilet_0119_normalised
+toilet/train/toilet_0329_normalised
+toilet/train/toilet_0216_normalised
+toilet/train/toilet_0175_normalised
+toilet/train/toilet_0288_normalised
+toilet/train/toilet_0237_normalised
+toilet/train/toilet_0170_normalised
+toilet/train/toilet_0060_normalised
+toilet/train/toilet_0240_normalised
+toilet/train/toilet_0206_normalised
+toilet/train/toilet_0218_normalised
+toilet/train/toilet_0303_normalised
+toilet/train/toilet_0182_normalised
+toilet/train/toilet_0042_normalised
+toilet/train/toilet_0161_normalised
+toilet/train/toilet_0103_normalised
+toilet/train/toilet_0239_normalised
+toilet/train/toilet_0159_normalised
+toilet/train/toilet_0166_normalised
+toilet/train/toilet_0128_normalised
+toilet/train/toilet_0070_normalised
+toilet/train/toilet_0341_normalised
+toilet/train/toilet_0314_normalised
+toilet/train/toilet_0061_normalised
+toilet/train/toilet_0109_normalised
+toilet/train/toilet_0006_normalised
+toilet/train/toilet_0265_normalised
+toilet/train/toilet_0100_normalised
+toilet/train/toilet_0324_normalised
+toilet/train/toilet_0333_normalised
+toilet/train/toilet_0107_normalised
+toilet/train/toilet_0050_normalised
+toilet/train/toilet_0315_normalised
+toilet/train/toilet_0092_normalised
+toilet/train/toilet_0054_normalised
+toilet/train/toilet_0174_normalised
+toilet/train/toilet_0213_normalised
+toilet/train/toilet_0065_normalised
+toilet/train/toilet_0145_normalised
+toilet/train/toilet_0144_normalised
+toilet/train/toilet_0097_normalised
+toilet/train/toilet_0275_normalised
+toilet/train/toilet_0217_normalised
+toilet/train/toilet_0180_normalised
+toilet/train/toilet_0149_normalised
+toilet/train/toilet_0289_normalised
+toilet/train/toilet_0088_normalised
+toilet/train/toilet_0172_normalised
+toilet/train/toilet_0160_normalised
+toilet/train/toilet_0188_normalised
+toilet/train/toilet_0316_normalised
+toilet/train/toilet_0226_normalised
+toilet/train/toilet_0058_normalised
+toilet/train/toilet_0102_normalised
+toilet/train/toilet_0293_normalised
+toilet/train/toilet_0153_normalised
+toilet/train/toilet_0255_normalised
+toilet/train/toilet_0056_normalised
+toilet/train/toilet_0212_normalised
+toilet/train/toilet_0298_normalised
+toilet/train/toilet_0141_normalised
+toilet/train/toilet_0211_normalised
+toilet/train/toilet_0286_normalised
+toilet/train/toilet_0014_normalised
+toilet/train/toilet_0320_normalised
+toilet/train/toilet_0169_normalised
+toilet/train/toilet_0036_normalised
+toilet/train/toilet_0258_normalised
+toilet/train/toilet_0137_normalised
+toilet/train/toilet_0072_normalised
+toilet/train/toilet_0331_normalised
+toilet/train/toilet_0263_normalised
+toilet/train/toilet_0305_normalised
+toilet/train/toilet_0245_normalised
+toilet/train/toilet_0230_normalised
+toilet/train/toilet_0028_normalised
+toilet/train/toilet_0116_normalised
+toilet/train/toilet_0087_normalised
+toilet/train/toilet_0290_normalised
+toilet/train/toilet_0337_normalised
+toilet/train/toilet_0034_normalised
+toilet/train/toilet_0077_normalised
+toilet/train/toilet_0210_normalised
+toilet/train/toilet_0179_normalised
+toilet/train/toilet_0020_normalised
+toilet/train/toilet_0194_normalised
+toilet/train/toilet_0024_normalised
+toilet/train/toilet_0176_normalised
+toilet/train/toilet_0189_normalised
+toilet/train/toilet_0253_normalised
+toilet/train/toilet_0033_normalised
+toilet/train/toilet_0319_normalised
+toilet/test/toilet_0356_normalised
+toilet/test/toilet_0413_normalised
+toilet/test/toilet_0371_normalised
+toilet/test/toilet_0443_normalised
+toilet/test/toilet_0367_normalised
+toilet/test/toilet_0349_normalised
+toilet/test/toilet_0385_normalised
+toilet/test/toilet_0392_normalised
+toilet/test/toilet_0399_normalised
+toilet/test/toilet_0429_normalised
+toilet/test/toilet_0387_normalised
+toilet/test/toilet_0420_normalised
+toilet/test/toilet_0375_normalised
+toilet/test/toilet_0434_normalised
+toilet/test/toilet_0351_normalised
+toilet/test/toilet_0421_normalised
+toilet/test/toilet_0400_normalised
+toilet/test/toilet_0440_normalised
+toilet/test/toilet_0398_normalised
+toilet/test/toilet_0396_normalised
+toilet/test/toilet_0354_normalised
+toilet/test/toilet_0384_normalised
+toilet/test/toilet_0386_normalised
+toilet/test/toilet_0353_normalised
+toilet/test/toilet_0373_normalised
+toilet/test/toilet_0405_normalised
+toilet/test/toilet_0347_normalised
+toilet/test/toilet_0428_normalised
+toilet/test/toilet_0411_normalised
+toilet/test/toilet_0412_normalised
+toilet/test/toilet_0408_normalised
+toilet/test/toilet_0391_normalised
+toilet/test/toilet_0401_normalised
+toilet/test/toilet_0381_normalised
+toilet/test/toilet_0403_normalised
+toilet/test/toilet_0383_normalised
+toilet/test/toilet_0346_normalised
+toilet/test/toilet_0423_normalised
+toilet/test/toilet_0389_normalised
+toilet/test/toilet_0404_normalised
+toilet/test/toilet_0406_normalised
+toilet/test/toilet_0431_normalised
+toilet/test/toilet_0433_normalised
+toilet/test/toilet_0418_normalised
+toilet/test/toilet_0361_normalised
+toilet/test/toilet_0363_normalised
+toilet/test/toilet_0415_normalised
+toilet/test/toilet_0382_normalised
+toilet/test/toilet_0388_normalised
+toilet/test/toilet_0365_normalised
+toilet/test/toilet_0416_normalised
+toilet/test/toilet_0379_normalised
+toilet/test/toilet_0393_normalised
+toilet/test/toilet_0424_normalised
+toilet/test/toilet_0369_normalised
+toilet/test/toilet_0394_normalised
+toilet/test/toilet_0390_normalised
+toilet/test/toilet_0422_normalised
+toilet/test/toilet_0380_normalised
+toilet/test/toilet_0439_normalised
+toilet/test/toilet_0402_normalised
+toilet/test/toilet_0368_normalised
+toilet/test/toilet_0364_normalised
+toilet/test/toilet_0426_normalised
+toilet/test/toilet_0410_normalised
+toilet/test/toilet_0430_normalised
+toilet/test/toilet_0414_normalised
+toilet/test/toilet_0427_normalised
+toilet/test/toilet_0348_normalised
+toilet/test/toilet_0359_normalised
+toilet/test/toilet_0419_normalised
+toilet/test/toilet_0438_normalised
+toilet/test/toilet_0425_normalised
+toilet/test/toilet_0358_normalised
+toilet/test/toilet_0352_normalised
+toilet/test/toilet_0374_normalised
+toilet/test/toilet_0417_normalised
+toilet/test/toilet_0357_normalised
+toilet/test/toilet_0362_normalised
+toilet/test/toilet_0436_normalised
+toilet/test/toilet_0370_normalised
+toilet/test/toilet_0407_normalised
+toilet/test/toilet_0376_normalised
+toilet/test/toilet_0366_normalised
+toilet/test/toilet_0442_normalised
+toilet/test/toilet_0437_normalised
+toilet/test/toilet_0409_normalised
+toilet/test/toilet_0372_normalised
+toilet/test/toilet_0360_normalised
+toilet/test/toilet_0432_normalised
+toilet/test/toilet_0345_normalised
+toilet/test/toilet_0350_normalised
+toilet/test/toilet_0441_normalised
+toilet/test/toilet_0444_normalised
+toilet/test/toilet_0355_normalised
+toilet/test/toilet_0397_normalised
+toilet/test/toilet_0435_normalised
+toilet/test/toilet_0378_normalised
+toilet/test/toilet_0395_normalised
+toilet/test/toilet_0377_normalised
diff --git a/zoo/OcCo/render/PC_Normalisation.py b/zoo/OcCo/render/PC_Normalisation.py
new file mode 100644
index 0000000..0a9f5b9
--- /dev/null
+++ b/zoo/OcCo/render/PC_Normalisation.py
@@ -0,0 +1,23 @@
+# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
+
+import os, open3d, numpy as np
+
+File_ = open('ModelNet_flist_short.txt', 'w')
+
+if __name__ == "__main__":
+    root_dir = "../data/ModelNet_subset/"
+
+    for root, dirs, files in os.walk(root_dir, topdown=False):
+        for file in files:
+            if '.ply' in file:
+                amesh = open3d.io.read_triangle_mesh(os.path.join(root, file))
+                out_file_name = os.path.join(root, file).replace('.ply', '_normalised.obj')
+
+                center = amesh.get_center()
+                amesh.translate(-center)
+                maxR = (np.asarray(amesh.vertices)**2).sum(axis=1).max()**(1/2)
+                # we found that dividing by (2*maxR) gives the best rendered visualisation results
+                amesh.scale(1/(2*maxR))
+                open3d.io.write_triangle_mesh(out_file_name, amesh)
+                File_.writelines(out_file_name.replace('.obj', '').replace(root_dir, '') + '\n')
+                print(out_file_name)
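+
+# Note: the normalisation above centres each mesh at its centroid and scales it
+# into a sphere of radius 0.5. For reference, the same operation on a raw
+# (N, 3) numpy array of points reduces to:
+#
+#     points = points - points.mean(axis=0)
+#     max_r = np.sqrt((points ** 2).sum(axis=1).max())
+#     points = points / (2 * max_r)  # farthest point now sits at radius 0.5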
diff --git a/zoo/OcCo/render/buffer.png b/zoo/OcCo/render/buffer.png
new file mode 100644
index 0000000..5a8d011
Binary files /dev/null and b/zoo/OcCo/render/buffer.png differ
diff --git a/zoo/OcCo/render/readme.md b/zoo/OcCo/render/readme.md
new file mode 100644
index 0000000..8322996
--- /dev/null
+++ b/zoo/OcCo/render/readme.md
@@ -0,0 +1,42 @@
+This directory contains code that generates partial point clouds from 3D objects.
+
+To start with:
+
+1. Download and install [Blender](https://blender.org/download/).
+
+2. Create a list of the normalised 3D objects to be rendered, which should be in `.obj` format. We provide `ModelNet_Flist.txt` as a template, and `PC_Normalisation.py` for the normalisation.
+
+3. To generate rendered depth images from the 3D objects (you may need to install a few extra packages, e.g. `Imath` and `OpenEXR`, depending on your development environment):
+
+ ```bash
+ # blender -b -P Depth_Renderer.py [data directory] [file list] [output directory] [num scans per model]
+
+ blender -b -P Depth_Renderer.py ../data/modelnet40 ModelNet_Flist.txt ./dump 10
+ ```
+
+ The generated intermediate files are in OpenEXR format (`*.exr`). You can also modify the camera intrinsics in `Depth_Renderer.py`; they are saved automatically to `intrinsics.txt`.
+
+4. To re-project the partially occluded point clouds from the depth images:
+
+ ```bash
+ python EXR_Process.py \
+ --list_file ModelNet_Flist.txt \
+ --intrinsics intrinsics.txt \
+ --output_dir ./dump \
+ --num_scans 10 ;
+ ```
+
+ This will convert the `*.exr` files into depth images (`*.png`) and then into point clouds (`*.pcd`).
+
+5. Use `OcCo_Torch/utils/LMDB_Writer.py` to convert all the `.pcd` files into an `lmdb` file for the dataloader:
+
+ ```bash
+ python LMDB_Writer.py \
+ --list_path ../render/ModelNet_Flist.txt \
+ --complete_dir ../data/modelnet40 \
+ --partial_dir ../render/dump/pcd \
+ --num_scans 10 \
+ --output_file ../data/MyTrain.lmdb ;
+ ```
+
+6. Now you can pre-train the models via OcCo on the data you constructed. Enjoy :)
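+
+To sanity-check the generated partial point clouds, you can inspect one of the `.pcd` files produced in step 4 with Open3D. A minimal sketch (the file name below is hypothetical and depends on your file list and `--output_dir`):
+
+```python
+import open3d as o3d
+
+# hypothetical output file from step 4; adjust to your own --output_dir layout
+pcd = o3d.io.read_point_cloud("./dump/pcd/chair_0890_normalised_0.pcd")
+print(pcd)                                       # point count summary
+print(pcd.get_min_bound(), pcd.get_max_bound())  # should sit within the unit sphere
+o3d.visualization.draw_geometries([pcd])         # quick visual check
+```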
diff --git a/zoo/OcCo/sample/CMakeLists.txt b/zoo/OcCo/sample/CMakeLists.txt
new file mode 100644
index 0000000..d7f9b62
--- /dev/null
+++ b/zoo/OcCo/sample/CMakeLists.txt
@@ -0,0 +1,14 @@
+cmake_minimum_required(VERSION 3.0)
+
+project(mesh_sampling)
+
+find_package(PCL 1.7 REQUIRED)
+include_directories(${PCL_INCLUDE_DIRS})
+link_directories(${PCL_LIBRARY_DIRS})
+add_definitions(${PCL_DEFINITIONS})
+
+find_package(VTK 7.0 REQUIRED)
+include(${VTK_USE_FILE})
+
+add_executable (mesh_sampling mesh_sampling.cpp)
+target_link_libraries (mesh_sampling ${PCL_LIBRARIES} ${VTK_LIBRARIES})
diff --git a/zoo/OcCo/sample/mesh_sampling.cpp b/zoo/OcCo/sample/mesh_sampling.cpp
new file mode 100644
index 0000000..c168905
--- /dev/null
+++ b/zoo/OcCo/sample/mesh_sampling.cpp
@@ -0,0 +1,298 @@
+/*
+ * Software License Agreement (BSD License)
+ *
+ * Point Cloud Library (PCL) - www.pointclouds.org
+ * Copyright (c) 2010-2011, Willow Garage, Inc.
+ *
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials provided
+ * with the distribution.
+ * * Neither the name of the copyright holder(s) nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Modified by Wentao Yuan (wyuan1@cs.cmu.edu) 05/31/2018
+ */
+
+#include <pcl/visualization/pcl_visualizer.h>
+#include <pcl/io/pcd_io.h>
+#include <pcl/io/vtk_lib_io.h>
+#include <pcl/filters/voxel_grid.h>
+#include <pcl/console/print.h>
+#include <pcl/console/parse.h>
+#include <pcl/console/time.h>
+#include <vtkVersion.h>
+#include <vtkPLYReader.h>
+#include <vtkOBJReader.h>
+#include <vtkTriangle.h>
+#include <vtkTriangleFilter.h>
+#include <vtkPolyDataMapper.h>
+
+inline double
+uniform_deviate (int seed)
+{
+ double ran = seed * (1.0 / (RAND_MAX + 1.0));
+ return ran;
+}
+
+inline void
+randomPointTriangle (float a1, float a2, float a3, float b1, float b2, float b3, float c1, float c2, float c3,
+ Eigen::Vector4f& p)
+{
+ float r1 = static_cast<float> (uniform_deviate (rand ()));
+ float r2 = static_cast<float> (uniform_deviate (rand ()));
+ float r1sqr = std::sqrt (r1);
+ float OneMinR1Sqr = (1 - r1sqr);
+ float OneMinR2 = (1 - r2);
+ a1 *= OneMinR1Sqr;
+ a2 *= OneMinR1Sqr;
+ a3 *= OneMinR1Sqr;
+ b1 *= OneMinR2;
+ b2 *= OneMinR2;
+ b3 *= OneMinR2;
+ c1 = r1sqr * (r2 * c1 + b1) + a1;
+ c2 = r1sqr * (r2 * c2 + b2) + a2;
+ c3 = r1sqr * (r2 * c3 + b3) + a3;
+ p[0] = c1;
+ p[1] = c2;
+ p[2] = c3;
+ p[3] = 0;
+}
+
+inline void
+randPSurface (vtkPolyData * polydata, std::vector<double> * cumulativeAreas, double totalArea, Eigen::Vector4f& p, bool calcNormal, Eigen::Vector3f& n)
+{
+ float r = static_cast<float> (uniform_deviate (rand ()) * totalArea);
+
+ std::vector<double>::iterator low = std::lower_bound (cumulativeAreas->begin (), cumulativeAreas->end (), r);
+ vtkIdType el = vtkIdType (low - cumulativeAreas->begin ());
+
+ double A[3], B[3], C[3];
+ vtkIdType npts = 0;
+ vtkIdType *ptIds = NULL;
+ polydata->GetCellPoints (el, npts, ptIds);
+ polydata->GetPoint (ptIds[0], A);
+ polydata->GetPoint (ptIds[1], B);
+ polydata->GetPoint (ptIds[2], C);
+ if (calcNormal)
+ {
+ // OBJ: Vertices are stored in a counter-clockwise order by default
+ Eigen::Vector3f v1 = Eigen::Vector3f (A[0], A[1], A[2]) - Eigen::Vector3f (C[0], C[1], C[2]);
+ Eigen::Vector3f v2 = Eigen::Vector3f (B[0], B[1], B[2]) - Eigen::Vector3f (C[0], C[1], C[2]);
+ n = v1.cross (v2);
+ n.normalize ();
+ }
+ randomPointTriangle (float (A[0]), float (A[1]), float (A[2]),
+ float (B[0]), float (B[1]), float (B[2]),
+ float (C[0]), float (C[1]), float (C[2]), p);
+}
+
+void
+uniform_sampling (vtkSmartPointer<vtkPolyData> polydata, size_t n_samples, bool calc_normal, pcl::PointCloud<pcl::PointNormal> & cloud_out)
+{
+ polydata->BuildCells ();
+ vtkSmartPointer<vtkCellArray> cells = polydata->GetPolys ();
+
+ double p1[3], p2[3], p3[3], totalArea = 0;
+ std::vector<double> cumulativeAreas (cells->GetNumberOfCells (), 0);
+ size_t i = 0;
+ vtkIdType npts = 0, *ptIds = NULL;
+ for (cells->InitTraversal (); cells->GetNextCell (npts, ptIds); i++)
+ {
+ polydata->GetPoint (ptIds[0], p1);
+ polydata->GetPoint (ptIds[1], p2);
+ polydata->GetPoint (ptIds[2], p3);
+ totalArea += vtkTriangle::TriangleArea (p1, p2, p3);
+ cumulativeAreas[i] = totalArea;
+ }
+
+ cloud_out.points.resize (n_samples);
+ cloud_out.width = static_cast<pcl::uint32_t> (n_samples);
+ cloud_out.height = 1;
+
+ for (i = 0; i < n_samples; i++)
+ {
+ Eigen::Vector4f p;
+ Eigen::Vector3f n;
+ randPSurface (polydata, &cumulativeAreas, totalArea, p, calc_normal, n);
+ cloud_out.points[i].x = p[0];
+ cloud_out.points[i].y = p[1];
+ cloud_out.points[i].z = p[2];
+ if (calc_normal)
+ {
+ cloud_out.points[i].normal_x = n[0];
+ cloud_out.points[i].normal_y = n[1];
+ cloud_out.points[i].normal_z = n[2];
+ }
+ }
+}
+
+using namespace pcl;
+using namespace pcl::io;
+using namespace pcl::console;
+
+const int default_number_samples = 100000;
+const float default_leaf_size = 0.01f;
+
+void
+printHelp (int, char **argv)
+{
+ print_error("Syntax is: %s input.{ply,obj} output.pcd <options>\n", argv[0]);
+ print_info (" where options are:\n");
+ print_info (" -n_samples X = number of samples (default: ");
+ print_value("%d", default_number_samples);
+ print_info (")\n");
+ print_info (
+ " -leaf_size X = the XYZ leaf size for the VoxelGrid -- for data reduction (default: ");
+ print_value("%f", default_leaf_size);
+ print_info (" m)\n");
+ print_info (" -write_normals = flag to write normals to the output pcd\n");
+ print_info (
+ " -no_vis_result = flag to stop visualizing the generated pcd\n");
+ print_info (
+ " -no_vox_filter = flag to stop downsampling the generated pcd\n");
+}
+
+/* ---[ */
+int
+main (int argc, char **argv)
+{
+ if (argc < 3)
+ {
+ printHelp (argc, argv);
+ return (-1);
+ }
+
+ // Parse command line arguments
+ int SAMPLE_POINTS_ = default_number_samples;
+ parse_argument (argc, argv, "-n_samples", SAMPLE_POINTS_);
+ float leaf_size = default_leaf_size;
+ parse_argument (argc, argv, "-leaf_size", leaf_size);
+ bool vis_result = ! find_switch (argc, argv, "-no_vis_result");
+ bool vox_filter = ! find_switch (argc, argv, "-no_vox_filter");
+ const bool write_normals = find_switch (argc, argv, "-write_normals");
+
+ std::vector<int> pcd_file_indices = parse_file_extension_argument (argc, argv, ".pcd");
+ std::vector<int> ply_file_indices = parse_file_extension_argument (argc, argv, ".ply");
+ std::vector<int> obj_file_indices = parse_file_extension_argument (argc, argv, ".obj");
+ if (pcd_file_indices.size () != 1)
+ {
+ print_error ("Need a single output PCD file to continue.\n");
+ return (-1);
+ }
+ if (ply_file_indices.size () != 1 && obj_file_indices.size () != 1)
+ {
+ print_error ("Need a single input PLY/OBJ file to continue.\n");
+ return (-1);
+ }
+
+ vtkSmartPointer<vtkPolyData> polydata1 = vtkSmartPointer<vtkPolyData>::New ();
+ if (ply_file_indices.size () == 1)
+ {
+ pcl::PolygonMesh mesh;
+ pcl::io::loadPolygonFilePLY (argv[ply_file_indices[0]], mesh);
+ pcl::io::mesh2vtk (mesh, polydata1);
+ }
+ else if (obj_file_indices.size () == 1)
+ {
+ print_info ("Convert %s to a point cloud using uniform sampling.\n", argv[obj_file_indices[0]]);
+ vtkSmartPointer<vtkOBJReader> readerQuery = vtkSmartPointer<vtkOBJReader>::New ();
+ readerQuery->SetFileName (argv[obj_file_indices[0]]);
+ readerQuery->Update ();
+ polydata1 = readerQuery->GetOutput ();
+ }
+
+ //make sure that the polygons are triangles!
+ vtkSmartPointer<vtkTriangleFilter> triangleFilter = vtkSmartPointer<vtkTriangleFilter>::New ();
+#if VTK_MAJOR_VERSION < 6
+ triangleFilter->SetInput (polydata1);
+#else
+ triangleFilter->SetInputData (polydata1);
+#endif
+ triangleFilter->Update ();
+
+ vtkSmartPointer<vtkPolyDataMapper> triangleMapper = vtkSmartPointer<vtkPolyDataMapper>::New ();
+ triangleMapper->SetInputConnection (triangleFilter->GetOutputPort ());
+ triangleMapper->Update ();
+ polydata1 = triangleMapper->GetInput ();
+
+ bool INTER_VIS = false;
+
+ if (INTER_VIS)
+ {
+ visualization::PCLVisualizer vis;
+ vis.addModelFromPolyData (polydata1, "mesh1", 0);
+ vis.setRepresentationToSurfaceForAllActors ();
+ vis.spin ();
+ }
+
+ pcl::PointCloud<pcl::PointNormal>::Ptr cloud_1 (new pcl::PointCloud<pcl::PointNormal>);
+ uniform_sampling (polydata1, SAMPLE_POINTS_, write_normals, *cloud_1);
+
+ if (INTER_VIS)
+ {
+ visualization::PCLVisualizer vis_sampled;
+ vis_sampled.addPointCloud (cloud_1);
+ if (write_normals)
+ vis_sampled.addPointCloudNormals<pcl::PointNormal> (cloud_1, 1, 0.02f, "cloud_normals");
+ vis_sampled.spin ();
+ }
+
+ pcl::PointCloud<pcl::PointNormal>::Ptr cloud (new pcl::PointCloud<pcl::PointNormal>);
+
+ // Voxelgrid
+ if (vox_filter)
+ {
+ VoxelGrid<PointNormal> grid_;
+ grid_.setInputCloud (cloud_1);
+ grid_.setLeafSize (leaf_size, leaf_size, leaf_size);
+ grid_.filter (*cloud);
+ }
+ else
+ {
+ *cloud = *cloud_1;
+ }
+
+ if (vis_result)
+ {
+ visualization::PCLVisualizer vis3 ("VOXELIZED SAMPLES CLOUD");
+ vis3.addPointCloud (cloud);
+ if (write_normals)
+ vis3.addPointCloudNormals<pcl::PointNormal> (cloud, 1, 0.02f, "cloud_normals");
+ vis3.spin ();
+ }
+
+ if (!write_normals)
+ {
+ pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_xyz (new pcl::PointCloud<pcl::PointXYZ>);
+ // Strip uninitialized normals from cloud:
+ pcl::copyPointCloud (*cloud, *cloud_xyz);
+ savePCDFileASCII (argv[pcd_file_indices[0]], *cloud_xyz);
+ }
+ else
+ {
+ savePCDFileASCII (argv[pcd_file_indices[0]], *cloud);
+ }
+}
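Note on `mesh_sampling.cpp`: `randPSurface` picks a triangle with probability proportional to its area (a binary search over the cumulative areas), and `randomPointTriangle` draws a uniform point inside it via the barycentric form `P = (1-sqrt(r1))*A + sqrt(r1)*(1-r2)*B + sqrt(r1)*r2*C`. A compact numpy sketch of the same idea, as a standalone illustration (not part of the repository):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mesh(vertices, triangles, n_samples):
    """Area-weighted uniform sampling of points on a triangle mesh."""
    tri = vertices[triangles]  # (T, 3, 3): corner coordinates of each triangle
    # triangle areas via the cross product of two edge vectors
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    # pick triangles with probability proportional to area
    idx = rng.choice(len(tri), size=n_samples, p=areas / areas.sum())
    a, b, c = tri[idx, 0], tri[idx, 1], tri[idx, 2]
    # uniform barycentric point: P = (1-sqrt(r1))A + sqrt(r1)(1-r2)B + sqrt(r1)r2*C
    r1 = np.sqrt(rng.random((n_samples, 1)))
    r2 = rng.random((n_samples, 1))
    return (1 - r1) * a + r1 * (1 - r2) * b + r1 * r2 * c

# toy usage: 100 points on a unit square made of two triangles
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
print(sample_mesh(verts, faces, 100).shape)  # (100, 3)
```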
diff --git a/zoo/OcCo/sample/readme.md b/zoo/OcCo/sample/readme.md
new file mode 100644
index 0000000..45bd38f
--- /dev/null
+++ b/zoo/OcCo/sample/readme.md
@@ -0,0 +1,5 @@
+[Optional] This directory contains code for a command line tool that uniformly samples a point cloud on a mesh. It is a modified version of `pcl_mesh_sampling`. To use it:
+1. Install [CMake](https://cmake.org/download/), [PCL](http://pointclouds.org/downloads/) and [VTK](https://vtk.org/download/).
+2. Make a build directory: `mkdir build && cd build`.
+3. Build the code by running `cmake ..` and then `make`.
+4. Run `./mesh_sampling` to see the command line usage.
\ No newline at end of file
diff --git a/zoo/PAConv/.gitignore b/zoo/PAConv/.gitignore
new file mode 100644
index 0000000..a103063
--- /dev/null
+++ b/zoo/PAConv/.gitignore
@@ -0,0 +1,145 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+
+.idea/workspace.xml
+part_seg/.DS_Store
+obj_cls/.DS_Store
+.idea/PAConv.iml
+.idea/misc.xml
+.DS_Store
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+pip-wheel-metadata/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+.python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+.idea/
+exp/
+kernels/
+lib/**/*.o
+lib/**/*.ninja*
+dataset/
+*.DS_Store
+.vscode/
diff --git a/zoo/PAConv/LICENSE b/zoo/PAConv/LICENSE
new file mode 100644
index 0000000..261eeb9
--- /dev/null
+++ b/zoo/PAConv/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/zoo/PAConv/README.md b/zoo/PAConv/README.md
new file mode 100644
index 0000000..36e7e58
--- /dev/null
+++ b/zoo/PAConv/README.md
@@ -0,0 +1,74 @@
+# PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds
+
+
+by [Mutian Xu*](https://mutianxu.github.io/), [Runyu Ding*](), [Hengshuang Zhao](https://hszhao.github.io/), and [Xiaojuan Qi](https://xjqi.github.io/).
+
+
+## Introduction
+This repository is built for the official implementation of:
+
+__PAConv__: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds ___(CVPR2021)___ [[arXiv](https://arxiv.org/abs/2103.14635)]
+
+
+If you find our work useful in your research, please consider citing:
+
+```
+@inproceedings{xu2021paconv,
+ title={PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds},
+ author={Xu, Mutian and Ding, Runyu and Zhao, Hengshuang and Qi, Xiaojuan},
+ booktitle={CVPR},
+ year={2021}
+}
+```
+
+## Highlight
+
+* All initialization models and trained models are available.
+* Provides fast multi-process training ([nn.parallel.DistributedDataParallel](https://pytorch.org/docs/stable/_modules/torch/nn/parallel/distributed.html)) with the official [nn.SyncBatchNorm](https://pytorch.org/docs/master/nn.html#torch.nn.SyncBatchNorm).
+* Integrates [tensorboardX](https://github.com/lanpa/tensorboardX) for better visualization of the whole training process.
+* Supports recent versions of PyTorch.
+* Well-designed code structure for easy reading and use.
+
+## Usage
+
+We provide scripts for different point cloud processing tasks:
+
+* [Object Classification](./obj_cls) task on ModelNet40.
+
+* [Shape Part Segmentation](./part_seg) task on ShapeNetPart.
+
+* [Indoor Scene Segmentation](./scene_seg) task on S3DIS.
+
+You can find the instructions for running these tasks in the above corresponding folders.
+
+## Performance
+The following tables report the current performance on different tasks and datasets. (__*__ denotes the backbone architecture.)
+
+### Object Classification on ModelNet40
+
+| Method | OA |
+| :--- | :---: |
+| PAConv _(*PointNet)_ | 93.2%|
+| PAConv _(*DGCNN)_ | **93.9%** |
+
+### Shape Part Segmentation on ShapeNet Part
+| Method | Class mIoU | Instance mIoU |
+| :--- | :---: | :---: |
+| PAConv _(*DGCNN)_ | **84.6%** | **86.1%** |
+
+
+
+### Indoor Scene Segmentation on S3DIS Area-5
+
+| Method | S3DIS mIoU |
+| :--- | :---: |
+| PAConv _(*PointNet++)_| **66.58%** |
+
+
+## Contact
+
+You are welcome to send pull requests or share some ideas with us. Contact information: Mutian Xu (mino1018@outlook.com) or Runyu Ding (ryding@eee.hku.hk).
+
+## Acknowledgement
+
+Our code base is partially borrowed from [PointWeb](https://github.com/hszhao/PointWeb), [DGCNN](https://github.com/WangYueFt/dgcnn) and [PointNet++](https://github.com/charlesq34/pointnet2).
\ No newline at end of file
diff --git a/zoo/PAConv/figure/paconv.jpg b/zoo/PAConv/figure/paconv.jpg
new file mode 100644
index 0000000..0e31dbb
Binary files /dev/null and b/zoo/PAConv/figure/paconv.jpg differ
diff --git a/zoo/PAConv/figure/partseg_vis.jpg b/zoo/PAConv/figure/partseg_vis.jpg
new file mode 100644
index 0000000..b45e004
Binary files /dev/null and b/zoo/PAConv/figure/partseg_vis.jpg differ
diff --git a/zoo/PAConv/figure/semseg_vis.jpg b/zoo/PAConv/figure/semseg_vis.jpg
new file mode 100644
index 0000000..5493302
Binary files /dev/null and b/zoo/PAConv/figure/semseg_vis.jpg differ
diff --git a/zoo/PAConv/obj_cls/README.md b/zoo/PAConv/obj_cls/README.md
new file mode 100644
index 0000000..fe6b5f9
--- /dev/null
+++ b/zoo/PAConv/obj_cls/README.md
@@ -0,0 +1,86 @@
+3D Object Classification
+============================
+
+## Installation
+
+### Requirements
+* Hardware: a GPU with at least 6000 MB of memory. (Two GPUs, or a single higher-end GPU, are preferable to satisfy the parallel CUDA kernels.)
+* Software:
+ Linux (tested on Ubuntu 18.04)
+  PyTorch>=1.5.0, Python>=3, CUDA>=10.1, tensorboardX, h5py, PyYAML, scikit-learn
+
+
+### Dataset
+Download and unzip [ModelNet40](https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip) (415M). Then symlink the path to it as follows (alternatively, you can modify the path [here](https://github.com/CVMI-Lab/PAConv/blob/main/obj_cls/util/data_util.py#L10)):
+```
+mkdir -p data
+ln -s /path/to/modelnet40/modelnet40_ply_hdf5_2048 data
+```
+
+## Usage
+
+* Build the CUDA kernel:
+
+  When you run the program for the first time, please allow a few moments for the [cuda_lib](./cuda_lib) to compile **automatically**.
+  Once the CUDA kernel is built, subsequent runs will skip compilation (a sketch of this JIT-build mechanism appears after this list).
+
+
+* Train:
+
+  * Multi-thread training ([nn.DataParallel](https://pytorch.org/docs/stable/nn.html#dataparallel)):
+
+ * `python main.py --config config/dgcnn_paconv_train.yaml` (Embed PAConv into [DGCNN](https://arxiv.org/abs/1801.07829))
+
+ * `python main.py --config config/pointnet_paconv_train.yaml` (Embed PAConv into [PointNet](https://arxiv.org/abs/1612.00593))
+
+  * We also provide fast **multi-process training** ([nn.parallel.DistributedDataParallel](https://pytorch.org/docs/stable/_modules/torch/nn/parallel/distributed.html), **recommended**) with the official [nn.SyncBatchNorm](https://pytorch.org/docs/master/nn.html#torch.nn.SyncBatchNorm). Remember to specify the GPU IDs:
+
+ * `CUDA_VISIBLE_DEVICES=x,x python main_ddp.py --config config/dgcnn_paconv_train.yaml` (Embed PAConv into [DGCNN](https://arxiv.org/abs/1801.07829))
+ * `CUDA_VISIBLE_DEVICES=x,x python main_ddp.py --config config/pointnet_paconv_train.yaml` (Embed PAConv into [PointNet](https://arxiv.org/abs/1612.00593))
+
+
+* Test:
+
+ * Download our [pretrained model](https://drive.google.com/drive/folders/1eDBpIRt4iSCjEw2-Mk2G3gz7YwA6VfEB?usp=sharing) and put it under the [obj_cls](/obj_cls) folder.
+
+  * Run the voting evaluation script to test our pretrained model; if everything goes right, voting yields an accuracy of 93.9%:
+
+ `python eval_voting.py --config config/dgcnn_paconv_test.yaml`
+
+ * You can also directly test our pretrained model without voting to get an accuracy of 93.6%:
+
+ `python main.py --config config/dgcnn_paconv_test.yaml`
+
+  * For a full test after training the model:
+    * Set `eval` to `True` in your config file.
+
+    * Make sure to use **[main.py](main.py)** (main_ddp.py may produce wrong results due to sample duplication caused by the all_reduce calls in multi-process testing):
+
+     `python main.py --config config/your_config_file.yaml`
+
+* Visualization: use [tensorboardX](https://github.com/lanpa/tensorboardX) to monitor the whole training process.
+
+ `tensorboard --logdir=checkpoints/exp_name`
+
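+Under the hood, the CUDA kernels are compiled just-in-time by PyTorch's `torch.utils.cpp_extension.load` rather than by a separate setup step. A minimal sketch of this mechanism (illustrative only; the actual call lives in [cuda_lib/src/__init__.py](./cuda_lib/src/__init__.py)):
+
+```python
+# Illustrative sketch of the automatic CUDA build used by cuda_lib:
+# load() compiles the sources with nvcc on first import and caches the
+# resulting module, so later runs skip compilation.
+import os
+from torch.utils.cpp_extension import load
+
+src_dir = os.path.join('cuda_lib', 'src', 'gpu')  # path inside obj_cls
+gpconv_cuda = load(
+    name='gpconv_cuda',
+    sources=[
+        os.path.join(src_dir, 'operator.cpp'),
+        os.path.join(src_dir, 'assign_score_withk_gpu.cu'),
+        os.path.join(src_dir, 'assign_score_withk_halfkernel_gpu.cu'),
+    ],
+    build_directory=src_dir,  # cache build artifacts next to the sources
+    verbose=True,             # print compiler output when debugging builds
+)
+```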
+
+## Citation
+If you find the code or trained models useful, please consider citing:
+```
+@inproceedings{xu2021paconv,
+ title={PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds},
+ author={Xu, Mutian and Ding, Runyu and Zhao, Hengshuang and Qi, Xiaojuan},
+ booktitle={CVPR},
+ year={2021}
+}
+```
+
+## Contact
+
+You are welcome to send pull requests or share some ideas with us. Contact information: Mutian Xu (mino1018@outlook.com) or Runyu Ding (ryding@eee.hku.hk).
+
+
+## Acknowledgement
+This code is partially borrowed from [DGCNN](https://github.com/WangYueFt/dgcnn).
+
+
+
diff --git a/zoo/PAConv/obj_cls/config/dgcnn_paconv_test.yaml b/zoo/PAConv/obj_cls/config/dgcnn_paconv_test.yaml
new file mode 100644
index 0000000..a9f1a96
--- /dev/null
+++ b/zoo/PAConv/obj_cls/config/dgcnn_paconv_test.yaml
@@ -0,0 +1,14 @@
+MODEL:
+ arch: dgcnn # backbone network architecture
+ num_matrices: [8, 8, 8, 8]
+  k_neighbors: 20 # number of kNN neighbors
+ calc_scores: softmax
+
+
+TEST:
+ exp_name: dgcnn_paconv_test
+ num_points: 1024
+ test_batch_size: 16
+ eval: True
+ dropout: 0.5
+ no_cuda: False
\ No newline at end of file
diff --git a/zoo/PAConv/obj_cls/config/dgcnn_paconv_train.yaml b/zoo/PAConv/obj_cls/config/dgcnn_paconv_train.yaml
new file mode 100644
index 0000000..a2ef30e
--- /dev/null
+++ b/zoo/PAConv/obj_cls/config/dgcnn_paconv_train.yaml
@@ -0,0 +1,19 @@
+MODEL:
+ arch: dgcnn # backbone network architecture
+ num_matrices: [8, 8, 8, 8]
+  k_neighbors: 20 # number of kNN neighbors
+ calc_scores: softmax
+
+
+TRAIN:
+ exp_name: dgcnn_paconv_train
+ num_points: 1024
+ pt_norm: False # input normalization
+ batch_size: 32
+ test_batch_size: 16
+ epochs: 350
+ lr: 0.1
+ momentum: 0.9
+ eval: False
+ dropout: 0.5
+ no_cuda: False
\ No newline at end of file
diff --git a/zoo/PAConv/obj_cls/config/pointnet_paconv_train.yaml b/zoo/PAConv/obj_cls/config/pointnet_paconv_train.yaml
new file mode 100644
index 0000000..33fe15c
--- /dev/null
+++ b/zoo/PAConv/obj_cls/config/pointnet_paconv_train.yaml
@@ -0,0 +1,19 @@
+MODEL:
+ arch: pointnet # backbone network architecture
+ num_matrices: [8, 8, 8]
+  k_neighbors: 30 # number of kNN neighbors
+ calc_scores: softmax
+
+
+TRAIN:
+ num_points: 1024
+ pt_norm: False # input normalization
+ exp_name: pointnet_paconv_train
+ batch_size: 32
+ test_batch_size: 16
+ epochs: 350
+ lr: 0.1
+ momentum: 0.9
+ eval: False
+ dropout: 0.5
+ no_cuda: False
\ No newline at end of file
diff --git a/zoo/PAConv/obj_cls/cuda_lib/__init__.py b/zoo/PAConv/obj_cls/cuda_lib/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/zoo/PAConv/obj_cls/cuda_lib/functional.py b/zoo/PAConv/obj_cls/cuda_lib/functional.py
new file mode 100644
index 0000000..d25ae7e
--- /dev/null
+++ b/zoo/PAConv/obj_cls/cuda_lib/functional.py
@@ -0,0 +1,11 @@
+from . import functions
+
+
+def assign_score_withk_halfkernel(score, point_input, knn_idx, aggregate='sum'):
+ return functions.assign_score_withk_halfkernel(score, point_input, knn_idx, aggregate)
+
+
+def assign_score_withk(score, point_input, center_input, knn_idx, aggregate='sum'):
+ return functions.assign_score_withk(score, point_input, center_input, knn_idx, aggregate)
+
+
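The two wrappers above simply forward to the autograd `Function`s defined in `functions/assignscore.py` below. A hypothetical smoke test (assuming a CUDA device and a successful JIT build; shapes follow the docstrings in that file):

```python
# Hypothetical smoke test for assign_score_withk (requires the CUDA build).
import torch
from cuda_lib import functional as pa_func

B, N, K, M, O = 2, 128, 20, 8, 64
scores  = torch.rand(B, N, K, M, device='cuda', requires_grad=True)
points  = torch.rand(B, N, M, O, device='cuda', requires_grad=True)
centers = torch.rand(B, N, M, O, device='cuda', requires_grad=True)
knn_idx = torch.randint(0, N, (B, N, K), device='cuda')  # int64 neighbor indices

out = pa_func.assign_score_withk(scores, points, centers, knn_idx, aggregate='sum')
assert out.shape == (B, O, N)
out.sum().backward()  # exercises the custom backward kernels
```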
diff --git a/zoo/PAConv/obj_cls/cuda_lib/functions/__init__.py b/zoo/PAConv/obj_cls/cuda_lib/functions/__init__.py
new file mode 100644
index 0000000..df9d06f
--- /dev/null
+++ b/zoo/PAConv/obj_cls/cuda_lib/functions/__init__.py
@@ -0,0 +1 @@
+from .assignscore import *
\ No newline at end of file
diff --git a/zoo/PAConv/obj_cls/cuda_lib/functions/assignscore.py b/zoo/PAConv/obj_cls/cuda_lib/functions/assignscore.py
new file mode 100644
index 0000000..9116005
--- /dev/null
+++ b/zoo/PAConv/obj_cls/cuda_lib/functions/assignscore.py
@@ -0,0 +1,125 @@
+import torch
+from torch.autograd import Function
+
+from .. import src
+
+
+class AssignScoreWithKHalfKernel(Function):
+ @staticmethod
+    def forward(ctx, scores, points, knn_idx, aggregate):  # -> torch.Tensor
+ """
+ :param ctx
+ :param scores: (B, N, K, M)
+ :param points: (B, N, M, O)
+ :param knn_idx: (B, N, K)
+ :param aggregate:
+ :return: output: (B, O, N)
+ """
+
+ agg = {'sum': 0, 'avg': 1, 'max': 2}
+
+ B, N, M, O = points.size()
+ K = scores.size(2)
+
+ output = torch.zeros([B, O, N], dtype=points.dtype, device=points.device)
+ output = output.contiguous()
+
+ src.gpu.assign_score_withk_halfkernel_forward_cuda(B, N, M, K, O, agg[aggregate],
+ points.contiguous(), scores.contiguous(),
+ knn_idx.contiguous(), output)
+
+ ctx.save_for_backward(output, points, scores, knn_idx)
+ ctx.agg = agg[aggregate]
+
+ return output
+
+ @staticmethod
+ def backward(ctx, grad_out):
+ """
+
+ :param ctx:
+        :param grad_out: (B, O, N) tensor with gradients of outputs
+ :return: grad_scores: (B, N, K, M) tensor with gradients of scores
+ :return: grad_points: (B, N, M, O) tensor with gradients of point features
+ """
+ output, points, scores, knn_idx = ctx.saved_tensors
+
+ agg = ctx.agg
+
+ B, N, M, O = points.size()
+ K = scores.size(2)
+
+ grad_points = torch.zeros_like(points, dtype=points.dtype, device=points.device).contiguous()
+ grad_scores = torch.zeros_like(scores, dtype=scores.dtype, device=scores.device).contiguous()
+
+ src.gpu.assign_score_withk_halfkernel_backward_cuda(B, N, M, K, O, agg, grad_out.contiguous(),
+ points.contiguous(), scores.contiguous(), knn_idx.contiguous(),
+ grad_points, grad_scores)
+
+ return grad_scores, grad_points, None, None
+
+
+assign_score_withk_halfkernel = AssignScoreWithKHalfKernel.apply
+
+
+class AssignScoreWithK(Function):
+ @staticmethod
+    def forward(ctx, scores, points, centers, knn_idx, aggregate):  # -> torch.Tensor
+ """
+ :param ctx
+ :param scores: (B, N, K, M)
+ :param points: (B, N, M, O)
+ :param centers: (B, N, M, O)
+ :param knn_idx: (B, N, K)
+ :param aggregate:
+ :return: output: (B, O, N)
+ """
+
+ agg = {'sum': 0, 'avg': 1, 'max': 2}
+
+ B, N, M, O = points.size()
+ K = scores.size(2)
+
+ output = torch.zeros([B, O, N], dtype=points.dtype, device=points.device)
+ output = output.contiguous()
+
+ src.gpu.assign_score_withk_forward_cuda(B, N, M, K, O, agg[aggregate],
+ points.contiguous(), centers.contiguous(),
+ scores.contiguous(), knn_idx.contiguous(),
+ output)
+
+ ctx.save_for_backward(output, points, centers, scores, knn_idx)
+ ctx.agg = agg[aggregate]
+
+ return output
+
+ @staticmethod
+ def backward(ctx, grad_out):
+ """
+
+ :param ctx:
+        :param grad_out: (B, O, N) tensor with gradients of outputs
+ :return: grad_scores: (B, N, K, M) tensor with gradients of scores
+ :return: grad_points: (B, N, M, O) tensor with gradients of point features
+ :return: grad_centers: (B, N, M, O) tensor with gradients of center point features
+ """
+ output, points, centers, scores, knn_idx = ctx.saved_tensors
+
+ agg = ctx.agg
+
+ B, N, M, O = points.size()
+ K = scores.size(2)
+
+ grad_points = torch.zeros_like(points, dtype=points.dtype, device=points.device).contiguous()
+ grad_centers = torch.zeros_like(centers, dtype=points.dtype, device=points.device).contiguous()
+ grad_scores = torch.zeros_like(scores, dtype=scores.dtype, device=scores.device).contiguous()
+
+ src.gpu.assign_score_withk_backward_cuda(B, N, M, K, O, agg, grad_out.contiguous(),
+ points.contiguous(), centers.contiguous(),
+ scores.contiguous(), knn_idx.contiguous(),
+ grad_points, grad_centers, grad_scores)
+
+        return grad_scores, grad_points, grad_centers, None, None  # one gradient per forward input: (scores, points, centers, knn_idx, aggregate)
+
+
+assign_score_withk = AssignScoreWithK.apply
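The docstrings above fix the shape contract: for the `'sum'` mode, each output channel is a score-weighted sum of neighbor features minus the matching score-weighted center features. A pure-PyTorch reference (a slow sanity-check sketch, not part of the repo) makes the semantics explicit:

```python
import torch

def assign_score_withk_reference(scores, points, centers, knn_idx):
    """Pure-PyTorch mirror of AssignScoreWithK's 'sum' aggregation.

    scores: (B, N, K, M); points, centers: (B, N, M, O); knn_idx: (B, N, K)
    returns: (B, O, N)
    """
    B, N, M, O = points.size()
    K = scores.size(2)
    # gather neighbor features points[b, knn_idx[b, n, k], m, o] -> (B, N, K, M, O)
    idx = knn_idx.reshape(B, N * K, 1, 1).expand(B, N * K, M, O)
    neigh = points.gather(1, idx).view(B, N, K, M, O)
    out = torch.einsum('bnkm,bnkmo->bon', scores, neigh)             # neighbor term
    out = out - torch.einsum('bnkm,bnmo->bon', scores, centers)      # center term
    return out
```

On small random inputs this should match the CUDA op up to floating-point error, which is a convenient way to validate a fresh build.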
diff --git a/zoo/PAConv/obj_cls/cuda_lib/src/__init__.py b/zoo/PAConv/obj_cls/cuda_lib/src/__init__.py
new file mode 100644
index 0000000..1223d55
--- /dev/null
+++ b/zoo/PAConv/obj_cls/cuda_lib/src/__init__.py
@@ -0,0 +1,15 @@
+import os
+import torch
+from torch.utils.cpp_extension import load
+
+cwd = os.path.dirname(os.path.realpath(__file__))
+gpu_path = os.path.join(cwd, 'gpu')
+
+if torch.cuda.is_available():
+ gpu = load('gpconv_cuda', [
+ os.path.join(gpu_path, 'operator.cpp'),
+ os.path.join(gpu_path, 'assign_score_withk_gpu.cu'),
+ os.path.join(gpu_path, 'assign_score_withk_halfkernel_gpu.cu'),
+ ], build_directory=gpu_path, verbose=False)
+
+
diff --git a/zoo/PAConv/obj_cls/cuda_lib/src/gpu/assign_score_withk_gpu.cu b/zoo/PAConv/obj_cls/cuda_lib/src/gpu/assign_score_withk_gpu.cu
new file mode 100644
index 0000000..d4ed602
--- /dev/null
+++ b/zoo/PAConv/obj_cls/cuda_lib/src/gpu/assign_score_withk_gpu.cu
@@ -0,0 +1,220 @@
+#include <ATen/ATen.h>
+#include <ATen/cuda/CUDAContext.h>
+#include <cuda.h>
+#include <cuda_runtime.h>
+#include "cuda_utils.h"
+#include "utils.h"
+
+
+// input: points(B,N,M,O), centers(B,N,M,O), scores(B,N,K,M), idx(B,N,K)
+// output: fout(B,O,N)
+// algo: fout(b,i,k,j) = s(b,i,k,m)*p(b,i,k,m,j) = s(b,i,k,m)*p(b,i(k),m,j)
+// i(k) = idx(b,i,k)
+// sum: fout(b,i,j) = fout(b,i,j) + s(b,i,k,m)*p(b,i,k,m,j)
+// avg: fout(b,i,j) = sum(fout(b,i,k,j)) / k
+// max: fout(b,i,j) = max(fout(b,i,k,j), sum(s(b,i,k,m)*p(b,i,k,m,j)))
+// k,m : sequential
+// b,n: parallel
+
+const int SUM = 0;
+const int AVG = 1;
+const int MAX = 2;
+
+#ifndef _CLOCK_T_DEFINED
+typedef long clock_t;
+#define _CLOCK_T_DEFINED
+#endif
+
+__global__ void assign_score_withk_forward_kernel(const int nthreads, const int B, const int N, const int M,
+ const int K, const int O, const int aggregate,
+ const float* points,
+ const float* centers,
+ const float* scores,
+ const long* knn_idx,
+ float* output) {
+
+ // clock_t start, finish;
+ // start = clock();
+
+ // ----- parallel loop for B, N and O ---------
+ for (long i = blockIdx.x * blockDim.x + threadIdx.x; i < nthreads; i += blockDim.x * gridDim.x) {
+ // ----- loop for K ---------
+ for (int k = 0; k < K; k++) {
+ // ------- loop for M ----------
+ for (int m = 0; m < M; m++) {
+ int b = (int)(i / (O * N));
+ int n = (int)(i % (O * N) / O);
+ // int k = (int)(i % (O * K * M) / (O * M));
+ // int m = (int)(i % (O * M) / O);
+ int o = (int)(i % O);
+ int kn = (int) knn_idx[b*K*N + n*K + k];
+ assert (b < B);
+ assert (kn < N);
+ assert (o < O);
+ assert (n < N);
+
+ if (aggregate == SUM) {
+ // feature concat
+ // output[b*N*O + o*N + n] += 2 * points[b*N*M*O + kn*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m];
+ // output[b*N*O + o*N + n] -= points[b*N*M*O + n*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m];
+ atomicAdd(output + b*N*O + o*N + n,
+ points[b*N*M*O + kn*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m]
+ - centers[b*N*M*O + n*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m]);
+ }
+ else if (aggregate == AVG) {
+ output[o*N + n] += 2 * points[kn*M*O + m*O + o] * scores[n*K*M + k*M + m] / K;
+ output[o*N + n] -= points[n*M*O + m*O + o] * scores[n*K*M + k*M + m] / K;
+ }
+ else if (aggregate == MAX) {
+ /***
+ float tmp = points[i*K*M + k*M + m] * scores[((int)(i/O))*K*M + k*M + m];
+ output[i] = tmp > output[i] ? tmp: output[i];
+ ***/
+ }
+ }
+ }
+ }
+
+ // finish = clock();
+    // printf("assign score forward time: blockid %d, %f\n", batch_idx, (double)(finish - start)/10000.0);
+}
+
+
+__global__ void assign_score_withk_backward_points_kernel(const int nthreads, const int B, const int N, const int M,
+ const int K, const int O, const int aggregate,
+ const float* grad_out,
+ const float* scores,
+ const long* knn_idx,
+ float* grad_points,
+ float* grad_centers) {
+
+ // clock_t start, finish;
+ // start = clock();
+
+ // ----- parallel loop for M, O ---------
+ for (long i = blockIdx.x * blockDim.x + threadIdx.x; i < nthreads; i += blockDim.x * gridDim.x) {
+ int b = (int)(i / (M * O));
+ int m = (int)(i % (M * O) / O);
+ int o = (int)(i % O);
+
+ // ----- loop for N,K ---------
+ for (int n = 0; n < N; n++) {
+ for (int k = 0; k < K; k++) {
+ int kn = knn_idx[b*N*K + n*K + k];
+ atomicAdd(grad_points + b*N*M*O + kn*M*O + m*O + o,
+ scores[b*N*K*M + n*K*M + k*M + m] * grad_out[b*O*N + o*N + n]);
+ atomicAdd(grad_centers + b*N*M*O + n*M*O + m*O + o,
+ - scores[b*N*K*M + n*K*M + k*M + m] * grad_out[b*O*N + o*N + n]);
+ //grad_points[b*N*M*O + kn*M*O + m*O + o] += 2 * scores[b*N*K*M + n*K*M + k*M + m] * grad_out[b*O*N + o*N + n];
+ //grad_points[b*N*M*O + n*M*O + m*O + o] -= scores[b*N*K*M + n*K*M + k*M + m] * grad_out[b*O*N + o*N + n];
+ }
+ }
+ }
+ // finish = clock();
+    // printf("assign score backward time 1: blockid %d, %f\n", batch_idx, (double)(finish - start)/10000.0);
+
+}
+
+
+__global__ void assign_score_withk_backward_scores_kernel(const int nthreads, const int B, const int N, const int M,
+ const int K, const int O, const int aggregate,
+ const float* grad_out,
+ const float* points,
+ const float* centers,
+ const long* knn_idx,
+ float* grad_scores) {
+
+ // clock_t start, finish;
+ // start = clock();
+
+ // ----- parallel loop for N, K, M ---------
+ for (long i = blockIdx.x * blockDim.x + threadIdx.x; i < nthreads; i += blockDim.x * gridDim.x) {
+ // for (int i = index; i < N*K*M; i += stride) {
+ int b = (int)(i / (N * M * K));
+ int n = (int)(i % (N * M * K) / M / K);
+ int k = (int)(i % (M * K) / M);
+ int m = (int)(i % M);
+ int kn = knn_idx[b*N*K + n*K + k];
+
+ for(int o = 0; o < O; o++) {
+ atomicAdd(grad_scores + b*N*K*M + n*K*M + k*M + m,
+ (points[b*N*M*O + kn*M*O + m*O + o]
+ - centers[b*N*M*O + n*M*O + m*O + o])* grad_out[b*O*N + o*N + n]);
+ // grad_scores[b*N*K*M + n*K*M + k*M + m] += (2 * points[b*N*M*O + kn*M*O + m*O + o] - points[b*N*M*O + n*M*O + m*O + o])* grad_out[b*O*N + o*N + n];
+ }
+ }
+
+ // finish = clock();
+    // printf("assign score backward time 2: blockid %d, %f\n", batch_idx, (double)(finish - start)/10000.0);
+}
+
+
+void assign_score_withk_forward_kernel_wrapper(int B, int N, int M, int K, int O, int aggregate,
+ const at::Tensor& points,
+ const at::Tensor& centers,
+ const at::Tensor& scores,
+ const at::Tensor& knn_idx,
+ at::Tensor& output) {
+ CHECK_CONTIGUOUS(points);
+ CHECK_CONTIGUOUS(centers);
+ CHECK_CONTIGUOUS(scores);
+ CHECK_CONTIGUOUS(knn_idx);
+ CHECK_CONTIGUOUS(output);
+
+    const float* points_data = points.data_ptr<float>();
+    const float* centers_data = centers.data_ptr<float>();
+    const float* scores_data = scores.data_ptr<float>();
+    const long* knn_idx_data = knn_idx.data_ptr<long>();
+    float* output_data = output.data_ptr<float>();
+
+ int nthreads = B * N * O; // * K * M;
+
+    assign_score_withk_forward_kernel<<<(nthreads + 511) / 512, 512>>>(
+ nthreads, B, N, M, K, O, aggregate, points_data, centers_data, scores_data, knn_idx_data, output_data);
+
+ CUDA_CHECK_ERRORS();
+
+}
+
+
+void assign_score_withk_backward_kernel_wrapper(int B, int N, int M, int K, int O, int aggregate,
+ const at::Tensor& grad_out,
+ const at::Tensor& points,
+ const at::Tensor& centers,
+ const at::Tensor& scores,
+ const at::Tensor& knn_idx,
+ at::Tensor& grad_points,
+ at::Tensor& grad_centers,
+ at::Tensor& grad_scores) {
+
+ CHECK_CONTIGUOUS(grad_out);
+ CHECK_CONTIGUOUS(scores);
+ CHECK_CONTIGUOUS(points);
+ CHECK_CONTIGUOUS(centers);
+ CHECK_CONTIGUOUS(knn_idx);
+ CHECK_CONTIGUOUS(grad_scores);
+ CHECK_CONTIGUOUS(grad_points);
+ CHECK_CONTIGUOUS(grad_centers);
+
+    const float* grad_out_data = grad_out.data_ptr<float>();
+    const float* points_data = points.data_ptr<float>();
+    const float* centers_data = centers.data_ptr<float>();
+    const float* scores_data = scores.data_ptr<float>();
+    const long* knn_idx_data = knn_idx.data_ptr<long>();
+    float* grad_points_data = grad_points.data_ptr<float>();
+    float* grad_centers_data = grad_centers.data_ptr<float>();
+    float* grad_scores_data = grad_scores.data_ptr<float>();
+
+ cudaStream_t stream = at::cuda::getCurrentCUDAStream();
+
+ int nthreads_1 = B * M * O;
+ int nthreads_2 = B * N * K * M;
+
+    assign_score_withk_backward_points_kernel<<<(nthreads_1 + 511) / 512, 512, 0, stream>>>(
+ nthreads_1, B, N, M, K, O, aggregate, grad_out_data, scores_data, knn_idx_data, grad_points_data, grad_centers_data);
+    assign_score_withk_backward_scores_kernel<<<(nthreads_2 + 511) / 512, 512, 0, stream>>>(
+ nthreads_2, B, N, M, K, O, aggregate, grad_out_data, points_data, centers_data, knn_idx_data, grad_scores_data);
+
+ CUDA_CHECK_ERRORS();
+
+}
diff --git a/zoo/PAConv/obj_cls/cuda_lib/src/gpu/assign_score_withk_halfkernel_gpu.cu b/zoo/PAConv/obj_cls/cuda_lib/src/gpu/assign_score_withk_halfkernel_gpu.cu
new file mode 100644
index 0000000..839eb11
--- /dev/null
+++ b/zoo/PAConv/obj_cls/cuda_lib/src/gpu/assign_score_withk_halfkernel_gpu.cu
@@ -0,0 +1,213 @@
+#include <ATen/ATen.h>
+#include <ATen/cuda/CUDAContext.h>
+#include <cuda.h>
+#include <cuda_runtime.h>
+#include "cuda_utils.h"
+#include "utils.h"
+
+
+// input: points(B,N,M,O), scores(B,N,K,M), idx(B,N,K)
+// output: fout(B,O,N)
+// algo: fout(b,i,k,j) = s(b,i,k,m)*p(b,i,k,m,j) = s(b,i,k,m)*p(b,i(k),m,j)
+// i(k) = idx(b,i,k)
+// sum: fout(b,i,j) = fout(b,i,j) + s(b,i,k,m)*p(b,i,k,m,j)
+// avg: fout(b,i,j) = sum(fout(b,i,k,j)) / k
+// max: fout(b,i,j) = max(fout(b,i,k,j), sum(s(b,i,k,m)*p(b,i,k,m,j)))
+// k,m : sequential
+// b,n: parallel
+
+const int SUM = 0;
+const int AVG = 1;
+const int MAX = 2;
+
+#ifndef _CLOCK_T_DEFINED
+typedef long clock_t;
+#define _CLOCK_T_DEFINED
+#endif
+
+__global__ void assign_score_withk_halfkernel_forward_kernel(const int nthreads, const int B, const int N, const int M,
+ const int K, const int O, const int aggregate,
+ const float* points,
+ const float* scores,
+ const long* knn_idx,
+ float* output) {
+
+ // clock_t start, finish;
+    // start = clock();
+
+ // ----- parallel loop for B, N and O ---------
+ for (long i = blockIdx.x * blockDim.x + threadIdx.x; i < nthreads; i += blockDim.x * gridDim.x) {
+ // ----- loop for K ---------
+ for (int k = 0; k < K; k++) {
+ int b = (int)(i / (O * N));
+ int n = (int)(i % (O * N) / O);
+ int o = (int)(i % O);
+ float tmp = 0;
+ // ------- loop for M ----------
+ for (int m = 0; m < M; m++) {
+ int kn = (int) knn_idx[b*K*N + n*K + k];
+ assert (kn < N);
+ assert (o < O);
+ assert (n < N);
+
+ if (aggregate == SUM) {
+ // feature concat
+ // output[b*N*O + o*N + n] += 2 * points[b*N*M*O + kn*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m];
+ // output[b*N*O + o*N + n] -= points[b*N*M*O + n*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m];
+ atomicAdd(output + b*N*O + o*N + n,
+ 2 * points[b*N*M*O + kn*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m]
+ - points[b*N*M*O + n*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m]);
+ }
+ else if (aggregate == AVG) {
+ atomicAdd(output + b*N*O + o*N + n,
+ (2 * points[b*N*M*O + kn*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m]
+ - points[b*N*M*O + n*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m]) / K);
+ }
+ else if (aggregate == MAX) {
+ atomicAdd(&tmp,
+ 2 * points[b*N*M*O + kn*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m]
+ - points[b*N*M*O + n*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m]);
+ }
+ }
+
+ if (aggregate == MAX) {
+ output[b*N*O + o*N + n] = output[b*N*O + o*N + n] > tmp ? output[b*N*O + o*N + n] : tmp;
+ }
+ }
+ }
+
+ // finish = clock();
+    // printf("assign score forward time: blockid %d, %f\n", batch_idx, (double)(finish - start)/10000.0);
+}
+
+
+__global__ void assign_score_withk_halfkernel_backward_points_kernel(const int nthreads, const int B, const int N, const int M,
+ const int K, const int O, const int aggregate,
+ const float* grad_out,
+ const float* points,
+ const float* scores,
+ const long* knn_idx,
+ float* grad_points) {
+
+ // clock_t start, finish;
+ // start = clock();
+
+ // ----- parallel loop for M, O ---------
+ for (long i = blockIdx.x * blockDim.x + threadIdx.x; i < nthreads; i += blockDim.x * gridDim.x) {
+ int b = (int)(i / (M * O));
+ int m = (int)(i % (M * O) / O);
+ int o = (int)(i % O);
+
+ // ----- loop for N,K ---------
+ for (int n = 0; n < N; n++) {
+ for (int k = 0; k < K; k++) {
+ int kn = knn_idx[b*N*K + n*K + k];
+ atomicAdd(grad_points + b*N*M*O + kn*M*O + m*O + o,
+ 2 * scores[b*N*K*M + n*K*M + k*M + m] * grad_out[b*O*N + o*N + n]);
+ atomicAdd(grad_points + b*N*M*O + n*M*O + m*O + o,
+ - scores[b*N*K*M + n*K*M + k*M + m] * grad_out[b*O*N + o*N + n]);
+ // grad_points[b*N*M*O + kn*M*O + m*O + o] += 2 * scores[b*N*K*M + n*K*M + k*M + m] * grad_out[b*O*N + o*N + n];
+ // grad_points[b*N*M*O + n*M*O + m*O + o] -= scores[b*N*K*M + n*K*M + k*M + m] * grad_out[b*O*N + o*N + n];
+ }
+ }
+
+ }
+ // finish = clock();
+    // printf("assign score backward time 1: blockid %d, %f\n", batch_idx, (double)(finish - start)/10000.0);
+
+}
+
+
+__global__ void assign_score_withk_halfkernel_backward_scores_kernel(const int nthreads, const int B, const int N, const int M,
+ const int K, const int O, const int aggregate,
+ const float* grad_out,
+ const float* points,
+ const float* scores,
+ const long* knn_idx,
+ float* grad_scores) {
+
+ // clock_t start, finish;
+ // start = clock();
+
+ // ----- parallel loop for N, K, M ---------
+ for (long i = blockIdx.x * blockDim.x + threadIdx.x; i < nthreads; i += blockDim.x * gridDim.x) {
+ // for (int i = index; i < N*K*M; i += stride) {
+ int b = (int)(i / (N * M * K));
+ int n = (int)(i % (N * M * K) / M / K);
+ int k = (int)(i % (M * K) / M);
+ int m = (int)(i % M);
+ int kn = knn_idx[b*N*K + n*K + k];
+
+ for(int o = 0; o < O; o++) {
+ atomicAdd(grad_scores + b*N*K*M + n*K*M + k*M + m,
+ (2 * points[b*N*M*O + kn*M*O + m*O + o]
+ - points[b*N*M*O + n*M*O + m*O + o])* grad_out[b*O*N + o*N + n]);
+ // grad_scores[b*N*K*M + n*K*M + k*M + m] += (2 * points[b*N*M*O + kn*M*O + m*O + o] - points[b*N*M*O + n*M*O + m*O + o])* grad_out[b*O*N + o*N + n];
+ }
+ }
+
+ // finish = clock();
+    // printf("assign score backward time 2: blockid %d, %f\n", batch_idx, (double)(finish - start)/10000.0);
+}
+
+
+void assign_score_withk_halfkernel_forward_kernel_wrapper(int B, int N, int M, int K, int O, int aggregate,
+ const at::Tensor& points,
+ const at::Tensor& scores,
+ const at::Tensor& knn_idx,
+ at::Tensor& output) {
+ CHECK_CONTIGUOUS(points);
+ CHECK_CONTIGUOUS(scores);
+ CHECK_CONTIGUOUS(knn_idx);
+ CHECK_CONTIGUOUS(output);
+
+    const float* points_data = points.data_ptr<float>();
+    const float* scores_data = scores.data_ptr<float>();
+    const long* knn_idx_data = knn_idx.data_ptr<long>();
+    float* output_data = output.data_ptr<float>();
+
+ int nthreads = B * N * O; // * K * M;
+
+    assign_score_withk_halfkernel_forward_kernel<<<(nthreads + 511) / 512, 512>>>(
+ nthreads, B, N, M, K, O, aggregate, points_data, scores_data, knn_idx_data, output_data);
+
+ CUDA_CHECK_ERRORS();
+
+}
+
+
+void assign_score_withk_halfkernel_backward_kernel_wrapper(int B, int N, int M, int K, int O, int aggregate,
+ const at::Tensor& grad_out,
+ const at::Tensor& points,
+ const at::Tensor& scores,
+ const at::Tensor& knn_idx,
+ at::Tensor& grad_points,
+ at::Tensor& grad_scores) {
+
+ CHECK_CONTIGUOUS(grad_out);
+ CHECK_CONTIGUOUS(scores);
+ CHECK_CONTIGUOUS(points);
+ CHECK_CONTIGUOUS(knn_idx);
+ CHECK_CONTIGUOUS(grad_scores);
+ CHECK_CONTIGUOUS(grad_points);
+
+    const float* grad_out_data = grad_out.data_ptr<float>();
+    const float* points_data = points.data_ptr<float>();
+    const float* scores_data = scores.data_ptr<float>();
+    const long* knn_idx_data = knn_idx.data_ptr<long>();
+    float* grad_points_data = grad_points.data_ptr<float>();
+    float* grad_scores_data = grad_scores.data_ptr<float>();
+
+ cudaStream_t stream = at::cuda::getCurrentCUDAStream();
+
+ int nthreads_1 = B * M * O;
+ int nthreads_2 = B * N * K * M;
+
+    assign_score_withk_halfkernel_backward_points_kernel<<<(nthreads_1 + 511) / 512, 512, 0, stream>>>(
+ nthreads_1, B, N, M, K, O, aggregate, grad_out_data, points_data, scores_data, knn_idx_data, grad_points_data);
+    assign_score_withk_halfkernel_backward_scores_kernel<<<(nthreads_2 + 511) / 512, 512, 0, stream>>>(
+ nthreads_2, B, N, M, K, O, aggregate, grad_out_data, points_data, scores_data, knn_idx_data, grad_scores_data);
+
+ CUDA_CHECK_ERRORS();
+
+}
diff --git a/zoo/PAConv/obj_cls/cuda_lib/src/gpu/cuda_utils.h b/zoo/PAConv/obj_cls/cuda_lib/src/gpu/cuda_utils.h
new file mode 100644
index 0000000..dbce9e0
--- /dev/null
+++ b/zoo/PAConv/obj_cls/cuda_lib/src/gpu/cuda_utils.h
@@ -0,0 +1,41 @@
+#ifndef _CUDA_UTILS_H
+#define _CUDA_UTILS_H
+
+#include <ATen/ATen.h>
+#include <ATen/cuda/CUDAContext.h>
+#include <cmath>
+
+#include <cuda.h>
+#include <cuda_runtime.h>
+
+#include <vector>
+
+#define TOTAL_THREADS 512
+
+inline int opt_n_threads(int work_size) {
+  const int pow_2 = std::log(static_cast<double>(work_size)) / std::log(2.0);
+
+ return max(min(1 << pow_2, TOTAL_THREADS), 1);
+}
+
+inline dim3 opt_block_config(int x, int y) {
+ const int x_threads = opt_n_threads(x);
+ const int y_threads =
+ max(min(opt_n_threads(y), TOTAL_THREADS / x_threads), 1);
+ dim3 block_config(x_threads, y_threads, 1);
+
+ return block_config;
+}
+
+#define CUDA_CHECK_ERRORS() \
+ do { \
+ cudaError_t err = cudaGetLastError(); \
+ if (cudaSuccess != err) { \
+ fprintf(stderr, "CUDA kernel failed : %s\n%s at L:%d in %s\n", \
+ cudaGetErrorString(err), __PRETTY_FUNCTION__, __LINE__, \
+ __FILE__); \
+ exit(-1); \
+ } \
+ } while (0)
+
+#endif
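`opt_n_threads` above picks the largest power of two that does not exceed the work size, clamped to `TOTAL_THREADS`. A quick Python mirror of the same arithmetic (illustrative only):

```python
import math

TOTAL_THREADS = 512  # mirrors the C++ constant

def opt_n_threads(work_size: int) -> int:
    """Largest power of two <= work_size, clamped to [1, TOTAL_THREADS]."""
    pow_2 = int(math.log(work_size) / math.log(2.0))  # truncates like the C++ int cast
    return max(min(1 << pow_2, TOTAL_THREADS), 1)

assert opt_n_threads(100) == 64
assert opt_n_threads(4096) == 512
assert opt_n_threads(1) == 1
```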
diff --git a/zoo/PAConv/obj_cls/cuda_lib/src/gpu/operator.cpp b/zoo/PAConv/obj_cls/cuda_lib/src/gpu/operator.cpp
new file mode 100644
index 0000000..4232e50
--- /dev/null
+++ b/zoo/PAConv/obj_cls/cuda_lib/src/gpu/operator.cpp
@@ -0,0 +1,8 @@
+#include "operator.h"
+
+PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
+ m.def("assign_score_withk_forward_cuda", &assign_score_withk_forward_kernel_wrapper, "Assign score kernel forward (GPU), save memory version");
+ m.def("assign_score_withk_backward_cuda", &assign_score_withk_backward_kernel_wrapper, "Assign score kernel backward (GPU), save memory version");
+ m.def("assign_score_withk_halfkernel_forward_cuda", &assign_score_withk_halfkernel_forward_kernel_wrapper, "Assign score kernel forward (GPU) with half kernel, save memory version");
+ m.def("assign_score_withk_halfkernel_backward_cuda", &assign_score_withk_halfkernel_backward_kernel_wrapper, "Assign score kernel backward (GPU) with half kernel, save memory version");
+}
\ No newline at end of file
diff --git a/zoo/PAConv/obj_cls/cuda_lib/src/gpu/operator.h b/zoo/PAConv/obj_cls/cuda_lib/src/gpu/operator.h
new file mode 100644
index 0000000..2784337
--- /dev/null
+++ b/zoo/PAConv/obj_cls/cuda_lib/src/gpu/operator.h
@@ -0,0 +1,42 @@
+//
+// Created by Runyu Ding on 2020/8/12.
+//
+
+#ifndef _OPERATOR_H
+#define _OPERATOR_H
+
+#include <torch/extension.h>
+
+void assign_score_withk_halfkernel_forward_kernel_wrapper(int B, int N, int M, int K, int O, int aggregate,
+ const at::Tensor& points,
+ const at::Tensor& scores,
+ const at::Tensor& knn_idx,
+ at::Tensor& output);
+
+void assign_score_withk_halfkernel_backward_kernel_wrapper(int B, int N, int M, int K, int O, int aggregate,
+ const at::Tensor& grad_out,
+ const at::Tensor& points,
+ const at::Tensor& scores,
+ const at::Tensor& knn_idx,
+ at::Tensor& grad_points,
+ at::Tensor& grad_scores);
+
+void assign_score_withk_forward_kernel_wrapper(int B, int N, int M, int K, int O, int aggregate,
+ const at::Tensor& points,
+ const at::Tensor& centers,
+ const at::Tensor& scores,
+ const at::Tensor& knn_idx,
+ at::Tensor& output);
+
+void assign_score_withk_backward_kernel_wrapper(int B, int N, int M, int K, int O, int aggregate,
+ const at::Tensor& grad_out,
+ const at::Tensor& points,
+ const at::Tensor& centers,
+ const at::Tensor& scores,
+ const at::Tensor& knn_idx,
+ at::Tensor& grad_points,
+ at::Tensor& grad_centers,
+ at::Tensor& grad_scores);
+
+
+#endif
\ No newline at end of file
diff --git a/zoo/PAConv/obj_cls/cuda_lib/src/gpu/utils.h b/zoo/PAConv/obj_cls/cuda_lib/src/gpu/utils.h
new file mode 100644
index 0000000..5f080ed
--- /dev/null
+++ b/zoo/PAConv/obj_cls/cuda_lib/src/gpu/utils.h
@@ -0,0 +1,25 @@
+#pragma once
+#include <ATen/cuda/CUDAContext.h>
+#include <torch/extension.h>
+
+#define CHECK_CUDA(x) \
+ do { \
+ AT_ASSERT(x.is_cuda(), #x " must be a CUDA tensor"); \
+ } while (0)
+
+#define CHECK_CONTIGUOUS(x) \
+ do { \
+ AT_ASSERT(x.is_contiguous(), #x " must be a contiguous tensor"); \
+ } while (0)
+
+#define CHECK_IS_INT(x) \
+ do { \
+ AT_ASSERT(x.scalar_type() == at::ScalarType::Int, \
+ #x " must be an int tensor"); \
+ } while (0)
+
+#define CHECK_IS_FLOAT(x) \
+ do { \
+ AT_ASSERT(x.scalar_type() == at::ScalarType::Float, \
+ #x " must be a float tensor"); \
+ } while (0)
diff --git a/zoo/PAConv/obj_cls/eval_voting.py b/zoo/PAConv/obj_cls/eval_voting.py
new file mode 100755
index 0000000..68cfe24
--- /dev/null
+++ b/zoo/PAConv/obj_cls/eval_voting.py
@@ -0,0 +1,132 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from util.data_util import ModelNet40 as ModelNet40
+import numpy as np
+from torch.utils.data import DataLoader
+from util.util import IOStream, load_cfg_from_cfg_file, merge_cfg_from_list
+import sklearn.metrics as metrics
+import random
+
+
+def get_parser():
+ parser = argparse.ArgumentParser(description='3D Object Classification')
+ parser.add_argument('--config', type=str, default='config/dgcnn_paconv_train.yaml', help='config file')
+ parser.add_argument('opts', help='see config/dgcnn_paconv_train.yaml for all options', default=None, nargs=argparse.REMAINDER)
+ args = parser.parse_args()
+ assert args.config is not None
+ cfg = load_cfg_from_cfg_file(args.config)
+ if args.opts is not None:
+ cfg = merge_cfg_from_list(cfg, args.opts)
+
+ cfg['manual_seed'] = cfg.get('manual_seed', 0)
+ cfg['workers'] = cfg.get('workers', 6)
+ return cfg
+
+
+def _init_():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/'+args.exp_name):
+ os.makedirs('checkpoints/'+args.exp_name)
+
+ # backup the running files:
+ os.system('cp eval_voting.py checkpoints' + '/' + args.exp_name + '/' + 'eval_voting.py.backup')
+
+
+class PointcloudScale(object): # input random scaling
+ def __init__(self, scale_low=2. / 3., scale_high=3. / 2.):
+ self.scale_low = scale_low
+ self.scale_high = scale_high
+
+ def __call__(self, pc):
+ bsize = pc.size()[0]
+ for i in range(bsize):
+ xyz = np.random.uniform(low=self.scale_low, high=self.scale_high, size=[3])
+ scales = torch.from_numpy(xyz).float().cuda()
+ pc[i, :, 0:3] = torch.mul(pc[i, :, 0:3], scales)
+ return pc
+
+
+def test(args, io):
+ test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points, pt_norm=False), num_workers=args.workers,
+ batch_size=args.test_batch_size, shuffle=False, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+    NUM_REPEAT = 300
+ NUM_VOTE = 10
+
+ # Try to load models:
+ if args.arch == 'dgcnn':
+ from model.DGCNN_PAConv import PAConv
+ model = PAConv(args).to(device)
+ elif args.arch == 'pointnet':
+ from model.PointNet_PAConv import PAConv
+ model = PAConv(args).to(device)
+ else:
+ raise Exception("Not implemented")
+
+ model = nn.DataParallel(model)
+ model.load_state_dict(torch.load("checkpoints/%s/best_model.t7" % args.exp_name))
+ model = model.eval()
+ best_acc = 0
+
+ pointscale = PointcloudScale(scale_low=0.8, scale_high=1.18) # set the range of scaling
+
+    for i in range(NUM_REPEAT):
+ test_true = []
+ test_pred = []
+
+ for data, label in test_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ pred = 0
+ for v in range(NUM_VOTE):
+ new_data = data
+ batch_size = data.size()[0]
+ if v > 0:
+ new_data.data = pointscale(new_data.data)
+ with torch.no_grad():
+ pred += F.softmax(model(new_data.permute(0, 2, 1)), dim=1) # sum 10 preds
+ pred /= NUM_VOTE # avg the preds!
+ label = label.view(-1)
+ pred_choice = pred.max(dim=1)[1]
+ test_true.append(label.cpu().numpy())
+ test_pred.append(pred_choice.detach().cpu().numpy())
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ if test_acc > best_acc:
+ best_acc = test_acc
+ outstr = 'Voting %d, test acc: %.6f,' % (i, test_acc * 100)
+ io.cprint(outstr)
+
+ final_outstr = 'Final voting test acc: %.6f,' % (best_acc * 100)
+ io.cprint(final_outstr)
+
+
+if __name__ == "__main__":
+ args = get_parser()
+ _init_()
+
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_voting.log' % (args.exp_name))
+ io.cprint(str(args))
+
+ if args.manual_seed is not None:
+ random.seed(args.manual_seed)
+ np.random.seed(args.manual_seed)
+ torch.manual_seed(args.manual_seed)
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+
+ if args.cuda:
+ io.cprint('Using GPU')
+ if args.manual_seed is not None:
+ torch.cuda.manual_seed(args.manual_seed)
+ torch.cuda.manual_seed_all(args.manual_seed)
+ else:
+ io.cprint('Using CPU')
+
+ test(args, io)
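The script above implements test-time voting: predictions are softmax-averaged over `NUM_VOTE` randomly rescaled copies of each batch, and the best accuracy over repeated rounds is reported. Distilled to its core (a simplified sketch that scales the whole batch at once, whereas `PointcloudScale` draws per-sample, per-axis factors):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def vote_predict(model, data, num_vote=10, scale=(0.8, 1.18)):
    """Average softmax predictions over randomly scaled copies of `data`.

    data: (B, N, 3) point clouds; the first pass uses the unscaled input,
    as in eval_voting.py.
    """
    pred = 0
    for v in range(num_vote):
        x = data
        if v > 0:
            factors = torch.empty(1, 1, 3, device=data.device).uniform_(*scale)
            x = data * factors
        pred = pred + F.softmax(model(x.permute(0, 2, 1)), dim=1)
    return (pred / num_vote).argmax(dim=1)
```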
diff --git a/zoo/PAConv/obj_cls/main.py b/zoo/PAConv/obj_cls/main.py
new file mode 100755
index 0000000..442a777
--- /dev/null
+++ b/zoo/PAConv/obj_cls/main.py
@@ -0,0 +1,288 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.nn as nn
+import torch.optim as optim
+from torch.optim.lr_scheduler import CosineAnnealingLR
+from util.data_util import ModelNet40 as ModelNet40
+import numpy as np
+from torch.utils.data import DataLoader
+from util.util import cal_loss, IOStream, load_cfg_from_cfg_file, merge_cfg_from_list
+import sklearn.metrics as metrics
+from tensorboardX import SummaryWriter
+import random
+from modelnetc_utils import eval_corrupt_wrapper, ModelNetC
+
+def get_parser():
+ parser = argparse.ArgumentParser(description='3D Object Classification')
+ parser.add_argument('--config', type=str, default='config/dgcnn_paconv.yaml', help='config file')
+ parser.add_argument('--model_path', type=str, default='', help='path to pretrained model')
+ parser.add_argument('--eval_corrupt', type=bool, default=False, help='if test under corruption')
+ parser.add_argument('opts', help='see config/dgcnn_paconv.yaml for all options', default=None, nargs=argparse.REMAINDER)
+ args = parser.parse_args()
+ assert args.config is not None
+ cfg = load_cfg_from_cfg_file(args.config)
+ if args.opts is not None:
+ cfg = merge_cfg_from_list(cfg, args.opts)
+
+ cfg['manual_seed'] = cfg.get('manual_seed', 0)
+ cfg['workers'] = cfg.get('workers', 6)
+ cfg['eval'] = cfg.get('eval', False) and not args.eval_corrupt
+ cfg['eval_corrupt'] = args.eval_corrupt
+ cfg['model_path'] = args.model_path
+ return cfg
+
+
+def _init_():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/'+args.exp_name):
+ os.makedirs('checkpoints/'+args.exp_name)
+
+ if not args.eval and not args.eval_corrupt: # backup the running files
+ os.system('cp main.py checkpoints' + '/' + args.exp_name + '/' + 'main.py.backup')
+ os.system('cp util/PAConv_util.py checkpoints' + '/' + args.exp_name + '/' + 'PAConv_util.py.backup')
+ os.system('cp util/data_util.py checkpoints' + '/' + args.exp_name + '/' + 'data_util.py.backup')
+ if args.arch == 'dgcnn':
+ os.system('cp model/DGCNN_PAConv.py checkpoints' + '/' + args.exp_name + '/' + 'DGCNN_PAConv.py.backup')
+ elif args.arch == 'pointnet':
+ os.system('cp model/PointNet_PAConv.py checkpoints' + '/' + args.exp_name + '/' + 'PointNet_PAConv.py.backup')
+
+ global writer
+ writer = SummaryWriter('checkpoints/' + args.exp_name)
+
+
+# weight initialization:
+def weight_init(m):
+ if isinstance(m, torch.nn.Linear):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv2d):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv1d):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm2d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm1d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+
+
+def train(args, io):
+ train_loader = DataLoader(ModelNet40(partition='train', num_points=args.num_points, pt_norm=args.pt_norm),
+ num_workers=args.workers, batch_size=args.batch_size, shuffle=True, drop_last=True)
+ test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points, pt_norm=False),
+ num_workers=args.workers, batch_size=args.test_batch_size, shuffle=False, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ if args.arch == 'dgcnn':
+ from model.DGCNN_PAConv import PAConv
+ model = PAConv(args).to(device)
+ elif args.arch == 'pointnet':
+ from model.PointNet_PAConv import PAConv
+ model = PAConv(args).to(device)
+ else:
+ raise Exception("Not implemented")
+
+ io.cprint(str(model))
+
+ model.apply(weight_init)
+ model = nn.DataParallel(model)
+ print("Let's use", torch.cuda.device_count(), "GPUs!")
+
+ print("Use SGD")
+ opt = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum, weight_decay=1e-4)
+ scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=args.lr/100)
+
+ criterion = cal_loss
+
+ best_test_acc = 0
+
+ for epoch in range(args.epochs):
+ scheduler.step()
+ ####################
+ # Train
+ ####################
+ train_loss = 0.0
+ count = 0.0
+ model.train()
+ train_pred = []
+ train_true = []
+ for data, label in train_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ opt.zero_grad()
+ logits = model(data)
+ loss = criterion(logits, label)
+ loss.backward()
+ opt.step()
+ preds = logits.max(dim=1)[1]
+ count += batch_size
+ train_loss += loss.item() * batch_size
+ train_true.append(label.cpu().numpy())
+ train_pred.append(preds.detach().cpu().numpy())
+ train_true = np.concatenate(train_true)
+ train_pred = np.concatenate(train_pred)
+ train_acc = metrics.accuracy_score(train_true, train_pred)
+ outstr = 'Train %d, loss: %.6f, train acc: %.6f, ' % (epoch, train_loss * 1.0 / count, train_acc)
+ io.cprint(outstr)
+
+ writer.add_scalar('loss_train', train_loss * 1.0 / count, epoch + 1)
+ writer.add_scalar('Acc_train', train_acc, epoch + 1)
+
+ ####################
+ # Test
+ ####################
+ test_loss = 0.0
+ count = 0.0
+ model.eval()
+ test_pred = []
+ test_true = []
+ for data, label in test_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ logits = model(data)
+ loss = criterion(logits, label)
+ preds = logits.max(dim=1)[1]
+ count += batch_size
+ test_loss += loss.item() * batch_size
+ test_true.append(label.cpu().numpy())
+ test_pred.append(preds.detach().cpu().numpy())
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ outstr = 'Test %d, loss: %.6f, test acc: %.6f,' % (epoch, test_loss * 1.0 / count, test_acc)
+ io.cprint(outstr)
+
+ writer.add_scalar('loss_test', test_loss * 1.0 / count, epoch + 1)
+ writer.add_scalar('Acc_test', test_acc, epoch + 1)
+
+ if test_acc >= best_test_acc:
+ best_test_acc = test_acc
+ io.cprint('Max Acc:%.6f' % best_test_acc)
+ torch.save(model.state_dict(), 'checkpoints/%s/best_model.t7' % args.exp_name)
+
+
+def test(args, io):
+ test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points, pt_norm=False),
+ batch_size=args.test_batch_size, shuffle=False, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ # Try to load models:
+ if args.arch == 'dgcnn':
+ from model.DGCNN_PAConv import PAConv
+ model = PAConv(args).to(device)
+ elif args.arch == 'pointnet':
+ from model.PointNet_PAConv import PAConv
+ model = PAConv(args).to(device)
+ else:
+ raise Exception("Not implemented")
+
+ io.cprint(str(model))
+
+ model = nn.DataParallel(model)
+ model.load_state_dict(torch.load("checkpoints/%s/best_model.t7" % args.exp_name))
+ model = model.eval()
+ test_acc = 0.0
+ count = 0.0
+ test_true = []
+ test_pred = []
+ for data, label in test_loader:
+
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ with torch.no_grad():
+ logits = model(data)
+ preds = logits.max(dim=1)[1]
+ test_true.append(label.cpu().numpy())
+ test_pred.append(preds.detach().cpu().numpy())
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)
+ outstr = 'Test :: test acc: %.6f, test avg acc: %.6f' % (test_acc, avg_per_class_acc)
+ io.cprint(outstr)
+
+
+if __name__ == "__main__":
+ args = get_parser()
+ _init_()
+
+ if args.eval_corrupt or args.eval:
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_test.log' % (args.exp_name))
+ else:
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_train.log' % (args.exp_name))
+
+ io.cprint(str(args))
+
+ if args.manual_seed is not None:
+ random.seed(args.manual_seed)
+ np.random.seed(args.manual_seed)
+ torch.manual_seed(args.manual_seed)
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+
+ if args.cuda:
+ io.cprint('Using GPU')
+ if args.manual_seed is not None:
+ torch.cuda.manual_seed(args.manual_seed)
+ torch.cuda.manual_seed_all(args.manual_seed)
+ else:
+ io.cprint('Using CPU')
+
+ if not args.eval and not args.eval_corrupt:
+ train(args, io)
+ elif args.eval:
+ with torch.no_grad():
+ test(args, io)
+ elif args.eval_corrupt:
+ with torch.no_grad():
+ device = torch.device("cuda" if args.cuda else "cpu")
+ # Try to load models:
+ if args.arch == 'dgcnn':
+ from model.DGCNN_PAConv import PAConv
+ model = PAConv(args).to(device)
+ elif args.arch == 'pointnet':
+ from model.PointNet_PAConv import PAConv
+ model = PAConv(args).to(device)
+ else:
+ raise Exception("Not implemented")
+ model = nn.DataParallel(model)
+ if args.model_path == '':
+ model.load_state_dict(torch.load("checkpoints/%s/best_model.t7" % args.exp_name))
+ else:
+ model.load_state_dict(torch.load(args.model_path))
+ model = model.eval()
+
+ def test_corrupt(args, split, model):
+ test_loader = DataLoader(ModelNetC(split=split),
+ batch_size=args.test_batch_size, shuffle=True, drop_last=False)
+ test_true = []
+ test_pred = []
+ for data, label in test_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ logits = model(data)
+ preds = logits.max(dim=1)[1]
+ test_true.append(label.cpu().numpy())
+ test_pred.append(preds.detach().cpu().numpy())
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)
+ return {'acc': test_acc, 'avg_per_class_acc': avg_per_class_acc}
+
+
+ eval_corrupt_wrapper(model, test_corrupt, {'args': args})
diff --git a/zoo/PAConv/obj_cls/main_ddp.py b/zoo/PAConv/obj_cls/main_ddp.py
new file mode 100755
index 0000000..335c904
--- /dev/null
+++ b/zoo/PAConv/obj_cls/main_ddp.py
@@ -0,0 +1,467 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.nn as nn
+import torch.optim as optim
+import torch.multiprocessing as mp
+import torch.distributed as dist
+import torch.backends.cudnn as cudnn
+from torch.optim.lr_scheduler import CosineAnnealingLR
+from util.data_util import ModelNet40 as ModelNet40
+import numpy as np
+from util.util import cal_loss, load_cfg_from_cfg_file, merge_cfg_from_list, find_free_port, AverageMeter, intersectionAndUnionGPU
+import time
+import logging
+import random
+from tensorboardX import SummaryWriter
+
+
+def get_logger():
+ logger_name = "main-logger"
+ logger = logging.getLogger(logger_name)
+ logger.setLevel(logging.INFO)
+ handler = logging.StreamHandler()
+ fmt = "[%(asctime)s %(levelname)s %(filename)s line %(lineno)d %(process)d] %(message)s"
+ handler.setFormatter(logging.Formatter(fmt))
+ logger.addHandler(handler)
+
+ file_handler = logging.FileHandler(os.path.join('checkpoints', args.exp_name, 'main-' + str(int(time.time())) + '.log'))
+ file_handler.setFormatter(logging.Formatter(fmt))
+ logger.addHandler(file_handler)
+
+ return logger
+
+
+def get_parser():
+ parser = argparse.ArgumentParser(description='3D Object Classification')
+ parser.add_argument('--config', type=str, default='config/dgcnn_paconv.yaml', help='config file')
+ parser.add_argument('opts', help='see config/dgcnn_paconv.yaml for all options', default=None, nargs=argparse.REMAINDER)
+ args = parser.parse_args()
+ assert args.config is not None
+ cfg = load_cfg_from_cfg_file(args.config)
+ if args.opts is not None:
+ cfg = merge_cfg_from_list(cfg, args.opts)
+
+ cfg['classes'] = cfg.get('classes', 40)
+ cfg['sync_bn'] = cfg.get('sync_bn', True)
+ cfg['dist_url'] = cfg.get('dist_url', 'tcp://127.0.0.1:6789')
+ cfg['dist_backend'] = cfg.get('dist_backend', 'nccl')
+ cfg['multiprocessing_distributed'] = cfg.get('multiprocessing_distributed', True)
+ cfg['world_size'] = cfg.get('world_size', 1)
+ cfg['rank'] = cfg.get('rank', 0)
+ cfg['manual_seed'] = cfg.get('manual_seed', 0)
+ cfg['workers'] = cfg.get('workers', 6)
+ cfg['print_freq'] = cfg.get('print_freq', 10)
+ return cfg
+
+
+def worker_init_fn(worker_id):
+ random.seed(args.manual_seed + worker_id)
+
+
+def main_process():
+ return not args.multiprocessing_distributed or (args.multiprocessing_distributed and args.rank % args.ngpus_per_node == 0)
+
+
+# weight initialization:
+def weight_init(m):
+ if isinstance(m, torch.nn.Linear):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv2d):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv1d):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm2d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm1d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+
+
+def train(gpu, ngpus_per_node):
+
+ # ============= Model ===================
+ if args.arch == 'dgcnn':
+ from model.DGCNN_PAConv import PAConv
+ model = PAConv(args)
+ elif args.arch == 'pointnet':
+ from model.PointNet_PAConv import PAConv
+ model = PAConv(args)
+ else:
+ raise Exception("Not implemented")
+
+ model.apply(weight_init)
+
+ if main_process():
+ logger.info(model)
+
+ if args.sync_bn and args.distributed:
+ model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
+
+ if args.distributed:
+ torch.cuda.set_device(gpu)
+ args.batch_size = int(args.batch_size / ngpus_per_node)
+ args.test_batch_size = int(args.test_batch_size / ngpus_per_node)
+ args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
+ model = torch.nn.parallel.DistributedDataParallel(model.cuda(), device_ids=[gpu], find_unused_parameters=True)
+ else:
+ model = torch.nn.DataParallel(model.cuda())
+
+ # =========== Dataloader =================
+ train_data = ModelNet40(partition='train', num_points=args.num_points, pt_norm=args.pt_norm)
+ test_data = ModelNet40(partition='test', num_points=args.num_points, pt_norm=False)
+
+ if args.distributed:
+ train_sampler = torch.utils.data.distributed.DistributedSampler(train_data)
+ test_sampler = torch.utils.data.distributed.DistributedSampler(test_data)
+ else:
+ train_sampler = None
+ test_sampler = None
+
+ train_loader = torch.utils.data.DataLoader(train_data, batch_size=args.batch_size, shuffle=(train_sampler is None),
+ num_workers=args.workers, pin_memory=True, sampler=train_sampler,
+ drop_last=True)
+ test_loader = torch.utils.data.DataLoader(test_data, batch_size=args.test_batch_size, shuffle=False,
+ num_workers=args.workers, pin_memory=True, sampler=test_sampler)
+
+ # ============= Optimizer ===================
+ if main_process():
+ logger.info("Use SGD")
+ opt = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum, weight_decay=1e-4)
+ scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=args.lr/100)
+
+ criterion = cal_loss
+ best_test_acc = 0
+ start_epoch = 0
+
+ # ============= Training from scratch=================
+ for epoch in range(start_epoch, args.epochs):
+ if args.distributed:
+ train_sampler.set_epoch(epoch)
+
+ train_epoch(train_loader, model, opt, scheduler, epoch, criterion)
+
+ test_acc = test_epoch(test_loader, model, epoch, criterion)
+
+ if test_acc >= best_test_acc and main_process():
+ best_test_acc = test_acc
+ logger.info('Max Acc:%.6f' % best_test_acc)
+ torch.save(model.state_dict(), 'checkpoints/%s/best_model.t7' % args.exp_name) # save the best model
+
+
+def train_epoch(train_loader, model, opt, scheduler, epoch, criterion):
+ train_loss = 0.0
+ count = 0.0
+
+ batch_time = AverageMeter()
+ data_time = AverageMeter()
+ forward_time = AverageMeter()
+ backward_time = AverageMeter()
+ loss_meter = AverageMeter()
+ intersection_meter = AverageMeter()
+ union_meter = AverageMeter()
+ target_meter = AverageMeter()
+
+ model.train()
+ end = time.time()
+ max_iter = args.epochs * len(train_loader)
+
+ for ii, (data, label) in enumerate(train_loader):
+ data_time.update(time.time() - end)
+
+ data, label = data.cuda(non_blocking=True), label.cuda(non_blocking=True).squeeze(1)
+ data = data.permute(0, 2, 1)
+ batch_size = data.size(0)
+ end2 = time.time()
+ logits, loss = model(data, label, criterion)
+
+ forward_time.update(time.time() - end2)
+
+ preds = logits.max(dim=1)[1]
+
+ if not args.multiprocessing_distributed:
+ loss = torch.mean(loss)
+
+ end3 = time.time()
+ opt.zero_grad()
+        loss.backward()  # each process backpropagates its own local loss with its own optimizer
+ opt.step()
+ backward_time.update(time.time() - end3)
+
+ # Loss
+ if args.multiprocessing_distributed:
+ loss = loss * batch_size
+ _count = label.new_tensor([batch_size], dtype=torch.long).cuda(non_blocking=True) # b_size on one process
+            dist.all_reduce(loss), dist.all_reduce(_count)  # sum the scaled loss and the sample count across all processes
+ n = _count.item()
+ loss = loss / n # avg loss across all processes
+
+ # then calculate loss same as without dist
+ count += batch_size
+ train_loss += loss.item() * batch_size
+
+ loss_meter.update(loss.item(), batch_size)
+ batch_time.update(time.time() - end)
+ end = time.time()
+
+ current_iter = epoch * len(train_loader) + ii + 1
+ remain_iter = max_iter - current_iter
+ remain_time = remain_iter * batch_time.avg
+ t_m, t_s = divmod(remain_time, 60)
+ t_h, t_m = divmod(t_m, 60)
+ remain_time = '{:02d}:{:02d}:{:02d}'.format(int(t_h), int(t_m), int(t_s))
+
+ if (ii + 1) % args.print_freq == 0 and main_process():
+ logger.info('Epoch: [{}/{}][{}/{}] '
+ 'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
+ 'Batch {batch_time.val:.3f} ({batch_time.avg:.3f}) '
+ 'Forward {for_time.val:.3f} ({for_time.avg:.3f}) '
+ 'Backward {back_time.val:.3f} ({back_time.avg:.3f}) '
+ 'Remain {remain_time} '
+ 'Loss {loss_meter.val:.4f} '.format(epoch + 1, args.epochs, ii + 1, len(train_loader),
+ batch_time=batch_time,
+ data_time=data_time,
+                                                              for_time=forward_time,
+                                                              back_time=backward_time,
+ remain_time=remain_time,
+ loss_meter=loss_meter))
+
+ intersection, union, target = intersectionAndUnionGPU(preds, label, args.classes)
+ if args.multiprocessing_distributed: # obtain the sum of all tensors at all processes: all_reduce
+ dist.all_reduce(intersection), dist.all_reduce(union), dist.all_reduce(target)
+ intersection, union, target = intersection.cpu().numpy(), union.cpu().numpy(), target.cpu().numpy()
+ intersection_meter.update(intersection), union_meter.update(union), target_meter.update(target)
+
+ scheduler.step()
+
+ accuracy_class = intersection_meter.sum / (target_meter.sum + 1e-10)
+ mAcc = np.mean(accuracy_class)
+ allAcc = sum(intersection_meter.sum) / (sum(target_meter.sum) + 1e-10)  # the outer sum() aggregates the per-class counts over all classes
+
+ outstr = 'Train %d, loss: %.6f, train acc: %.6f, ' \
+ 'train avg acc: %.6f' % (epoch + 1,
+ train_loss * 1.0 / count,
+ allAcc, mAcc)
+
+ if main_process():
+ logger.info(outstr)
+ # Write to tensorboard
+ writer.add_scalar('loss_train', train_loss * 1.0 / count, epoch + 1)
+ writer.add_scalar('mAcc_train', mAcc, epoch + 1)
+ writer.add_scalar('allAcc_train', allAcc, epoch + 1)
+
+
+def test_epoch(test_loader, model, epoch, criterion):
+ test_loss = 0.0
+ count = 0.0
+ model.eval()
+
+ intersection_meter = AverageMeter()
+ union_meter = AverageMeter()
+ target_meter = AverageMeter()
+
+ for data, label in test_loader:
+ data, label = data.cuda(non_blocking=True), label.cuda(non_blocking=True).squeeze(1)
+ data = data.permute(0, 2, 1)
+ batch_size = data.size(0)
+ logits = model(data)
+
+ # Loss
+ loss = criterion(logits, label)  # in eval mode the model returns raw logits, so the loss is computed here
+ if args.multiprocessing_distributed:
+ loss = loss * batch_size
+ _count = label.new_tensor([batch_size], dtype=torch.long).cuda(non_blocking=True)
+ dist.all_reduce(loss), dist.all_reduce(_count)
+ n = _count.item()
+ loss = loss / n
+ else:
+ loss = torch.mean(loss)
+
+ preds = logits.max(dim=1)[1]
+ count += batch_size
+ test_loss += loss.item() * batch_size
+
+ intersection, union, target = intersectionAndUnionGPU(preds, label, args.classes)
+ if args.multiprocessing_distributed:
+ dist.all_reduce(intersection), dist.all_reduce(union), dist.all_reduce(target)
+ intersection, union, target = intersection.cpu().numpy(), union.cpu().numpy(), target.cpu().numpy()
+ intersection_meter.update(intersection), union_meter.update(union), target_meter.update(target)
+
+ accuracy_class = intersection_meter.sum / (target_meter.sum + 1e-10)
+ mAcc = np.mean(accuracy_class)
+ allAcc = sum(intersection_meter.sum) / (sum(target_meter.sum) + 1e-10)
+
+ outstr = 'Test %d, loss: %.6f, test acc: %.6f, ' \
+ 'test avg acc: %.6f' % (epoch + 1,
+ test_loss * 1.0 / count,
+ allAcc,
+ mAcc)
+
+ if main_process():
+ logger.info(outstr)
+ # Write to tensorboard
+ writer.add_scalar('loss_test', test_loss * 1.0 / count, epoch + 1)
+ writer.add_scalar('mAcc_test', mAcc, epoch + 1)
+ writer.add_scalar('allAcc_test', allAcc, epoch + 1)
+
+ return allAcc
+
+
+def test(gpu, ngpus_per_node):
+ if main_process():
+ logger.info('<<<<<<<<<<<<<<<<< Start Evaluation <<<<<<<<<<<<<<<<<')
+
+ # ============= Model ===================
+ if args.arch == 'dgcnn':
+ from model.DGCNN_PAConv import PAConv
+ model = PAConv(args)
+ elif args.arch == 'pointnet':
+ from model.PointNet_PAConv import PAConv
+ model = PAConv(args)
+ else:
+ raise Exception("Not implemented")
+
+ if main_process():
+ logger.info(model)
+
+ if args.sync_bn:
+ assert args.distributed  # SyncBatchNorm only makes sense with distributed training
+ model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
+
+ if args.distributed:
+ torch.cuda.set_device(gpu)
+ args.batch_size = int(args.batch_size / ngpus_per_node)
+ args.test_batch_size = int(args.test_batch_size / ngpus_per_node)
+ args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
+ model = torch.nn.parallel.DistributedDataParallel(model.cuda(), device_ids=[gpu], find_unused_parameters=True)
+ else:
+ model = torch.nn.DataParallel(model.cuda())
+
+ state_dict = torch.load("checkpoints/%s/best_model.t7" % args.exp_name, map_location=torch.device('cpu'))
+
+ # checkpoints saved from an unwrapped model lack the 'module.' prefix;
+ # prepend it before loading into the DataParallel/DDP wrapper
+ if any('module' not in k for k in state_dict.keys()):
+ from collections import OrderedDict
+ new_state_dict = OrderedDict(('module.' + k, v) for k, v in state_dict.items())
+ state_dict = new_state_dict
+
+ model.load_state_dict(state_dict)
+
+ # Dataloader
+ test_data = ModelNet40(partition='test', num_points=args.num_points)
+ if args.distributed:
+ test_sampler = torch.utils.data.distributed.DistributedSampler(test_data)
+ else:
+ test_sampler = None
+ test_loader = torch.utils.data.DataLoader(test_data, batch_size=args.test_batch_size, shuffle=False,
+ num_workers=args.workers, pin_memory=True, sampler=test_sampler)
+
+ model.eval()
+
+ intersection_meter = AverageMeter()
+ union_meter = AverageMeter()
+ target_meter = AverageMeter()
+
+ for data, label in test_loader:
+
+ data, label = data.cuda(non_blocking=True), label.cuda(non_blocking=True).squeeze(1)
+ data = data.permute(0, 2, 1)
+ with torch.no_grad():
+ logits = model(data)
+ preds = logits.max(dim=1)[1]
+
+ intersection, union, target = intersectionAndUnionGPU(preds, label, args.classes)
+ if args.multiprocessing_distributed:
+ dist.all_reduce(intersection), dist.all_reduce(union), dist.all_reduce(target)
+ intersection, union, target = intersection.cpu().numpy(), union.cpu().numpy(), target.cpu().numpy()
+ intersection_meter.update(intersection), union_meter.update(union), target_meter.update(target)
+
+ accuracy_class = intersection_meter.sum / (target_meter.sum + 1e-10)
+ mAcc = np.mean(accuracy_class)
+ allAcc = sum(intersection_meter.sum) / (sum(target_meter.sum) + 1e-10)
+ if main_process():
+ logger.info('Test result: mAcc/allAcc {:.4f}/{:.4f}.'.format(mAcc, allAcc))
+ for i in range(args.classes):
+ logger.info('Class_{} Result: accuracy {:.4f}.'.format(i, accuracy_class[i]))
+ logger.info('<<<<<<<<<<<<<<<<< End Evaluation <<<<<<<<<<<<<<<<<')
+
+
+def main_worker(gpu, ngpus_per_node, argss):
+ global args
+ args = argss
+
+ if args.distributed:
+ if args.dist_url == "env://" and args.rank == -1:
+ args.rank = int(os.environ["RANK"])
+ if args.multiprocessing_distributed:
+ args.rank = args.rank * ngpus_per_node + gpu
+ dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=args.world_size,
+ rank=args.rank)
+
+ if main_process():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/' + args.exp_name):
+ os.makedirs('checkpoints/' + args.exp_name)
+
+ if not args.eval: # backup the running files
+ os.system('cp main_ddp.py checkpoints' + '/' + args.exp_name + '/' + 'main_ddp.py.backup')
+ os.system('cp util/PAConv_util.py checkpoints' + '/' + args.exp_name + '/' + 'PAConv_util.py.backup')
+ os.system('cp util/data_util.py checkpoints' + '/' + args.exp_name + '/' + 'data_util.py.backup')
+ if args.arch == 'dgcnn':
+ os.system('cp model/DGCNN_PAConv.py checkpoints' + '/' + args.exp_name + '/' + 'DGCNN_PAConv.py.backup')
+ elif args.arch == 'pointnet':
+ os.system(
+ 'cp model/PointNet_PAConv.py checkpoints' + '/' + args.exp_name + '/' + 'PointNet_PAConv.py.backup')
+
+ global logger, writer
+ writer = SummaryWriter('checkpoints/' + args.exp_name)
+ logger = get_logger()
+ logger.info(args)
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+
+ assert not args.eval, "DDP's all_reduce may drop or duplicate samples across processes, " \
+ "which corrupts the aggregated test statistics; " \
+ "please evaluate with main.py (no DDP) to obtain correct results."
+ train(gpu, ngpus_per_node)
+
+
+if __name__ == "__main__":
+ args = get_parser()
+ args.gpu = [int(i) for i in os.environ['CUDA_VISIBLE_DEVICES'].split(',')]
+ if args.manual_seed is not None:
+ random.seed(args.manual_seed)
+ np.random.seed(args.manual_seed)
+ torch.manual_seed(args.manual_seed)
+ torch.cuda.manual_seed(args.manual_seed)
+ torch.cuda.manual_seed_all(args.manual_seed)
+ cudnn.benchmark = False
+ cudnn.deterministic = True
+ if args.dist_url == "env://" and args.world_size == -1:
+ args.world_size = int(os.environ["WORLD_SIZE"])
+ args.distributed = args.world_size > 1 or args.multiprocessing_distributed
+ args.ngpus_per_node = len(args.gpu)
+ if len(args.gpu) == 1:
+ args.sync_bn = False
+ args.distributed = False
+ args.multiprocessing_distributed = False
+ if args.multiprocessing_distributed:
+ port = find_free_port()
+ args.dist_url = f"tcp://127.0.0.1:{port}"
+ args.world_size = args.ngpus_per_node * args.world_size
+ mp.spawn(main_worker, nprocs=args.ngpus_per_node, args=(args.ngpus_per_node, args))
+ else:
+ main_worker(args.gpu, args.ngpus_per_node, args)
+
diff --git a/zoo/PAConv/obj_cls/model/DGCNN_PAConv.py b/zoo/PAConv/obj_cls/model/DGCNN_PAConv.py
new file mode 100644
index 0000000..adf1462
--- /dev/null
+++ b/zoo/PAConv/obj_cls/model/DGCNN_PAConv.py
@@ -0,0 +1,108 @@
+"""
+Embed PAConv into DGCNN
+"""
+
+import torch.nn as nn
+import torch
+import torch.nn.functional as F
+from util.PAConv_util import get_scorenet_input, knn, feat_trans_dgcnn, ScoreNet
+from cuda_lib.functional import assign_score_withk as assemble_dgcnn
+
+
+class PAConv(nn.Module):
+ def __init__(self, args):
+ super(PAConv, self).__init__()
+ self.args = args
+ self.k = args.get('k_neighbors', 20)
+ self.calc_scores = args.get('calc_scores', 'softmax')
+
+ self.m1, self.m2, self.m3, self.m4 = args.get('num_matrices', [8, 8, 8, 8])
+ self.scorenet1 = ScoreNet(6, self.m1, hidden_unit=[16])
+ self.scorenet2 = ScoreNet(6, self.m2, hidden_unit=[16])
+ self.scorenet3 = ScoreNet(6, self.m3, hidden_unit=[16])
+ self.scorenet4 = ScoreNet(6, self.m4, hidden_unit=[16])
+
+ i1 = 3 # channel dim of input_1st
+ o1 = i2 = 64 # channel dim of output_1st and input_2nd
+ o2 = i3 = 64 # channel dim of output_2nd and input_3rd
+ o3 = i4 = 128 # channel dim of output_3rd and input_4th
+ o4 = 256 # channel dim of output_4th
+
+ tensor1 = nn.init.kaiming_normal_(torch.empty(self.m1, i1 * 2, o1), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i1 * 2, self.m1 * o1)
+ tensor2 = nn.init.kaiming_normal_(torch.empty(self.m2, i2 * 2, o2), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i2 * 2, self.m2 * o2)
+ tensor3 = nn.init.kaiming_normal_(torch.empty(self.m3, i3 * 2, o3), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i3 * 2, self.m3 * o3)
+ tensor4 = nn.init.kaiming_normal_(torch.empty(self.m4, i4 * 2, o4), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i4 * 2, self.m4 * o4)
+
+ # convolutional weight matrices in Weight Bank:
+ self.matrice1 = nn.Parameter(tensor1, requires_grad=True)
+ self.matrice2 = nn.Parameter(tensor2, requires_grad=True)
+ self.matrice3 = nn.Parameter(tensor3, requires_grad=True)
+ self.matrice4 = nn.Parameter(tensor4, requires_grad=True)
+
+ self.bn1 = nn.BatchNorm1d(o1, momentum=0.1)
+ self.bn2 = nn.BatchNorm1d(o2, momentum=0.1)
+ self.bn3 = nn.BatchNorm1d(o3, momentum=0.1)
+ self.bn4 = nn.BatchNorm1d(o4, momentum=0.1)
+ self.bn5 = nn.BatchNorm1d(1024, momentum=0.1)
+ self.conv5 = nn.Sequential(nn.Conv1d(512, 1024, kernel_size=1, bias=False),
+ self.bn5)
+
+ self.linear1 = nn.Linear(2048, 512, bias=False)
+ self.bn11 = nn.BatchNorm1d(512)
+ self.dp1 = nn.Dropout(p=args.dropout)
+ self.linear2 = nn.Linear(512, 256, bias=False)
+ self.bn22 = nn.BatchNorm1d(256)
+ self.dp2 = nn.Dropout(p=args.dropout)
+ self.linear3 = nn.Linear(256, 40)
+
+ def forward(self, x, label=None, criterion=None):
+ B, C, N = x.size()
+ idx, _ = knn(x, k=self.k) # unlike DGCNN, the kNN search here is performed only in 3D coordinate space
+ xyz = get_scorenet_input(x, idx=idx, k=self.k) # ScoreNet input: 3D coordinate difference concatenated with the neighbor coordinates: b,6,n,k
+
+ ##################
+ # replace all the DGCNN-EdgeConv with PAConv:
+ """CUDA implementation of PAConv: (presented in the supplementary material of the paper)"""
+ """feature transformation:"""
+ point1, center1 = feat_trans_dgcnn(point_input=x, kernel=self.matrice1, m=self.m1) # b,n,m1,o1
+ score1 = self.scorenet1(xyz, calc_scores=self.calc_scores, bias=0.5)
+ """assemble with scores:"""
+ point1 = assemble_dgcnn(score=score1, point_input=point1, center_input=center1, knn_idx=idx, aggregate='sum') # b,o1,n
+ point1 = F.relu(self.bn1(point1))
+
+ point2, center2 = feat_trans_dgcnn(point_input=point1, kernel=self.matrice2, m=self.m2)
+ score2 = self.scorenet2(xyz, calc_scores=self.calc_scores, bias=0.5)
+ point2 = assemble_dgcnn(score=score2, point_input=point2, center_input=center2, knn_idx=idx, aggregate='sum')
+ point2 = F.relu(self.bn2(point2))
+
+ point3, center3 = feat_trans_dgcnn(point_input=point2, kernel=self.matrice3, m=self.m3)
+ score3 = self.scorenet3(xyz, calc_scores=self.calc_scores, bias=0.5)
+ point3 = assemble_dgcnn(score=score3, point_input=point3, center_input=center3, knn_idx=idx, aggregate='sum')
+ point3 = F.relu(self.bn3(point3))
+
+ point4, center4 = feat_trans_dgcnn(point_input=point3, kernel=self.matrice4, m=self.m4)
+ score4 = self.scorenet4(xyz, calc_scores=self.calc_scores, bias=0.5)
+ point4 = assemble_dgcnn(score=score4, point_input=point4, center_input=center4, knn_idx=idx, aggregate='sum')
+ point4 = F.relu(self.bn4(point4))
+ ##################
+
+ point = torch.cat((point1, point2, point3, point4), dim=1)
+ point = F.relu(self.conv5(point))
+ point11 = F.adaptive_max_pool1d(point, 1).view(B, -1)
+ point22 = F.adaptive_avg_pool1d(point, 1).view(B, -1)
+ point = torch.cat((point11, point22), 1)
+
+ point = F.relu(self.bn11(self.linear1(point)))
+ point = self.dp1(point)
+ point = F.relu(self.bn22(self.linear2(point)))
+ point = self.dp2(point)
+ point = self.linear3(point)
+
+ if criterion is not None:
+ return point, criterion(point, label) # return output and loss
+ else:
+ return point
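+
+# End-to-end shape sketch (an illustrative addition, not part of the original
+# file; it assumes the compiled cuda_lib extension and a CfgNode-like `args`
+# providing k_neighbors, calc_scores, num_matrices and dropout):
+#
+#   model = PAConv(args).cuda()
+#   x = torch.rand(8, 3, 1024).cuda()   # batch of 8 clouds, 1024 points each
+#   logits = model(x)                   # (8, 40) ModelNet40 class logits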
diff --git a/zoo/PAConv/obj_cls/model/PointNet_PAConv.py b/zoo/PAConv/obj_cls/model/PointNet_PAConv.py
new file mode 100644
index 0000000..39f7afb
--- /dev/null
+++ b/zoo/PAConv/obj_cls/model/PointNet_PAConv.py
@@ -0,0 +1,92 @@
+"""
+Embed PAConv into PointNet
+"""
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from util.PAConv_util import get_scorenet_input, knn, feat_trans_pointnet, ScoreNet
+from cuda_lib.functional import assign_score_withk_halfkernel as assemble_pointnet
+
+
+class PAConv(nn.Module):
+ def __init__(self, args):
+ super(PAConv, self).__init__()
+ self.args = args
+ self.k = args.get('k_neighbors', 20)
+ self.calc_scores = args.get('calc_scores', 'softmax')
+
+ self.m2, self.m3, self.m4 = args.get('num_matrices', [8, 8, 8])
+ self.scorenet2 = ScoreNet(6, self.m2, hidden_unit=[16])
+ self.scorenet3 = ScoreNet(6, self.m3, hidden_unit=[16])
+ self.scorenet4 = ScoreNet(6, self.m4, hidden_unit=[16])
+
+ i2 = 64 # channel dim of output_1st and input_2nd
+ o2 = i3 = 64 # channel dim of output_2nd and input_3rd
+ o3 = i4 = 64 # channel dim of output_3rd and input_4th
+ o4 = 128 # channel dim of output_4th and input_5th
+
+ tensor2 = nn.init.kaiming_normal_(torch.empty(self.m2, i2, o2), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i2, self.m2 * o2)
+ tensor3 = nn.init.kaiming_normal_(torch.empty(self.m3, i3, o3), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i3, self.m3 * o3)
+ tensor4 = nn.init.kaiming_normal_(torch.empty(self.m4, i4, o4), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i4, self.m4 * o4)
+
+ # convolutional weight matrices in Weight Bank:
+ self.matrice2 = nn.Parameter(tensor2, requires_grad=True)
+ self.matrice3 = nn.Parameter(tensor3, requires_grad=True)
+ self.matrice4 = nn.Parameter(tensor4, requires_grad=True)
+
+ self.bn1 = nn.BatchNorm1d(64)
+ self.bn2 = nn.BatchNorm1d(64)
+ self.bn3 = nn.BatchNorm1d(64)
+ self.bn4 = nn.BatchNorm1d(128)
+ self.bn5 = nn.BatchNorm1d(1024)
+
+ self.conv1 = nn.Conv1d(3, 64, kernel_size=1, bias=False)
+ self.conv5 = nn.Conv1d(128, 1024, kernel_size=1, bias=False)
+
+ self.linear1 = nn.Linear(1024, 512, bias=False)
+ self.bn6 = nn.BatchNorm1d(512)
+ self.dp1 = nn.Dropout(p=args.dropout)
+ self.linear2 = nn.Linear(512, 40)
+
+ def forward(self, x, label=None, criterion=None):
+ batch_size = x.size(0)
+ idx, _ = knn(x, k=self.k) # get the idx of knn in 3D space : b,n,k
+ xyz = get_scorenet_input(x, k=self.k, idx=idx) # ScoreNet input: 3D coord difference : b,6,n,k
+
+ x = self.conv1(x)
+ x = F.relu(self.bn1(x))
+ ##################
+ # replace the intermediate 3 MLP layers with PAConv:
+ """CUDA implementation of PAConv: (presented in the supplementary material of the paper)"""
+ """feature transformation:"""
+ x = feat_trans_pointnet(point_input=x, kernel=self.matrice2, m=self.m2) # b,n,m1,o1
+ score2 = self.scorenet2(xyz, calc_scores=self.calc_scores, bias=0)
+ """assemble with scores:"""
+ x = assemble_pointnet(score=score2, point_input=x, knn_idx=idx, aggregate='sum') # b,o1,n
+ x = F.relu(self.bn2(x))
+
+ x = feat_trans_pointnet(point_input=x, kernel=self.matrice3, m=self.m3)
+ score3 = self.scorenet3(xyz, calc_scores=self.calc_scores, bias=0)
+ x = assemble_pointnet(score=score3, point_input=x, knn_idx=idx, aggregate='sum')
+ x = F.relu(self.bn3(x))
+
+ x = feat_trans_pointnet(point_input=x, kernel=self.matrice4, m=self.m4)
+ score4 = self.scorenet4(xyz, calc_scores=self.calc_scores, bias=0)
+ x = assemble_pointnet(score=score4, point_input=x, knn_idx=idx, aggregate='sum')
+ x = F.relu(self.bn4(x))
+ ##################
+ x = self.conv5(x)
+ x = F.relu(self.bn5(x))
+
+ x = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
+ x = F.relu(self.bn6(self.linear1(x)))
+ x = self.dp1(x)
+ x = self.linear2(x)
+ if criterion is not None:
+ return x, criterion(x, label)
+ else:
+ return x
diff --git a/zoo/PAConv/obj_cls/util/PAConv_util.py b/zoo/PAConv/obj_cls/util/PAConv_util.py
new file mode 100755
index 0000000..7355ad0
--- /dev/null
+++ b/zoo/PAConv/obj_cls/util/PAConv_util.py
@@ -0,0 +1,115 @@
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+
+def knn(x, k):
+ B, _, N = x.size()
+ inner = -2 * torch.matmul(x.transpose(2, 1), x)
+ xx = torch.sum(x ** 2, dim=1, keepdim=True)
+ pairwise_distance = -xx - inner - xx.transpose(2, 1)
+
+ _, idx = pairwise_distance.topk(k=k, dim=-1) # (batch_size, num_points, k)
+
+ return idx, pairwise_distance
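+
+# Sanity-check sketch for the squared-distance expansion above (an illustrative
+# addition): ||xi - xj||^2 = ||xi||^2 - 2*xi.xj + ||xj||^2, so pairwise_distance
+# holds the *negative* squared distances and topk picks the k nearest points.
+#
+#   x = torch.rand(2, 3, 16)    # hypothetical toy batch
+#   idx, dist = knn(x, k=4)     # idx: (2, 16, 4)
+#   brute = -((x.unsqueeze(3) - x.unsqueeze(2)) ** 2).sum(dim=1)
+#   assert torch.allclose(dist, brute, atol=1e-5)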
+
+
+def get_scorenet_input(x, idx, k):
+ """(neighbor, neighbor-center)"""
+ batch_size = x.size(0)
+ num_points = x.size(2)
+ x = x.view(batch_size, -1, num_points)
+
+ device = torch.device('cuda')
+
+ idx_base = torch.arange(0, batch_size, device=device).view(-1, 1, 1) * num_points
+
+ idx = idx + idx_base
+
+ idx = idx.view(-1)
+
+ _, num_dims, _ = x.size()
+
+ x = x.transpose(2, 1).contiguous()
+
+ neighbor = x.view(batch_size * num_points, -1)[idx, :]
+
+ neighbor = neighbor.view(batch_size, num_points, k, num_dims)
+
+ x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)
+
+ xyz = torch.cat((neighbor - x, neighbor), dim=3).permute(0, 3, 1, 2) # b,6,n,k
+
+ return xyz
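+
+# Shape sketch (illustrative addition; the helper builds its gather index on
+# CUDA, so the inputs must live on the GPU):
+#
+#   x = torch.rand(2, 3, 64).cuda()
+#   idx, _ = knn(x, k=8)
+#   xyz = get_scorenet_input(x, idx=idx, k=8)   # (2, 6, 64, 8)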
+
+
+def feat_trans_dgcnn(point_input, kernel, m):
+ """transforming features using weight matrices"""
+ # following get_graph_feature in DGCNN: torch.cat((neighbor - center, neighbor), dim=3)
+ B, _, N = point_input.size() # b, 2cin, n
+ point_output = torch.matmul(point_input.permute(0, 2, 1).repeat(1, 1, 2), kernel).view(B, N, m, -1) # b,n,m,cout
+ center_output = torch.matmul(point_input.permute(0, 2, 1), kernel[:point_input.size(1)]).view(B, N, m, -1) # b,n,m,cout
+ return point_output, center_output
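+
+# Note on the duplication trick above (illustrative addition): DGCNN edge
+# features are cat((neighbor - center, neighbor)), so the kernel has 2*cin
+# input rows; repeating the point features multiplies them against all rows,
+# while the center term reuses only the first cin rows.
+#
+#   feats = torch.rand(2, 64, 128)                  # b, cin, n
+#   bank = torch.rand(2 * 64, 8 * 32)               # m=8 matrices of (2*cin, cout)
+#   pts, ctr = feat_trans_dgcnn(feats, bank, m=8)   # both (2, 128, 8, 32)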
+
+
+def feat_trans_pointnet(point_input, kernel, m):
+ """transforming features using weight matrices"""
+ # no feature concat, following PointNet
+ B, _, N = point_input.size() # b, cin, n
+ point_output = torch.matmul(point_input.permute(0, 2, 1), kernel).view(B, N, m, -1) # b,n,m,cout
+ return point_output
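+
+# Weight-bank shape walk-through (illustrative addition): the m candidate
+# weight matrices of shape (cin, cout) are stored flattened as one
+# (cin, m*cout) kernel, so a single matmul evaluates all m transformations.
+#
+#   feats = torch.rand(2, 64, 128)                   # b, cin, n
+#   bank = torch.rand(64, 8 * 32)                    # m=8 matrices of (cin, cout)
+#   out = feat_trans_pointnet(feats, bank, m=8)      # (2, 128, 8, 32)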
+
+
+class ScoreNet(nn.Module):
+ def __init__(self, in_channel, out_channel, hidden_unit=[16], last_bn=False):
+ super(ScoreNet, self).__init__()
+ self.hidden_unit = hidden_unit
+ self.last_bn = last_bn
+ self.mlp_convs_hidden = nn.ModuleList()
+ self.mlp_bns_hidden = nn.ModuleList()
+
+ if hidden_unit is None or len(hidden_unit) == 0:
+ self.mlp_convs_nohidden = nn.Conv2d(in_channel, out_channel, 1, bias=not last_bn)
+ if self.last_bn:
+ self.mlp_bns_nohidden = nn.BatchNorm2d(out_channel)
+
+ else:
+ self.mlp_convs_hidden.append(nn.Conv2d(in_channel, hidden_unit[0], 1, bias=False)) # from in_channel to first hidden
+ self.mlp_bns_hidden.append(nn.BatchNorm2d(hidden_unit[0]))
+ for i in range(1, len(hidden_unit)): # hidden-to-hidden layers
+ self.mlp_convs_hidden.append(nn.Conv2d(hidden_unit[i - 1], hidden_unit[i], 1, bias=False))
+ self.mlp_bns_hidden.append(nn.BatchNorm2d(hidden_unit[i]))
+ self.mlp_convs_hidden.append(nn.Conv2d(hidden_unit[-1], out_channel, 1, bias=not last_bn)) # from last hidden to out_channel
+ self.mlp_bns_hidden.append(nn.BatchNorm2d(out_channel))
+
+ def forward(self, xyz, calc_scores='softmax', bias=0):
+ B, _, N, K = xyz.size()
+ scores = xyz
+
+ if self.hidden_unit is None or len(self.hidden_unit) == 0:
+ if self.last_bn:
+ scores = self.mlp_bns_nohidden(self.mlp_convs_nohidden(scores))
+ else:
+ scores = self.mlp_convs_nohidden(scores)
+ else:
+ for i, conv in enumerate(self.mlp_convs_hidden):
+ if i == len(self.mlp_convs_hidden)-1: # if the output layer, no ReLU
+ if self.last_bn:
+ bn = self.mlp_bns_hidden[i]
+ scores = bn(conv(scores))
+ else:
+ scores = conv(scores)
+ else:
+ bn = self.mlp_bns_hidden[i]
+ scores = F.relu(bn(conv(scores)))
+
+ if calc_scores == 'softmax':
+ scores = F.softmax(scores, dim=1) + bias # B*m*N*K; a non-zero bias can enlarge the gradients flowing through the scores
+ elif calc_scores == 'sigmoid':
+ scores = torch.sigmoid(scores)+bias # B*m*N*K
+ else:
+ raise ValueError('Not Implemented!')
+
+ scores = scores.permute(0, 2, 3, 1) # B*N*K*m
+
+ return scores
\ No newline at end of file
diff --git a/zoo/PAConv/obj_cls/util/__init__.py b/zoo/PAConv/obj_cls/util/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/zoo/PAConv/obj_cls/util/data_util.py b/zoo/PAConv/obj_cls/util/data_util.py
new file mode 100755
index 0000000..2690379
--- /dev/null
+++ b/zoo/PAConv/obj_cls/util/data_util.py
@@ -0,0 +1,62 @@
+import glob
+import h5py
+import numpy as np
+from torch.utils.data import Dataset
+
+
+def load_data(partition):
+ all_data = []
+ all_label = []
+ for h5_name in glob.glob('./data/modelnet40_ply_hdf5_2048/ply_data_%s*.h5' % partition):
+ f = h5py.File(h5_name, mode='r')
+ data = f['data'][:].astype('float32')
+ label = f['label'][:].astype('int64')
+ f.close()
+ all_data.append(data)
+ all_label.append(label)
+ all_data = np.concatenate(all_data, axis=0)
+ all_label = np.concatenate(all_label, axis=0)
+ return all_data, all_label
+
+
+def pc_normalize(pc):
+ centroid = np.mean(pc, axis=0)
+ pc = pc - centroid
+ m = np.max(np.sqrt(np.sum(pc ** 2, axis=1)))
+ pc = pc / m
+ return pc
+
+
+def translate_pointcloud(pointcloud):
+ xyz1 = np.random.uniform(low=2. / 3., high=3. / 2., size=[3])
+ xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])
+
+ translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')
+ return translated_pointcloud
+
+
+def jitter_pointcloud(pointcloud, sigma=0.01, clip=0.02):
+ N, C = pointcloud.shape
+ pointcloud += np.clip(sigma * np.random.randn(N, C), -1 * clip, clip)
+ return pointcloud
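+
+# Usage sketch for the two train-time augmentations above (illustrative
+# addition); both operate on a single (N, 3) numpy cloud:
+#
+#   pc = np.random.rand(1024, 3).astype('float32')
+#   pc = translate_pointcloud(pc)   # random anisotropic scale + shift
+#   pc = jitter_pointcloud(pc)      # clipped Gaussian per-point noise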
+
+
+class ModelNet40(Dataset):
+ def __init__(self, num_points, partition='train', pt_norm=False):
+ self.data, self.label = load_data(partition)
+ self.num_points = num_points
+ self.partition = partition
+ self.pt_norm = pt_norm
+
+ def __getitem__(self, item):
+ pointcloud = self.data[item][:self.num_points]
+ label = self.label[item]
+ if self.partition == 'train':
+ if self.pt_norm:
+ pointcloud = pc_normalize(pointcloud)
+ pointcloud = translate_pointcloud(pointcloud)
+ np.random.shuffle(pointcloud) # shuffle the order of pts
+ return pointcloud, label
+
+ def __len__(self):
+ return self.data.shape[0]
diff --git a/zoo/PAConv/obj_cls/util/util.py b/zoo/PAConv/obj_cls/util/util.py
new file mode 100755
index 0000000..a0670de
--- /dev/null
+++ b/zoo/PAConv/obj_cls/util/util.py
@@ -0,0 +1,251 @@
+import torch
+import torch.nn.functional as F
+
+
+def cal_loss(pred, gold, smoothing=True):
+ ''' Calculate cross entropy loss, apply label smoothing if needed. '''
+
+ gold = gold.contiguous().view(-1) # gold is the groundtruth label in the dataloader
+
+ if smoothing:
+ eps = 0.2
+ n_class = pred.size(1) # the number of feature_dim of the output, which is output channels
+
+ one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)
+ one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)
+ log_prb = F.log_softmax(pred, dim=1)
+
+ loss = -(one_hot * log_prb).sum(dim=1).mean()
+ else:
+ loss = F.cross_entropy(pred, gold, reduction='mean')
+
+ return loss
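+
+# Usage sketch (illustrative addition): with smoothing, the one-hot target is
+# softened to 1 - eps on the true class and eps / (n_class - 1) elsewhere
+# before taking the cross entropy against the log-softmax probabilities.
+#
+#   pred = torch.randn(4, 40)              # hypothetical logits for 40 classes
+#   gold = torch.randint(0, 40, (4, 1))    # ground-truth labels
+#   loss = cal_loss(pred, gold, smoothing=True)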
+
+
+# create a file and write the text into it
+class IOStream():
+ def __init__(self, path):
+ self.f = open(path, 'a')
+
+ def cprint(self, text):
+ print(text)
+ self.f.write(text+'\n')
+ self.f.flush()
+
+ def close(self):
+ self.f.close()
+
+
+# -----------------------------------------------------------------------------
+# Functions for parsing args
+# -----------------------------------------------------------------------------
+import yaml
+import os
+from ast import literal_eval
+import copy
+
+
+class CfgNode(dict):
+ """
+ CfgNode represents an internal node in the configuration tree. It's a simple
+ dict-like container that allows for attribute-based access to keys.
+ """
+
+ def __init__(self, init_dict=None, key_list=None, new_allowed=False):
+ # Recursively convert nested dictionaries in init_dict into CfgNodes
+ init_dict = {} if init_dict is None else init_dict
+ key_list = [] if key_list is None else key_list
+ for k, v in init_dict.items():
+ if type(v) is dict:
+ # Convert dict to CfgNode
+ init_dict[k] = CfgNode(v, key_list=key_list + [k])
+ super(CfgNode, self).__init__(init_dict)
+
+ def __getattr__(self, name):
+ if name in self:
+ return self[name]
+ else:
+ raise AttributeError(name)
+
+ def __setattr__(self, name, value):
+ self[name] = value
+
+ def __str__(self):
+ def _indent(s_, num_spaces):
+ s = s_.split("\n")
+ if len(s) == 1:
+ return s_
+ first = s.pop(0)
+ s = [(num_spaces * " ") + line for line in s]
+ s = "\n".join(s)
+ s = first + "\n" + s
+ return s
+
+ r = ""
+ s = []
+ for k, v in sorted(self.items()):
+ separator = "\n" if isinstance(v, CfgNode) else " "
+ attr_str = "{}:{}{}".format(str(k), separator, str(v))
+ attr_str = _indent(attr_str, 2)
+ s.append(attr_str)
+ r += "\n".join(s)
+ return r
+
+ def __repr__(self):
+ return "{}({})".format(self.__class__.__name__, super(CfgNode, self).__repr__())
+
+
+def load_cfg_from_cfg_file(file):
+ cfg = {}
+ assert os.path.isfile(file) and file.endswith('.yaml'), \
+ '{} is not a yaml file'.format(file)
+
+ with open(file, 'r') as f:
+ cfg_from_file = yaml.safe_load(f)
+
+ for key in cfg_from_file:
+ for k, v in cfg_from_file[key].items():
+ cfg[k] = v
+
+ cfg = CfgNode(cfg)
+ return cfg
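+
+# Usage sketch (illustrative addition): the loader flattens all top-level YAML
+# sections (e.g. MODEL / TRAIN) into a single attribute-accessible CfgNode, so
+# keys must be unique across sections.
+#
+#   cfg = load_cfg_from_cfg_file('config/dgcnn_paconv_train.yaml')
+#   print(cfg.batch_size, cfg.k_neighbors)             # attribute-style access
+#   cfg = merge_cfg_from_list(cfg, ['epochs', '100'])  # CLI-style override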
+
+
+def merge_cfg_from_list(cfg, cfg_list):
+ new_cfg = copy.deepcopy(cfg)
+ assert len(cfg_list) % 2 == 0
+ for full_key, v in zip(cfg_list[0::2], cfg_list[1::2]):
+ subkey = full_key.split('.')[-1]
+ assert subkey in cfg, 'Non-existent key: {}'.format(full_key)
+ value = _decode_cfg_value(v)
+ value = _check_and_coerce_cfg_value_type(
+ value, cfg[subkey], subkey, full_key
+ )
+ setattr(new_cfg, subkey, value)
+
+ return new_cfg
+
+
+def _decode_cfg_value(v):
+ """Decodes a raw config value (e.g., from a yaml config files or command
+ line argument) into a Python object.
+ """
+ # All remaining processing is only applied to strings
+ if not isinstance(v, str):
+ return v
+ # Try to interpret `v` as a:
+ # string, number, tuple, list, dict, boolean, or None
+ try:
+ v = literal_eval(v)
+ # The following two excepts allow v to pass through when it represents a
+ # string.
+ #
+ # Longer explanation:
+ # The type of v is always a string (before calling literal_eval), but
+ # sometimes it *represents* a string and other times a data structure, like
+ # a list. In the case that v represents a string, what we got back from the
+ # yaml parser is 'foo' *without quotes* (so, not '"foo"'). literal_eval is
+ # ok with '"foo"', but will raise a ValueError if given 'foo'. In other
+ # cases, like paths (v = 'foo/bar' and not v = '"foo/bar"'), literal_eval
+ # will raise a SyntaxError.
+ except ValueError:
+ pass
+ except SyntaxError:
+ pass
+ return v
+
+
+def _check_and_coerce_cfg_value_type(replacement, original, key, full_key):
+ """Checks that `replacement`, which is intended to replace `original` is of
+ the right type. The type is correct if it matches exactly or is one of a few
+ cases in which the type can be easily coerced.
+ """
+ original_type = type(original)
+ replacement_type = type(replacement)
+
+ # The types must match (with some exceptions)
+ if replacement_type == original_type:
+ return replacement
+
+ # Cast replacement from from_type to to_type if the replacement and original
+ # types match from_type and to_type
+ def conditional_cast(from_type, to_type):
+ if replacement_type == from_type and original_type == to_type:
+ return True, to_type(replacement)
+ else:
+ return False, None
+
+ # Conditionally casts
+ # list <-> tuple
+ casts = [(tuple, list), (list, tuple)]
+ # For py2: allow converting from str (bytes) to a unicode string
+ try:
+ casts.append((str, unicode)) # noqa: F821
+ except Exception:
+ pass
+
+ for (from_type, to_type) in casts:
+ converted, converted_value = conditional_cast(from_type, to_type)
+ if converted:
+ return converted_value
+
+ raise ValueError(
+ "Type mismatch ({} vs. {}) with values ({} vs. {}) for config "
+ "key: {}".format(
+ original_type, replacement_type, original, replacement, full_key
+ )
+ )
+
+
+def _assert_with_logging(cond, msg):
+ if not cond:
+ logger.debug(msg)  # note: relies on a module-level `logger` being set by the importing script
+ assert cond, msg
+
+
+def find_free_port():
+ import socket
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ # Binding to port 0 will cause the OS to find an available port for us
+ sock.bind(("", 0))
+ port = sock.getsockname()[1]
+ sock.close()
+ # NOTE: there is still a chance the port could be taken by other processes.
+ return port
+
+
+class AverageMeter(object):
+ """Computes and stores the average and current value"""
+ def __init__(self):
+ self.reset()
+
+ def reset(self):
+ self.val = 0
+ self.avg = 0
+ self.sum = 0
+ self.count = 0
+
+ def update(self, val, n=1):  # n is the batch size; update all running statistics
+ self.val = val
+ self.sum += val * n
+ self.count += n
+ self.avg = self.sum / self.count
+
+
+def intersectionAndUnionGPU(output, target, K, ignore_index=255):
+ # 'K' classes, output and target sizes are N or N * L or N * H * W, each value in range 0 to K - 1.
+ assert (output.dim() in [1, 2, 3])
+ assert output.shape == target.shape
+ output = output.view(-1)
+ target = target.view(-1)
+ output[target == ignore_index] = ignore_index
+ intersection = output[output == target]
+ # torch.histc expects floating-point input, hence the .float() casts below
+ if len(intersection) > 0:
+ area_intersection = torch.histc(intersection.float(), bins=K, min=0, max=K-1)
+ else:
+ area_intersection = torch.zeros(K, dtype=torch.float, device=output.device)
+ area_output = torch.histc(output.float(), bins=K, min=0, max=K-1)
+ area_target = torch.histc(target.float(), bins=K, min=0, max=K-1)
+ area_union = area_output + area_target - area_intersection
+ return area_intersection, area_union, area_target
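+
+# Worked example (illustrative addition): per-class IoU is intersection/union
+# and per-class accuracy is intersection/target, accumulated via AverageMeters.
+#
+#   out = torch.tensor([0, 1, 1, 2])   # hypothetical predictions, K=3 classes
+#   tgt = torch.tensor([0, 1, 2, 2])   # ground truth
+#   inter, union, target = intersectionAndUnionGPU(out, tgt, K=3)
+#   iou = inter / (union + 1e-10)      # tensor([1.0, 0.5, 0.5])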
diff --git a/zoo/PAConv/part_seg/README.md b/zoo/PAConv/part_seg/README.md
new file mode 100644
index 0000000..191096a
--- /dev/null
+++ b/zoo/PAConv/part_seg/README.md
@@ -0,0 +1,82 @@
+3D Shape Part Segmentation
+============================
+
+
+## Installation
+
+### Requirements
+* Hardware: GPU(s) with at least 14,000 MB of memory in total
+* Software:
+ Linux (tested on Ubuntu 18.04)
+ PyTorch>=1.5.0, Python>=3, CUDA>=10.1, tensorboardX, tqdm, pyYaml
+
+### Dataset
+Download and unzip [ShapeNet Part](https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0_normal.zip) (674M). Then symlink its location as follows (alternatively, modify the dataset path [here](https://github.com/CVMI-Lab/PAConv/blob/main/part_seg/util/data_util.py#L20)):
+```
+mkdir -p data
+ln -s /path/to/shapenet-part/shapenetcore_partanno_segmentation_benchmark_v0_normal data
+```
+
+## Usage
+
+* Build the CUDA kernel:
+
+ The first time you run the program, please allow a few moments for the [cuda_lib](./cuda_lib) kernels to be compiled **automatically**.
+ Once built, the compiled kernel is cached and this step is skipped on subsequent runs; a sketch of the underlying JIT loading is given at the end of this section.
+
+
+* Train:
+
+ * Multi-thread training ([nn.DataParallel](https://pytorch.org/docs/stable/nn.html#dataparallel)) :
+
+ * `python main.py --config config/dgcnn_paconv_train.yaml` (Embed PAConv into [DGCNN](https://arxiv.org/abs/1801.07829))
+
+
+ * We also provide faster **multi-process training** ([nn.parallel.DistributedDataParallel](https://pytorch.org/docs/stable/_modules/torch/nn/parallel/distributed.html), **recommended**) with the official [nn.SyncBatchNorm](https://pytorch.org/docs/master/nn.html#torch.nn.SyncBatchNorm). Remember to specify the GPU IDs:
+
+ * `CUDA_VISIBLE_DEVICES=x,x python main_ddp.py --config config/dgcnn_paconv_train.yaml` (Embed PAConv into [DGCNN](https://arxiv.org/abs/1801.07829))
+
+
+* Test:
+ * Download our [pretrained model](https://drive.google.com/drive/folders/1mIahmPMeCdX5WyUOGa0IrdEtBEzBUa67?usp=sharing) and put it under the [part_seg](/part_seg) folder.
+
+ * Run the voting evaluation script to test our pretrained models; with voting you should obtain an instance mIoU of 86.1%:
+
+ `python eval_voting.py --config config/dgcnn_paconv_test.yaml`
+
+ * You can also directly test our pretrained model without voting to get an instance mIoU of 86.0%:
+
+ `python main.py --config config/dgcnn_paconv_test.yaml`
+
+ * For a full test after training the model:
+ * Set `eval` to `True` in your config file.
+
+ * Make sure to use **[main.py](main.py)** (main_ddp.py may produce incorrect results because DDP's all_reduce can drop or duplicate samples during multi-process evaluation):
+
+ `python main.py --config config/your config file.yaml`
+
+ * You can choose to test the model with the best instance mIoU, class mIoU or accuracy, by specifying `model_type` to `insiou`, `clsiou` or `acc` in the test config file.
+
+* Visualization: [tensorboardX](https://github.com/lanpa/tensorboardX) incorporated for better visualization.
+
+ `tensorboard --logdir=checkpoints/exp_name`
+
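+The automatic kernel build mentioned above relies on PyTorch's JIT extension loader; the sketch below mirrors the pattern used in [cuda_lib/src/__init__.py](cuda_lib/src/__init__.py) (the path handling here is illustrative):
+
+```
+import os
+from torch.utils.cpp_extension import load
+
+src = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'cuda_lib', 'src', 'gpu')
+gpu = load('gpconv_cuda',
+           [os.path.join(src, 'operator.cpp'),
+            os.path.join(src, 'assign_score_withk_gpu.cu')],
+           build_directory=src,  # cache the compiled objects next to the sources
+           verbose=False)
+```
+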
+
+## Citation
+If you find the code or trained models useful, please consider citing:
+```
+@inproceedings{xu2021paconv,
+ title={PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds},
+ author={Xu, Mutian and Ding, Runyu and Zhao, Hengshuang and Qi, Xiaojuan},
+ booktitle={CVPR},
+ year={2021}
+}
+```
+
+## Contact
+
+You are welcome to send pull requests or share some ideas with us. Contact information: Mutian Xu (mino1018@outlook.com) or Runyu Ding (ryding@eee.hku.hk).
+
+
+## Acknowledgement
+This code is partially borrowed from [DGCNN](https://github.com/WangYueFt/dgcnn) and [PointNet++](https://github.com/charlesq34/pointnet2).
diff --git a/zoo/PAConv/part_seg/config/dgcnn_paconv_test.yaml b/zoo/PAConv/part_seg/config/dgcnn_paconv_test.yaml
new file mode 100644
index 0000000..3a4d9bc
--- /dev/null
+++ b/zoo/PAConv/part_seg/config/dgcnn_paconv_test.yaml
@@ -0,0 +1,15 @@
+MODEL:
+ num_matrices: [8, 8, 8, 8]
+ k_neighbors: 30
+ calc_scores: softmax
+ hidden: [[16,16,16],[16,16,16],[16,16,16],[16,16,16]]
+
+TEST:
+ exp_name: dgcnn_paconv_test
+ num_points: 2048
+ test_batch_size: 16
+ workers: 6
+ no_cuda: False
+ eval: True
+ dropout: 0.4
+ model_type: insiou # choose to test the best insiou/clsiou/acc model
\ No newline at end of file
diff --git a/zoo/PAConv/part_seg/config/dgcnn_paconv_train.yaml b/zoo/PAConv/part_seg/config/dgcnn_paconv_train.yaml
new file mode 100644
index 0000000..741bd35
--- /dev/null
+++ b/zoo/PAConv/part_seg/config/dgcnn_paconv_train.yaml
@@ -0,0 +1,22 @@
+MODEL:
+ num_matrices: [8, 8, 8, 8]
+ k_neighbors: 30
+ calc_scores: softmax
+ hidden: [[16,16,16],[16,16,16],[16,16,16],[16,16,16]]
+
+TRAIN:
+ exp_name: dgcnn_paconv_train_3
+ num_points: 2048
+ batch_size: 128
+ test_batch_size: 16
+ workers: 6
+ epochs: 200
+ use_sgd: False # use sgd or adam
+ lr: 0.003
+ momentum: 0.9
+ scheduler: step
+ no_cuda: False
+ eval: False
+ dropout: 0.4
+ step: 40 # lr decay step
+ weight_decay: 0
\ No newline at end of file
diff --git a/zoo/PAConv/part_seg/cuda_lib/__init__.py b/zoo/PAConv/part_seg/cuda_lib/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/zoo/PAConv/part_seg/cuda_lib/functional.py b/zoo/PAConv/part_seg/cuda_lib/functional.py
new file mode 100644
index 0000000..a494c3c
--- /dev/null
+++ b/zoo/PAConv/part_seg/cuda_lib/functional.py
@@ -0,0 +1,7 @@
+from . import functions
+
+
+def assign_score_withk(score, point_input, center_input, knn_idx, aggregate='sum'):
+ return functions.assign_score_withk(score, point_input, center_input, knn_idx, aggregate)
+
+
diff --git a/zoo/PAConv/part_seg/cuda_lib/functions/__init__.py b/zoo/PAConv/part_seg/cuda_lib/functions/__init__.py
new file mode 100644
index 0000000..df9d06f
--- /dev/null
+++ b/zoo/PAConv/part_seg/cuda_lib/functions/__init__.py
@@ -0,0 +1 @@
+from .assignscore import *
\ No newline at end of file
diff --git a/zoo/PAConv/part_seg/cuda_lib/functions/assignscore.py b/zoo/PAConv/part_seg/cuda_lib/functions/assignscore.py
new file mode 100644
index 0000000..7a19aab
--- /dev/null
+++ b/zoo/PAConv/part_seg/cuda_lib/functions/assignscore.py
@@ -0,0 +1,67 @@
+import torch
+from torch.autograd import Function
+
+from .. import src
+
+
+class AssignScoreWithK(Function):
+ @staticmethod
+ def forward(ctx, scores, points, centers, knn_idx, aggregate):  # -> torch.Tensor
+ """
+ :param ctx
+ :param scores: (B, N, K, M)
+ :param points: (B, N, M, O)
+ :param centers: (B, N, M, O)
+ :param knn_idx: (B, N, K)
+ :param aggregate:
+ :return: output: (B, O, N)
+ """
+
+ agg = {'sum': 0, 'avg': 1, 'max': 2}
+
+ B, N, M, O = points.size()
+ K = scores.size(2)
+
+ output = torch.zeros([B, O, N], dtype=points.dtype, device=points.device)
+ output = output.contiguous()
+
+ src.gpu.assign_score_withk_forward_cuda(B, N, M, K, O, agg[aggregate],
+ points.contiguous(), centers.contiguous(),
+ scores.contiguous(), knn_idx.contiguous(),
+ output)
+
+ ctx.save_for_backward(output, points, centers, scores, knn_idx)
+ ctx.agg = agg[aggregate]
+
+ return output
+
+ @staticmethod
+ def backward(ctx, grad_out):
+ """
+
+ :param ctx:
+ :param grad_out: (B, O, N) tensor with gradients of the outputs
+ :return: grad_scores: (B, N, K, M) tensor with gradients of scores
+ :return: grad_points: (B, N, M, O) tensor with gradients of point features
+ :return: grad_centers: (B, N, M, O) tensor with gradients of center point features
+ """
+ output, points, centers, scores, knn_idx = ctx.saved_tensors
+
+ agg = ctx.agg
+
+ B, N, M, O = points.size()
+ K = scores.size(2)
+
+ grad_points = torch.zeros_like(points, dtype=points.dtype, device=points.device).contiguous()
+ grad_centers = torch.zeros_like(centers, dtype=points.dtype, device=points.device).contiguous()
+ grad_scores = torch.zeros_like(scores, dtype=scores.dtype, device=scores.device).contiguous()
+
+ src.gpu.assign_score_withk_backward_cuda(B, N, M, K, O, agg, grad_out.contiguous(),
+ points.contiguous(), centers.contiguous(),
+ scores.contiguous(), knn_idx.contiguous(),
+ grad_points, grad_centers, grad_scores)
+
+ # forward takes five inputs (scores, points, centers, knn_idx, aggregate), so exactly five gradients must be returned
+ return grad_scores, grad_points, grad_centers, None, None
+
+
+assign_score_withk = AssignScoreWithK.apply
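+
+# Shape-contract sketch (illustrative addition; requires a CUDA build of the
+# extension): scores (B, N, K, M) softly combine the M candidate features of
+# each of the K neighbors, gathered through knn_idx, into a (B, O, N) output.
+#
+#   B, N, K, M, O = 2, 128, 16, 8, 64
+#   scores = torch.rand(B, N, K, M).cuda()
+#   points = torch.rand(B, N, M, O).cuda()
+#   centers = torch.rand(B, N, M, O).cuda()
+#   knn_idx = torch.randint(0, N, (B, N, K)).cuda()
+#   out = assign_score_withk(scores, points, centers, knn_idx, 'sum')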
diff --git a/zoo/PAConv/part_seg/cuda_lib/src/__init__.py b/zoo/PAConv/part_seg/cuda_lib/src/__init__.py
new file mode 100644
index 0000000..577c733
--- /dev/null
+++ b/zoo/PAConv/part_seg/cuda_lib/src/__init__.py
@@ -0,0 +1,14 @@
+import os
+import torch
+from torch.utils.cpp_extension import load
+
+cwd = os.path.dirname(os.path.realpath(__file__))
+gpu_path = os.path.join(cwd, 'gpu')
+
+if torch.cuda.is_available():
+ gpu = load('gpconv_cuda', [
+ os.path.join(gpu_path, 'operator.cpp'),
+ os.path.join(gpu_path, 'assign_score_withk_gpu.cu'),
+ ], build_directory=gpu_path, verbose=False)
+
+
diff --git a/zoo/PAConv/part_seg/cuda_lib/src/gpu/.ninja_deps b/zoo/PAConv/part_seg/cuda_lib/src/gpu/.ninja_deps
new file mode 100644
index 0000000..302c71c
Binary files /dev/null and b/zoo/PAConv/part_seg/cuda_lib/src/gpu/.ninja_deps differ
diff --git a/zoo/PAConv/part_seg/cuda_lib/src/gpu/.ninja_log b/zoo/PAConv/part_seg/cuda_lib/src/gpu/.ninja_log
new file mode 100644
index 0000000..948eec9
--- /dev/null
+++ b/zoo/PAConv/part_seg/cuda_lib/src/gpu/.ninja_log
@@ -0,0 +1,12 @@
+# ninja log v5
+1 29317 1644583731000000000 operator.o 37cbb46becd7c2fc
+1 18658 1645316842000000000 operator.o b2128e73136252e0
+1 57698 1645316881000000000 assign_score_withk_gpu.cuda.o d069ea62f0a82020
+57699 57922 1645316881000000000 gpconv_cuda.so 139c02a8b8ded873
+1 18986 1645318472000000000 operator.o d492d457e3e2bc38
+2 18284 1645318829000000000 operator.o b2128e73136252e0
+18286 18441 1645318829000000000 gpconv_cuda.so 139c02a8b8ded873
+3 23442 1645319819000000000 operator.o 37cbb46becd7c2fc
+3 19314 1645331459000000000 operator.o c3e903a2ae3c5191
+5 18662 1645331537000000000 operator.o b2128e73136252e0
+18665 18822 1645331538000000000 gpconv_cuda.so 139c02a8b8ded873
diff --git a/zoo/PAConv/part_seg/cuda_lib/src/gpu/assign_score_withk_gpu.cu b/zoo/PAConv/part_seg/cuda_lib/src/gpu/assign_score_withk_gpu.cu
new file mode 100644
index 0000000..d4ed602
--- /dev/null
+++ b/zoo/PAConv/part_seg/cuda_lib/src/gpu/assign_score_withk_gpu.cu
@@ -0,0 +1,220 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <math.h>
+#include <ATen/cuda/CUDAContext.h>
+#include "cuda_utils.h"
+#include "utils.h"
+
+
+// input: points(B,N,M,O), centers(B,N,M,O), scores(B,N,K,M), idx(B,N,K)
+// ouput: fout(B,O,N)
+// algo: fout(b,i,k,j) = s(b,i,k,m)*p(b,i,k,m,j) = s(b,i,k,m)*p(b,i(k),m,j)
+// i(k) = idx(b,i,k)
+// sum: fout(b,i,j) = fout(b,i,j) + s(b,i,k,m)*p(b,i,k,m,j)
+// avg: fout(b,i,j) = sum(fout(b,i,k,j)) / k
+// max: fout(b,i,j) = max(fout(b,i,k,j), sum(s(b,i,k,m)*p(b,i,k,m,j)))
+// k,m : sequential
+// b,n: parallel
+
+const int SUM = 0;
+const int AVG = 1;
+const int MAX = 2;
+
+#ifndef _CLOCK_T_DEFINED
+typedef long clock_t;
+#define _CLOCK_T_DEFINED
+#endif
+
+__global__ void assign_score_withk_forward_kernel(const int nthreads, const int B, const int N, const int M,
+ const int K, const int O, const int aggregate,
+ const float* points,
+ const float* centers,
+ const float* scores,
+ const long* knn_idx,
+ float* output) {
+
+ // clock_t start, finish;
+ // start = clock();
+
+ // ----- parallel loop for B, N and O ---------
+ for (long i = blockIdx.x * blockDim.x + threadIdx.x; i < nthreads; i += blockDim.x * gridDim.x) {
+ // ----- loop for K ---------
+ for (int k = 0; k < K; k++) {
+ // ------- loop for M ----------
+ for (int m = 0; m < M; m++) {
+ int b = (int)(i / (O * N));
+ int n = (int)(i % (O * N) / O);
+ // int k = (int)(i % (O * K * M) / (O * M));
+ // int m = (int)(i % (O * M) / O);
+ int o = (int)(i % O);
+ int kn = (int) knn_idx[b*K*N + n*K + k];
+ assert (b < B);
+ assert (kn < N);
+ assert (o < O);
+ assert (n < N);
+
+ if (aggregate == SUM) {
+ // feature concat
+ // output[b*N*O + o*N + n] += 2 * points[b*N*M*O + kn*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m];
+ // output[b*N*O + o*N + n] -= points[b*N*M*O + n*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m];
+ atomicAdd(output + b*N*O + o*N + n,
+ points[b*N*M*O + kn*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m]
+ - centers[b*N*M*O + n*M*O + m*O + o] * scores[b*N*K*M + n*K*M + k*M + m]);
+ }
+ else if (aggregate == AVG) {
+ output[o*N + n] += 2 * points[kn*M*O + m*O + o] * scores[n*K*M + k*M + m] / K;
+ output[o*N + n] -= points[n*M*O + m*O + o] * scores[n*K*M + k*M + m] / K;
+ }
+ else if (aggregate == MAX) {
+ /***
+ float tmp = points[i*K*M + k*M + m] * scores[((int)(i/O))*K*M + k*M + m];
+ output[i] = tmp > output[i] ? tmp: output[i];
+ ***/
+ }
+ }
+ }
+ }
+
+ // finish = clock();
+ // printf("assign socre forward timeοΌblockid %d, %f\n", batch_idx, (double)(finish - start)/10000.0);
+}
+
+
+__global__ void assign_score_withk_backward_points_kernel(const int nthreads, const int B, const int N, const int M,
+ const int K, const int O, const int aggregate,
+ const float* grad_out,
+ const float* scores,
+ const long* knn_idx,
+ float* grad_points,
+ float* grad_centers) {
+
+ // clock_t start, finish;
+ // start = clock();
+
+ // ----- parallel loop for M, O ---------
+ for (long i = blockIdx.x * blockDim.x + threadIdx.x; i < nthreads; i += blockDim.x * gridDim.x) {
+ int b = (int)(i / (M * O));
+ int m = (int)(i % (M * O) / O);
+ int o = (int)(i % O);
+
+ // ----- loop for N,K ---------
+ for (int n = 0; n < N; n++) {
+ for (int k = 0; k < K; k++) {
+ int kn = knn_idx[b*N*K + n*K + k];
+ atomicAdd(grad_points + b*N*M*O + kn*M*O + m*O + o,
+ scores[b*N*K*M + n*K*M + k*M + m] * grad_out[b*O*N + o*N + n]);
+ atomicAdd(grad_centers + b*N*M*O + n*M*O + m*O + o,
+ - scores[b*N*K*M + n*K*M + k*M + m] * grad_out[b*O*N + o*N + n]);
+ //grad_points[b*N*M*O + kn*M*O + m*O + o] += 2 * scores[b*N*K*M + n*K*M + k*M + m] * grad_out[b*O*N + o*N + n];
+ //grad_points[b*N*M*O + n*M*O + m*O + o] -= scores[b*N*K*M + n*K*M + k*M + m] * grad_out[b*O*N + o*N + n];
+ }
+ }
+ }
+ // finish = clock();
+ // printf("assign socre backward time 1οΌblockid %d, %f\n", batch_idx, (double)(finish - start)/10000.0);
+
+}
+
+
+__global__ void assign_score_withk_backward_scores_kernel(const int nthreads, const int B, const int N, const int M,
+ const int K, const int O, const int aggregate,
+ const float* grad_out,
+ const float* points,
+ const float* centers,
+ const long* knn_idx,
+ float* grad_scores) {
+
+ // clock_t start, finish;
+ // start = clock();
+
+ // ----- parallel loop for N, K, M ---------
+ for (long i = blockIdx.x * blockDim.x + threadIdx.x; i < nthreads; i += blockDim.x * gridDim.x) {
+ // for (int i = index; i < N*K*M; i += stride) {
+ int b = (int)(i / (N * M * K));
+ int n = (int)(i % (N * M * K) / M / K);
+ int k = (int)(i % (M * K) / M);
+ int m = (int)(i % M);
+ int kn = knn_idx[b*N*K + n*K + k];
+
+ for(int o = 0; o < O; o++) {
+ atomicAdd(grad_scores + b*N*K*M + n*K*M + k*M + m,
+ (points[b*N*M*O + kn*M*O + m*O + o]
+ - centers[b*N*M*O + n*M*O + m*O + o])* grad_out[b*O*N + o*N + n]);
+ // grad_scores[b*N*K*M + n*K*M + k*M + m] += (2 * points[b*N*M*O + kn*M*O + m*O + o] - points[b*N*M*O + n*M*O + m*O + o])* grad_out[b*O*N + o*N + n];
+ }
+ }
+
+ // finish = clock();
+ // printf("assign socre backward time 2οΌblockid %d, %f\n", batch_idx, (double)(finish - start)/10000.0);
+}
+
+
+void assign_score_withk_forward_kernel_wrapper(int B, int N, int M, int K, int O, int aggregate,
+ const at::Tensor& points,
+ const at::Tensor& centers,
+ const at::Tensor& scores,
+ const at::Tensor& knn_idx,
+ at::Tensor& output) {
+ CHECK_CONTIGUOUS(points);
+ CHECK_CONTIGUOUS(centers);
+ CHECK_CONTIGUOUS(scores);
+ CHECK_CONTIGUOUS(knn_idx);
+ CHECK_CONTIGUOUS(output);
+
+ const float* points_data = points.data_ptr<float>();
+ const float* centers_data = centers.data_ptr<float>();
+ const float* scores_data = scores.data_ptr<float>();
+ const long* knn_idx_data = knn_idx.data_ptr<long>();
+ float* output_data = output.data_ptr<float>();
+
+ int nthreads = B * N * O; // * K * M;
+
+ assign_score_withk_forward_kernel<<<(nthreads + 512 - 1) / 512, 512>>>(
+ nthreads, B, N, M, K, O, aggregate, points_data, centers_data, scores_data, knn_idx_data, output_data);
+
+ CUDA_CHECK_ERRORS();
+
+}
+
+
+void assign_score_withk_backward_kernel_wrapper(int B, int N, int M, int K, int O, int aggregate,
+ const at::Tensor& grad_out,
+ const at::Tensor& points,
+ const at::Tensor& centers,
+ const at::Tensor& scores,
+ const at::Tensor& knn_idx,
+ at::Tensor& grad_points,
+ at::Tensor& grad_centers,
+ at::Tensor& grad_scores) {
+
+ CHECK_CONTIGUOUS(grad_out);
+ CHECK_CONTIGUOUS(scores);
+ CHECK_CONTIGUOUS(points);
+ CHECK_CONTIGUOUS(centers);
+ CHECK_CONTIGUOUS(knn_idx);
+ CHECK_CONTIGUOUS(grad_scores);
+ CHECK_CONTIGUOUS(grad_points);
+ CHECK_CONTIGUOUS(grad_centers);
+
+ const float* grad_out_data = grad_out.data_ptr<float>();
+ const float* points_data = points.data_ptr<float>();
+ const float* centers_data = centers.data_ptr<float>();
+ const float* scores_data = scores.data_ptr<float>();
+ const long* knn_idx_data = knn_idx.data_ptr<long>();
+ float* grad_points_data = grad_points.data_ptr<float>();
+ float* grad_centers_data = grad_centers.data_ptr<float>();
+ float* grad_scores_data = grad_scores.data_ptr<float>();
+
+ cudaStream_t stream = at::cuda::getCurrentCUDAStream();
+
+ int nthreads_1 = B * M * O;
+ int nthreads_2 = B * N * K * M;
+
+ assign_score_withk_backward_points_kernel<<<(nthreads_1 + 512 - 1) / 512, 512, 0, stream>>>(
+ nthreads_1, B, N, M, K, O, aggregate, grad_out_data, scores_data, knn_idx_data, grad_points_data, grad_centers_data);
+ assign_score_withk_backward_scores_kernel<<<(nthreads_2 + 512 - 1) / 512, 512, 0, stream>>>(
+ nthreads_2, B, N, M, K, O, aggregate, grad_out_data, points_data, centers_data, knn_idx_data, grad_scores_data);
+
+ CUDA_CHECK_ERRORS();
+
+}
diff --git a/zoo/PAConv/part_seg/cuda_lib/src/gpu/assign_score_withk_gpu.cuda.o b/zoo/PAConv/part_seg/cuda_lib/src/gpu/assign_score_withk_gpu.cuda.o
new file mode 100644
index 0000000..c0f0081
Binary files /dev/null and b/zoo/PAConv/part_seg/cuda_lib/src/gpu/assign_score_withk_gpu.cuda.o differ
diff --git a/zoo/PAConv/part_seg/cuda_lib/src/gpu/build.ninja b/zoo/PAConv/part_seg/cuda_lib/src/gpu/build.ninja
new file mode 100644
index 0000000..7f34217
--- /dev/null
+++ b/zoo/PAConv/part_seg/cuda_lib/src/gpu/build.ninja
@@ -0,0 +1,26 @@
+ninja_required_version = 1.3
+cxx = /mnt/lustre/share/gcc/gcc-5.3.0/bin/c++
+nvcc = /mnt/lustre/share/cuda-10.0/bin/nvcc
+
+cflags = -DTORCH_EXTENSION_NAME=gpconv_cuda -DTORCH_API_INCLUDE_EXTENSION_H -isystem /mnt/lustre/ldkong/anaconda3/envs/modelnetc/lib/python3.7/site-packages/torch/include -isystem /mnt/lustre/ldkong/anaconda3/envs/modelnetc/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /mnt/lustre/ldkong/anaconda3/envs/modelnetc/lib/python3.7/site-packages/torch/include/TH -isystem /mnt/lustre/ldkong/anaconda3/envs/modelnetc/lib/python3.7/site-packages/torch/include/THC -isystem /mnt/lustre/share/cuda-10.0/include -isystem /mnt/lustre/ldkong/anaconda3/envs/modelnetc/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11
+cuda_flags = -DTORCH_EXTENSION_NAME=gpconv_cuda -DTORCH_API_INCLUDE_EXTENSION_H -isystem /mnt/lustre/ldkong/anaconda3/envs/modelnetc/lib/python3.7/site-packages/torch/include -isystem /mnt/lustre/ldkong/anaconda3/envs/modelnetc/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /mnt/lustre/ldkong/anaconda3/envs/modelnetc/lib/python3.7/site-packages/torch/include/TH -isystem /mnt/lustre/ldkong/anaconda3/envs/modelnetc/lib/python3.7/site-packages/torch/include/THC -isystem /mnt/lustre/share/cuda-10.0/include -isystem /mnt/lustre/ldkong/anaconda3/envs/modelnetc/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_70,code=sm_70 --compiler-options '-fPIC' -std=c++11
+ldflags = -shared -L/mnt/lustre/share/cuda-10.0/lib64 -lcudart
+
+rule compile
+ command = $cxx -MMD -MF $out.d $cflags -c $in -o $out
+ depfile = $out.d
+ deps = gcc
+
+rule cuda_compile
+ command = $nvcc $cuda_flags -c $in -o $out
+
+rule link
+ command = $cxx $in $ldflags -o $out
+
+build operator.o: compile /mnt/lustre/ldkong/models/PAConv/part_seg/cuda_lib/src/gpu/operator.cpp
+build assign_score_withk_gpu.cuda.o: cuda_compile /mnt/lustre/ldkong/models/PAConv/part_seg/cuda_lib/src/gpu/assign_score_withk_gpu.cu
+
+build gpconv_cuda.so: link operator.o assign_score_withk_gpu.cuda.o
+
+default gpconv_cuda.so
+
diff --git a/zoo/PAConv/part_seg/cuda_lib/src/gpu/cuda_utils.h b/zoo/PAConv/part_seg/cuda_lib/src/gpu/cuda_utils.h
new file mode 100644
index 0000000..dbce9e0
--- /dev/null
+++ b/zoo/PAConv/part_seg/cuda_lib/src/gpu/cuda_utils.h
@@ -0,0 +1,41 @@
+#ifndef _CUDA_UTILS_H
+#define _CUDA_UTILS_H
+
+#include <ATen/ATen.h>
+#include <ATen/cuda/CUDAContext.h>
+#include <cmath>
+
+#include <cuda.h>
+#include <cuda_runtime.h>
+
+#include <vector>
+
+#define TOTAL_THREADS 512
+
+inline int opt_n_threads(int work_size) {
+ const int pow_2 = std::log(static_cast<double>(work_size)) / std::log(2.0);
+
+ return max(min(1 << pow_2, TOTAL_THREADS), 1);
+}
+
+inline dim3 opt_block_config(int x, int y) {
+ const int x_threads = opt_n_threads(x);
+ const int y_threads =
+ max(min(opt_n_threads(y), TOTAL_THREADS / x_threads), 1);
+ dim3 block_config(x_threads, y_threads, 1);
+
+ return block_config;
+}
+
+#define CUDA_CHECK_ERRORS() \
+ do { \
+ cudaError_t err = cudaGetLastError(); \
+ if (cudaSuccess != err) { \
+ fprintf(stderr, "CUDA kernel failed : %s\n%s at L:%d in %s\n", \
+ cudaGetErrorString(err), __PRETTY_FUNCTION__, __LINE__, \
+ __FILE__); \
+ exit(-1); \
+ } \
+ } while (0)
+
+#endif
diff --git a/zoo/PAConv/part_seg/cuda_lib/src/gpu/operator.cpp b/zoo/PAConv/part_seg/cuda_lib/src/gpu/operator.cpp
new file mode 100644
index 0000000..9ce001d
--- /dev/null
+++ b/zoo/PAConv/part_seg/cuda_lib/src/gpu/operator.cpp
@@ -0,0 +1,6 @@
+#include "operator.h"
+
+PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
+ m.def("assign_score_withk_forward_cuda", &assign_score_withk_forward_kernel_wrapper, "Assign score kernel forward (GPU), save memory version");
+ m.def("assign_score_withk_backward_cuda", &assign_score_withk_backward_kernel_wrapper, "Assign score kernel backward (GPU), save memory version");
+}
diff --git a/zoo/PAConv/part_seg/cuda_lib/src/gpu/operator.h b/zoo/PAConv/part_seg/cuda_lib/src/gpu/operator.h
new file mode 100644
index 0000000..2aa5998
--- /dev/null
+++ b/zoo/PAConv/part_seg/cuda_lib/src/gpu/operator.h
@@ -0,0 +1,28 @@
+//
+// Created by Runyu Ding on 2020/8/12.
+//
+
+#ifndef _OPERATOR_H
+#define _OPERATOR_H
+
+#include <torch/extension.h>
+
+void assign_score_withk_forward_kernel_wrapper(int B, int N, int M, int K, int O, int aggregate,
+ const at::Tensor& points,
+ const at::Tensor& centers,
+ const at::Tensor& scores,
+ const at::Tensor& knn_idx,
+ at::Tensor& output);
+
+void assign_score_withk_backward_kernel_wrapper(int B, int N, int M, int K, int O, int aggregate,
+ const at::Tensor& grad_out,
+ const at::Tensor& points,
+ const at::Tensor& centers,
+ const at::Tensor& scores,
+ const at::Tensor& knn_idx,
+ at::Tensor& grad_points,
+ at::Tensor& grad_centers,
+ at::Tensor& grad_scores);
+
+
+#endif
\ No newline at end of file
diff --git a/zoo/PAConv/part_seg/cuda_lib/src/gpu/operator.o b/zoo/PAConv/part_seg/cuda_lib/src/gpu/operator.o
new file mode 100644
index 0000000..37b21b3
Binary files /dev/null and b/zoo/PAConv/part_seg/cuda_lib/src/gpu/operator.o differ
diff --git a/zoo/PAConv/part_seg/cuda_lib/src/gpu/utils.h b/zoo/PAConv/part_seg/cuda_lib/src/gpu/utils.h
new file mode 100644
index 0000000..5f080ed
--- /dev/null
+++ b/zoo/PAConv/part_seg/cuda_lib/src/gpu/utils.h
@@ -0,0 +1,25 @@
+#pragma once
+#include <ATen/cuda/CUDAContext.h>
+#include <torch/extension.h>
+
+#define CHECK_CUDA(x) \
+ do { \
+ AT_ASSERT(x.is_cuda(), #x " must be a CUDA tensor"); \
+ } while (0)
+
+#define CHECK_CONTIGUOUS(x) \
+ do { \
+ AT_ASSERT(x.is_contiguous(), #x " must be a contiguous tensor"); \
+ } while (0)
+
+#define CHECK_IS_INT(x) \
+ do { \
+ AT_ASSERT(x.scalar_type() == at::ScalarType::Int, \
+ #x " must be an int tensor"); \
+ } while (0)
+
+#define CHECK_IS_FLOAT(x) \
+ do { \
+ AT_ASSERT(x.scalar_type() == at::ScalarType::Float, \
+ #x " must be a float tensor"); \
+ } while (0)
diff --git a/zoo/PAConv/part_seg/debug.py b/zoo/PAConv/part_seg/debug.py
new file mode 100644
index 0000000..0cc4107
--- /dev/null
+++ b/zoo/PAConv/part_seg/debug.py
@@ -0,0 +1,19 @@
+from util.data_util import PartNormalDataset, ShapeNetC
+from torch.utils.data import DataLoader
+from torch.autograd import Variable
+
+
+
+# test_data = PartNormalDataset(npoints=2048, split='test', normalize=False)
+test_data = ShapeNetC(partition='shapenet-c', sub='clean', class_choice=None)
+print("The number of test data is: {}".format(len(test_data)))
+
+test_loader = DataLoader(test_data, batch_size=16, shuffle=False, num_workers=6, drop_last=False)
+
+
+for batch_id, (points, label, target, norm_plt) in enumerate(test_loader):
+ batch_size, num_point, _ = points.size()
+ points, label, target, norm_plt = Variable(points.float()), Variable(label.long()), Variable(target.long()), Variable(norm_plt.float())
+ points = points.transpose(2, 1)
+ norm_plt = norm_plt.transpose(2, 1)
+ points, label, target, norm_plt = points.cuda(non_blocking=True), label.squeeze().cuda(non_blocking=True), target.cuda(non_blocking=True), norm_plt.cuda(non_blocking=True)
diff --git a/zoo/PAConv/part_seg/eval_voting.py b/zoo/PAConv/part_seg/eval_voting.py
new file mode 100644
index 0000000..796e74e
--- /dev/null
+++ b/zoo/PAConv/part_seg/eval_voting.py
@@ -0,0 +1,186 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+from util.data_util import PartNormalDataset
+from model.DGCNN_PAConv_vote import PAConv
+import numpy as np
+from torch.utils.data import DataLoader
+from util.util import to_categorical, compute_overall_iou, load_cfg_from_cfg_file, merge_cfg_from_list, IOStream
+from tqdm import tqdm
+from collections import defaultdict
+from torch.autograd import Variable
+import torch.nn.functional as F
+
+classes_str = ['aero', 'bag', 'cap', 'car', 'chair', 'ear', 'guitar', 'knife', 'lamp', 'lapt', 'moto', 'mug', 'Pistol', 'rock', 'stake', 'table']
+
+
+class PointcloudScale(object):
+ def __init__(self, scale_low=2. / 3., scale_high=3. / 2.):
+ self.scale_low = scale_low
+ self.scale_high = scale_high
+
+ def __call__(self, pc):
+ bsize = pc.size()[0]
+ for i in range(bsize):
+ xyz = np.random.uniform(low=self.scale_low, high=self.scale_high, size=[3])
+ pc[i, :, 0:3] = torch.mul(pc[i, :, 0:3], torch.from_numpy(xyz).float().cuda())
+ return pc
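+
+# PointcloudScale applies an independent random per-axis scale (drawn from
+# [scale_low, scale_high]) to every cloud in the batch; eval_voting uses it to
+# generate NUM_VOTE perturbed copies whose softmax outputs are averaged.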
+
+
+def get_parser():
+ parser = argparse.ArgumentParser(description='3D Shape Part Segmentation')
+ parser.add_argument('--config', type=str, default='dgcnn_paconv_test.yaml', help='config file')
+ parser.add_argument('opts', help='see config/dgcnn_paconv_test.yaml for all options', default=None, nargs=argparse.REMAINDER)
+ args = parser.parse_args()
+ assert args.config is not None
+ cfg = load_cfg_from_cfg_file(args.config)
+ if args.opts is not None:
+ cfg = merge_cfg_from_list(cfg, args.opts)
+
+ cfg['manual_seed'] = cfg.get('manual_seed', 0)
+ cfg['workers'] = cfg.get('workers', 6)
+ return cfg
+
+
+def _init_():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/' + args.exp_name):
+ os.makedirs('checkpoints/' + args.exp_name)
+
+ # backup the running files:
+ os.system('cp eval_voting.py checkpoints' + '/' + args.exp_name + '/' + 'eval_voting.py.backup')
+
+
+def test(args, io):
+ # Try to load models
+ num_part = 50
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ model = PAConv(args, num_part).to(device)
+ io.cprint(str(model))
+
+ from collections import OrderedDict
+ state_dict = torch.load("checkpoints/%s/best_insiou_model.pth" % args.exp_name,
+ map_location=torch.device('cpu'))['model']
+
+ new_state_dict = OrderedDict()
+ for layer in state_dict:
+ new_state_dict[layer.replace('module.', '')] = state_dict[layer]
+ model.load_state_dict(new_state_dict)
+
+ # Dataloader
+ test_data = PartNormalDataset(npoints=2048, split='test', normalize=False)
+ print("The number of test data is:%d", len(test_data))
+
+ test_loader = DataLoader(test_data, batch_size=args.test_batch_size, shuffle=False, num_workers=args.workers,
+ drop_last=False)
+
+ NUM_REPEAT = 100
+ NUM_VOTE = 10
+ global_Class_mIoU, global_Inst_mIoU = 0, 0
+ global_total_per_cat_iou = np.zeros((16)).astype(np.float32)
+ num_part = 50
+ num_classes = 16
+ pointscale = PointcloudScale(scale_low=0.87, scale_high=1.15)
+
+ model.eval()
+
+ for i in range(NUM_REPEAT):
+
+ metrics = defaultdict(lambda: list())
+ shape_ious = []
+ total_per_cat_iou = np.zeros((16)).astype(np.float32)
+ total_per_cat_seen = np.zeros((16)).astype(np.int32)
+
+ for batch_id, (points, label, target, norm_plt) in tqdm(enumerate(test_loader), total=len(test_loader),
+ smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target, norm_plt = Variable(points.float()), Variable(label.long()), Variable(
+ target.long()), Variable(norm_plt.float())
+ # points = points.transpose(2, 1)
+ norm_plt = norm_plt.transpose(2, 1)
+ points, label, target, norm_plt = points.cuda(non_blocking=True), label.squeeze().cuda(
+ non_blocking=True), target.cuda(non_blocking=True), norm_plt.cuda(non_blocking=True)
+
+ seg_pred = 0
+ new_points = torch.zeros_like(points)  # 'volatile' was removed in modern PyTorch; inference runs under no_grad below
+
+ for v in range(NUM_VOTE):
+ if v > 0:
+ new_points.data = pointscale(points.data)
+ with torch.no_grad():
+ seg_pred += F.softmax(
+ model(points.contiguous().transpose(2, 1), new_points.contiguous().transpose(2, 1),
+ norm_plt, to_categorical(label, num_classes)), dim=2)  # only the feature input is scaled; the xyz input stays fixed
+ seg_pred /= NUM_VOTE
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+ shape_ious += batch_shapeious  # list concatenation: extends shape_ious with this batch's per-sample ious
+
+ # per category iou at each batch_size:
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx]
+ total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx]
+ total_per_cat_seen[cur_gt_label] += 1
+
+ # accuracy:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ pred_choice = seg_pred.data.max(1)[1]
+ correct = pred_choice.eq(target.data).cpu().sum()
+ metrics['accuracy'].append(correct.item() / (batch_size * num_point))
+
+ metrics['shape_avg_iou'] = np.mean(shape_ious)
+ for cat_idx in range(16):
+ if total_per_cat_seen[cat_idx] > 0:
+ total_per_cat_iou[cat_idx] = total_per_cat_iou[cat_idx] / total_per_cat_seen[cat_idx]
+
+ print('\n------ Repeat %3d ------' % (i + 1))
+
+ # First we need to calculate the iou of each class and the avg class iou:
+ class_iou = 0
+ for cat_idx in range(16):
+ class_iou += total_per_cat_iou[cat_idx]
+ io.cprint(classes_str[cat_idx] + ' iou: ' + str(total_per_cat_iou[cat_idx])) # print the iou of each class
+ avg_class_iou = class_iou / 16
+ outstr = 'Test :: test class mIOU: %f, test instance mIOU: %f' % (avg_class_iou, metrics['shape_avg_iou'])
+ io.cprint(outstr)
+
+ if avg_class_iou > global_Class_mIoU:
+ global_Class_mIoU = avg_class_iou
+ global_total_per_cat_iou = total_per_cat_iou
+
+ if metrics['shape_avg_iou'] > global_Inst_mIoU:
+ global_Inst_mIoU = metrics['shape_avg_iou']
+
+ # final avg print:
+ final_out_str = 'Best voting result :: test class mIOU: %f, test instance mIOU: %f' % (global_Class_mIoU, global_Inst_mIoU)
+ io.cprint(final_out_str)
+
+ # final per cat print:
+ for cat_idx in range(16):
+ io.cprint(classes_str[cat_idx] + ' iou: ' + str(global_total_per_cat_iou[cat_idx])) # print iou of each class
+
+
+if __name__ == "__main__":
+ args = get_parser()
+ _init_()
+
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_voting.log' % (args.exp_name))
+ io.cprint(str(args))
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+ torch.manual_seed(args.manual_seed)
+ if args.cuda:
+ io.cprint(
+ 'Using GPU : ' + str(torch.cuda.current_device()) + ' from ' + str(torch.cuda.device_count()) + ' devices')
+ torch.cuda.manual_seed(args.manual_seed)
+ else:
+ io.cprint('Using CPU')
+
+ test(args, io)
+
diff --git a/zoo/PAConv/part_seg/main.py b/zoo/PAConv/part_seg/main.py
new file mode 100644
index 0000000..ac443ab
--- /dev/null
+++ b/zoo/PAConv/part_seg/main.py
@@ -0,0 +1,426 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.optim as optim
+from torch.optim.lr_scheduler import CosineAnnealingLR, StepLR
+from util.data_util import PartNormalDataset
+import torch.nn.functional as F
+import torch.nn as nn
+from model.DGCNN_PAConv import PAConv
+import numpy as np
+from torch.utils.data import DataLoader
+from util.util import to_categorical, compute_overall_iou, load_cfg_from_cfg_file, merge_cfg_from_list, IOStream
+from tqdm import tqdm
+from tensorboardX import SummaryWriter
+from collections import defaultdict
+from torch.autograd import Variable
+import random
+
+
+classes_str = ['aero','bag','cap','car','chair','ear','guitar','knife','lamp','lapt','moto','mug','pistol','rock','skate','table']
+
+
+def get_parser():
+ parser = argparse.ArgumentParser(description='3D Shape Part Segmentation')
+ parser.add_argument('--config', type=str, default='dgcnn_paconv_train.yaml', help='config file')
+ parser.add_argument('opts', help='see config/dgcnn_paconv_train.yaml for all options', default=None, nargs=argparse.REMAINDER)
+ args = parser.parse_args()
+ assert args.config is not None
+ cfg = load_cfg_from_cfg_file(args.config)
+ if args.opts is not None:
+ cfg = merge_cfg_from_list(cfg, args.opts)
+
+ cfg['manual_seed'] = cfg.get('manual_seed', 0)
+ cfg['workers'] = cfg.get('workers', 6)
+ return cfg
+
+
+def _init_():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/' + args.exp_name):
+ os.makedirs('checkpoints/' + args.exp_name)
+
+ if not args.eval: # backup the running files
+ os.system('cp main.py checkpoints' + '/' + args.exp_name + '/' + 'main.py.backup')
+ os.system('cp util/PAConv_util.py checkpoints' + '/' + args.exp_name + '/' + 'PAConv_util.py.backup')
+ os.system('cp util/data_util.py checkpoints' + '/' + args.exp_name + '/' + 'data_util.py.backup')
+ os.system('cp model/DGCNN_PAConv.py checkpoints' + '/' + args.exp_name + '/' + 'DGCNN_PAConv.py.backup')
+
+ global writer
+ writer = SummaryWriter('checkpoints/' + args.exp_name)
+
+
+def weight_init(m):
+ if isinstance(m, torch.nn.Linear):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv2d):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv1d):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm2d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm1d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+
+
+def train(args, io):
+
+ # ============= Model ===================
+ num_part = 50
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ model = PAConv(args, num_part).to(device)
+ io.cprint(str(model))
+
+ model.apply(weight_init)
+ model = nn.DataParallel(model)
+ print("Let's use", torch.cuda.device_count(), "GPUs!")
+
+ '''Use Pretrain or not'''
+ if args.get('pretrain', False):
+ state_dict = torch.load("checkpoints/%s/best_insiou_model.pth" % args.exp_name, map_location=torch.device('cpu'))['model']
+ # add the 'module.' prefix if the checkpoint was saved without DataParallel:
+ if any('module' not in k for k in state_dict.keys()):
+ from collections import OrderedDict
+ new_state_dict = OrderedDict()
+ for k in state_dict:
+ new_state_dict['module.' + k] = state_dict[k]
+ state_dict = new_state_dict
+ model.load_state_dict(state_dict)
+
+ print("Using pretrained model...")
+ print(torch.load("checkpoints/%s/best_insiou_model.pth" % args.exp_name).keys())
+ else:
+ print("Training from scratch...")
+
+ # =========== Dataloader =================
+ train_data = PartNormalDataset(npoints=2048, split='trainval', normalize=False)
+ print("The number of training data is:%d", len(train_data))
+
+ test_data = PartNormalDataset(npoints=2048, split='test', normalize=False)
+ print("The number of test data is:%d", len(test_data))
+
+ train_loader = DataLoader(train_data, batch_size=args.batch_size, shuffle=True, num_workers=args.workers,
+ drop_last=True)
+
+ test_loader = DataLoader(test_data, batch_size=args.test_batch_size, shuffle=False, num_workers=args.workers,
+ drop_last=False)
+
+ # ============= Optimizer ================
+ if args.use_sgd:
+ print("Use SGD")
+ opt = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum, weight_decay=args.weight_decay)
+ else:
+ print("Use Adam")
+ opt = optim.Adam(model.parameters(), lr=args.lr, betas=(0.9, 0.999), eps=1e-08, weight_decay=args.weight_decay)
+
+ if args.scheduler == 'cos':
+ print("Use CosLR")
+ scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=args.lr / 100)
+ else:
+ print("Use StepLR")
+ scheduler = StepLR(opt, step_size=args.step, gamma=0.5)
+
+ # ============= Training =================
+ best_acc = 0
+ best_class_iou = 0
+ best_instance_iou = 0
+ num_part = 50
+ num_classes = 16
+
+ for epoch in range(args.epochs):
+
+ train_epoch(train_loader, model, opt, scheduler, epoch, num_part, num_classes, io)
+
+ test_metrics, total_per_cat_iou = test_epoch(test_loader, model, epoch, num_part, num_classes, io)
+
+ # 1. when get the best accuracy, save the model:
+ if test_metrics['accuracy'] > best_acc:
+ best_acc = test_metrics['accuracy']
+ io.cprint('Max Acc:%.5f' % best_acc)
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': epoch, 'test_acc': best_acc}
+ torch.save(state, 'checkpoints/%s/best_acc_model.pth' % args.exp_name)
+
+ # 2. when get the best instance_iou, save the model:
+ if test_metrics['shape_avg_iou'] > best_instance_iou:
+ best_instance_iou = test_metrics['shape_avg_iou']
+ io.cprint('Max instance iou:%.5f' % best_instance_iou)
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': epoch, 'test_instance_iou': best_instance_iou}
+ torch.save(state, 'checkpoints/%s/best_insiou_model.pth' % args.exp_name)
+
+ # 3. when get the best class_iou, save the model:
+ # first we need to calculate the average per-class iou
+ class_iou = 0
+ for cat_idx in range(16):
+ class_iou += total_per_cat_iou[cat_idx]
+ avg_class_iou = class_iou / 16
+ if avg_class_iou > best_class_iou:
+ best_class_iou = avg_class_iou
+ # print the iou of each class:
+ for cat_idx in range(16):
+ io.cprint(classes_str[cat_idx] + ' iou: ' + str(total_per_cat_iou[cat_idx]))
+ io.cprint('Max class iou:%.5f' % best_class_iou)
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': epoch, 'test_class_iou': best_class_iou}
+ torch.save(state, 'checkpoints/%s/best_clsiou_model.pth' % args.exp_name)
+
+ # report best acc, ins_iou, cls_iou
+ io.cprint('Final Max Acc:%.5f' % best_acc)
+ io.cprint('Final Max instance iou:%.5f' % best_instance_iou)
+ io.cprint('Final Max class iou:%.5f' % best_class_iou)
+ # save last model
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': args.epochs - 1, 'test_iou': best_instance_iou}
+ torch.save(state, 'checkpoints/%s/model_ep%d.pth' % (args.exp_name, args.epochs))
+
+
+def train_epoch(train_loader, model, opt, scheduler, epoch, num_part, num_classes, io):
+ train_loss = 0.0
+ count = 0.0
+ accuracy = []
+ shape_ious = 0.0
+ metrics = defaultdict(lambda: list())
+ model.train()
+
+ for batch_id, (points, label, target) in tqdm(enumerate(train_loader), total=len(train_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points, label, target = points.cuda(non_blocking=True), label.squeeze(1).cuda(non_blocking=True), target.cuda(non_blocking=True)
+ points = points.permute(0, 2, 1)  # (B, N, 3) -> (B, 3, N)
+ # target: b,n
+ seg_pred, loss = model(points, to_categorical(label, num_classes), target) # seg_pred: b,n,50
+ seg_pred = seg_pred.contiguous()
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part)  # list of current batch ious: [iou1, iou2, ..., iou_batch_size]
+ # total iou of current batch in each process:
+ batch_shapeious = seg_pred.new_tensor([np.sum(batch_shapeious)], dtype=torch.float64) # same device with seg_pred!!!
+
+ # Loss backward
+ loss = torch.mean(loss)
+ opt.zero_grad()
+ loss.backward()
+ opt.step()
+
+ # accuracy
+ seg_pred = seg_pred.contiguous().view(-1, num_part) # b*n,50
+ target = target.view(-1, 1)[:, 0] # b*n
+ pred_choice = seg_pred.contiguous().data.max(1)[1] # b*n
+ correct = pred_choice.eq(target.contiguous().data).sum() # torch.int64: total number of correct-predict pts
+
+ # sum
+ shape_ious += batch_shapeious.item() # count the sum of ious in each iteration
+ count += batch_size # count the total number of samples in each iteration
+ train_loss += loss.item() * batch_size
+ accuracy.append(correct.item()/(batch_size * num_point)) # append the accuracy of each iteration
+
+ # Note: We do not need to calculate per_class iou during training
+
+ if args.scheduler == 'cos':
+ scheduler.step()
+ elif args.scheduler == 'step':
+ if opt.param_groups[0]['lr'] > 0.9e-5:
+ scheduler.step()
+ if opt.param_groups[0]['lr'] < 0.9e-5:
+ for param_group in opt.param_groups:
+ param_group['lr'] = 0.9e-5
+ io.cprint('Learning rate: %f' % opt.param_groups[0]['lr'])
+
+ metrics['accuracy'] = np.mean(accuracy)
+ metrics['shape_avg_iou'] = shape_ious * 1.0 / count
+
+ outstr = 'Train %d, loss: %f, train acc: %f, train ins_iou: %f' % (epoch+1, train_loss * 1.0 / count,
+ metrics['accuracy'], metrics['shape_avg_iou'])
+ io.cprint(outstr)
+ # Write to tensorboard
+ writer.add_scalar('loss_train', train_loss * 1.0 / count, epoch + 1)
+ writer.add_scalar('Acc_train', metrics['accuracy'], epoch + 1)
+ writer.add_scalar('ins_iou_train', metrics['shape_avg_iou'], epoch + 1)
+
+
+def test_epoch(test_loader, model, epoch, num_part, num_classes, io):
+ test_loss = 0.0
+ count = 0.0
+ accuracy = []
+ shape_ious = 0.0
+ final_total_per_cat_iou = np.zeros(16).astype(np.float32)
+ final_total_per_cat_seen = np.zeros(16).astype(np.int32)
+ metrics = defaultdict(lambda: list())
+ model.eval()
+
+ # label_size: b, means each sample has one corresponding class
+ for batch_id, (points, label, target) in tqdm(enumerate(test_loader), total=len(test_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.transpose(2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze(1).cuda(non_blocking=True), target.cuda(non_blocking=True)
+ seg_pred = model(points, to_categorical(label, num_classes)) # b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+
+ # per category iou at each batch_size:
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx], denotes current sample belongs to which cat
+ final_total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx] # add the iou belongs to this cat
+ final_total_per_cat_seen[cur_gt_label] += 1 # count the number of this cat is chosen
+
+ # total iou of current batch in each process:
+ batch_ious = seg_pred.new_tensor([np.sum(batch_shapeious)], dtype=torch.float64) # same device with seg_pred!!!
+
+ # prepare seg_pred and target for later calculating loss and acc:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ # Loss
+ loss = F.nll_loss(seg_pred.contiguous(), target.contiguous())
+
+ # accuracy:
+ pred_choice = seg_pred.data.max(1)[1] # b*n
+ correct = pred_choice.eq(target.data).sum() # torch.int64: total number of correct-predict pts
+
+ loss = torch.mean(loss)
+ shape_ious += batch_ious.item() # count the sum of ious in each iteration
+ count += batch_size # count the total number of samples in each iteration
+ test_loss += loss.item() * batch_size
+ accuracy.append(correct.item() / (batch_size * num_point)) # append the accuracy of each iteration
+
+ for cat_idx in range(16):
+ if final_total_per_cat_seen[cat_idx] > 0: # indicating this cat is included during previous iou appending
+ final_total_per_cat_iou[cat_idx] = final_total_per_cat_iou[cat_idx] / final_total_per_cat_seen[cat_idx] # avg class iou across all samples
+
+ metrics['accuracy'] = np.mean(accuracy)
+ metrics['shape_avg_iou'] = shape_ious * 1.0 / count
+
+ outstr = 'Test %d, loss: %f, test acc: %f test ins_iou: %f' % (epoch + 1, test_loss * 1.0 / count,
+ metrics['accuracy'], metrics['shape_avg_iou'])
+
+ io.cprint(outstr)
+ # Write to tensorboard
+ writer.add_scalar('loss_test', test_loss * 1.0 / count, epoch + 1)
+ writer.add_scalar('Acc_test', metrics['accuracy'], epoch + 1)
+ writer.add_scalar('ins_iou_test', metrics['shape_avg_iou'], epoch + 1)
+
+ return metrics, final_total_per_cat_iou
+
+
+def test(args, io):
+ # Dataloader
+ test_data = PartNormalDataset(npoints=2048, split='test', normalize=False)
+ print("The number of test data is:%d", len(test_data))
+
+ test_loader = DataLoader(test_data, batch_size=args.test_batch_size, shuffle=False, num_workers=args.workers, drop_last=False)
+
+ # Try to load models
+ num_part = 50
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ model = PAConv(args, num_part).to(device)
+ io.cprint(str(model))
+
+ from collections import OrderedDict
+ state_dict = torch.load("checkpoints/%s/best_%s_model.pth" % (args.exp_name, args.model_type),
+ map_location=torch.device('cpu'))['model']
+
+ new_state_dict = OrderedDict()
+ for layer in state_dict:
+ new_state_dict[layer.replace('module.', '')] = state_dict[layer]
+ model.load_state_dict(new_state_dict)
+
+ model.eval()
+ num_part = 50
+ num_classes = 16
+ metrics = defaultdict(lambda: list())
+ hist_acc = []
+ shape_ious = []
+ total_per_cat_iou = np.zeros((16)).astype(np.float32)
+ total_per_cat_seen = np.zeros((16)).astype(np.int32)
+
+ for batch_id, (points, label, target) in tqdm(enumerate(test_loader), total=len(test_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.transpose(2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze().cuda(non_blocking=True), target.cuda(non_blocking=True)
+
+ with torch.no_grad():
+ seg_pred = model(points, to_categorical(label, num_classes)) # b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+ shape_ious += batch_shapeious  # list concatenation: extends shape_ious with this batch's per-sample ious
+
+ # per category iou at each batch_size:
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx]
+ total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx]
+ total_per_cat_seen[cur_gt_label] += 1
+
+ # accuracy:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ pred_choice = seg_pred.data.max(1)[1]
+ correct = pred_choice.eq(target.data).cpu().sum()
+ metrics['accuracy'].append(correct.item() / (batch_size * num_point))
+
+ hist_acc += metrics['accuracy']
+ metrics['accuracy'] = np.mean(hist_acc)
+ metrics['shape_avg_iou'] = np.mean(shape_ious)
+ for cat_idx in range(16):
+ if total_per_cat_seen[cat_idx] > 0:
+ total_per_cat_iou[cat_idx] = total_per_cat_iou[cat_idx] / total_per_cat_seen[cat_idx]
+
+ # First we need to calculate the iou of each class and the avg class iou:
+ class_iou = 0
+ for cat_idx in range(16):
+ class_iou += total_per_cat_iou[cat_idx]
+ io.cprint(classes_str[cat_idx] + ' iou: ' + str(total_per_cat_iou[cat_idx])) # print the iou of each class
+ avg_class_iou = class_iou / 16
+ outstr = 'Test :: test acc: %f test class mIOU: %f, test instance mIOU: %f' % (metrics['accuracy'], avg_class_iou, metrics['shape_avg_iou'])
+ io.cprint(outstr)
+
+
+if __name__ == "__main__":
+ args = get_parser()
+ _init_()
+
+ if not args.eval:
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_train.log' % (args.exp_name))
+ else:
+ io = IOStream('checkpoints/' + args.exp_name + '/%s_test.log' % (args.exp_name))
+ io.cprint(str(args))
+
+ if args.manual_seed is not None:
+ random.seed(args.manual_seed)
+ np.random.seed(args.manual_seed)
+ torch.manual_seed(args.manual_seed)
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+
+ if args.cuda:
+ io.cprint('Using GPU')
+ if args.manual_seed is not None:
+ torch.cuda.manual_seed(args.manual_seed)
+ torch.cuda.manual_seed_all(args.manual_seed)
+ else:
+ io.cprint('Using CPU')
+
+ if not args.eval:
+ train(args, io)
+ else:
+ test(args, io)
\ No newline at end of file
diff --git a/zoo/PAConv/part_seg/main_ddp.py b/zoo/PAConv/part_seg/main_ddp.py
new file mode 100644
index 0000000..90bf61a
--- /dev/null
+++ b/zoo/PAConv/part_seg/main_ddp.py
@@ -0,0 +1,568 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.multiprocessing as mp
+import torch.distributed as dist
+import torch.backends.cudnn as cudnn
+import torch.optim as optim
+from torch.optim.lr_scheduler import CosineAnnealingLR, StepLR, MultiStepLR
+from util.data_util import PartNormalDataset, ShapeNetPart
+import torch.nn.functional as F
+from model.DGCNN_PAConv import PAConv
+import numpy as np
+from torch.utils.data import DataLoader
+from util.util import to_categorical, compute_overall_iou, load_cfg_from_cfg_file, merge_cfg_from_list, find_free_port
+import time
+import logging
+import random
+from tqdm import tqdm
+from tensorboardX import SummaryWriter
+from collections import defaultdict
+from torch.autograd import Variable
+
+
+classes_str = ['aero','bag','cap','car','chair','ear','guitar','knife','lamp','lapt','moto','mug','pistol','rock','skate','table']
+
+
+def get_logger():
+ logger_name = "main-logger"
+ logger = logging.getLogger(logger_name)
+ logger.setLevel(logging.INFO)
+ handler = logging.StreamHandler()
+ fmt = "[%(asctime)s %(levelname)s %(filename)s line %(lineno)d %(process)d] %(message)s"
+ handler.setFormatter(logging.Formatter(fmt))
+ logger.addHandler(handler)
+
+ file_handler = logging.FileHandler(os.path.join('checkpoints', args.exp_name, 'main-' + str(int(time.time())) + '.log'))
+ file_handler.setFormatter(logging.Formatter(fmt))
+ logger.addHandler(file_handler)
+
+ return logger
+
+
+def get_parser():
+ parser = argparse.ArgumentParser(description='3D Shape Part Segmentation')
+ parser.add_argument('--config', type=str, default='dgcnn_paconv_train.yaml', help='config file')
+ parser.add_argument('opts', help='see config/dgcnn_paconv_train.yaml for all options', default=None, nargs=argparse.REMAINDER)
+ args = parser.parse_args()
+ assert args.config is not None
+ cfg = load_cfg_from_cfg_file(args.config)
+ if args.opts is not None:
+ cfg = merge_cfg_from_list(cfg, args.opts)
+
+ cfg['sync_bn'] = cfg.get('sync_bn', True)
+ cfg['dist_url'] = cfg.get('dist_url', 'tcp://127.0.0.1:6789')
+ cfg['dist_backend'] = cfg.get('dist_backend', 'nccl')
+ cfg['multiprocessing_distributed'] = cfg.get('multiprocessing_distributed', True)
+ cfg['world_size'] = cfg.get('world_size', 1)
+ cfg['rank'] = cfg.get('rank', 0)
+ cfg['manual_seed'] = cfg.get('manual_seed', 0)
+ cfg['workers'] = cfg.get('workers', 6)
+ return cfg
+
+
+def worker_init_fn(worker_id):
+ random.seed(args.manual_seed + worker_id)
+
+
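+# True only for the process with local rank 0 (or any single-process run);
+# used to gate logging and checkpoint saving.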
+def main_process():
+ return not args.multiprocessing_distributed or (args.multiprocessing_distributed and args.rank % args.ngpus_per_node == 0)
+
+
+def weight_init(m):
+ if isinstance(m, torch.nn.Linear):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv2d):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv1d):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm2d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm1d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+
+
+def train(gpu, ngpus_per_node):
+ # ============= Model ===================
+ num_part = 50
+ model = PAConv(args, num_part)
+
+ model.apply(weight_init)
+
+ if main_process():
+ logger.info(model)
+
+ if args.sync_bn and args.distributed:
+ model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
+
+ if args.distributed:
+ torch.cuda.set_device(gpu)
+ args.batch_size = int(args.batch_size / ngpus_per_node)
+ args.test_batch_size = int(args.test_batch_size / ngpus_per_node)
+ args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
+ model = torch.nn.parallel.DistributedDataParallel(model.cuda(), device_ids=[gpu], find_unused_parameters=True)
+ else:
+ model = torch.nn.DataParallel(model.cuda())
+
+ '''Use Pretrain or not'''
+ if args.get('pretrain', False):
+ state_dict = torch.load("checkpoints/%s/best_insiou_model.pth" % args.exp_name,
+ map_location=torch.device('cpu'))['model']
+ # add the 'module.' prefix if the checkpoint was saved without DataParallel:
+ if any('module' not in k for k in state_dict.keys()):
+ from collections import OrderedDict
+ new_state_dict = OrderedDict()
+ for k in state_dict:
+ new_state_dict['module.' + k] = state_dict[k]
+ state_dict = new_state_dict
+ model.load_state_dict(state_dict)
+ if main_process():
+ logger.info("Using pretrained model...")
+ logger.info(torch.load("checkpoints/%s/best_insiou_model.pth" % args.exp_name).keys())
+ else:
+ if main_process():
+ logger.info("Training from scratch...")
+
+ # =========== Dataloader =================
+ # train_data = PartNormalDataset(npoints=2048, split='trainval', normalize=False)
+ train_data = ShapeNetPart(partition='trainval', num_points=2048, class_choice=None)
+ if main_process():
+ logger.info("The number of training data is:%d", len(train_data))
+
+ # test_data = PartNormalDataset(npoints=2048, split='test', normalize=False)
+ test_data = ShapeNetPart(partition='test', num_points=2048, class_choice=None)
+ if main_process():
+ logger.info("The number of test data is:%d", len(test_data))
+
+ if args.distributed:
+ train_sampler = torch.utils.data.distributed.DistributedSampler(train_data)
+ test_sampler = torch.utils.data.distributed.DistributedSampler(test_data)
+ else:
+ train_sampler = None
+ test_sampler = None
+
+ train_loader = torch.utils.data.DataLoader(train_data, batch_size=args.batch_size, shuffle=(train_sampler is None),
+ num_workers=args.workers, pin_memory=True, sampler=train_sampler)
+ test_loader = torch.utils.data.DataLoader(test_data, batch_size=args.test_batch_size, shuffle=False,
+ num_workers=args.workers, pin_memory=True, sampler=test_sampler)
+
+ # ============= Optimizer ===================
+ if args.use_sgd:
+ if main_process():
+ logger.info("Use SGD")
+ opt = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum, weight_decay=args.weight_decay)
+ else:
+ if main_process():
+ logger.info("Use Adam")
+ opt = optim.Adam(model.parameters(), lr=args.lr, betas=(0.9, 0.999), eps=1e-08, weight_decay=args.weight_decay)
+
+ if args.scheduler == 'cos':
+ if main_process():
+ logger.info("Use CosLR")
+ scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=args.lr / 100)
+ else:
+ if main_process():
+ logger.info("Use StepLR")
+ scheduler = StepLR(opt, step_size=args.step, gamma=0.5)
+
+ # ============= Training =================
+ best_acc = 0
+ best_class_iou = 0
+ best_instance_iou = 0
+ num_part = 50
+ num_classes = 16
+
+ for epoch in range(args.epochs):
+ if args.distributed:
+ train_sampler.set_epoch(epoch)
+
+ train_epoch(train_loader, model, opt, scheduler, epoch, num_part, num_classes)
+
+ test_metrics, total_per_cat_iou = test_epoch(test_loader, model, epoch, num_part, num_classes)
+
+ # 1. when get the best accuracy, save the model:
+ if test_metrics['accuracy'] > best_acc and main_process():
+ best_acc = test_metrics['accuracy']
+ logger.info('Max Acc:%.5f' % best_acc)
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': epoch, 'test_acc': best_acc}
+ torch.save(state, 'checkpoints/%s/best_acc_model.pth' % args.exp_name)
+
+ # 2. when get the best instance_iou, save the model:
+ if test_metrics['shape_avg_iou'] > best_instance_iou and main_process():
+ best_instance_iou = test_metrics['shape_avg_iou']
+ logger.info('Max instance iou:%.5f' % best_instance_iou)
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': epoch, 'test_instance_iou': best_instance_iou}
+ torch.save(state, 'checkpoints/%s/best_insiou_model.pth' % args.exp_name)
+
+ # 3. when get the best class_iou, save the model:
+ # first we need to calculate the average per-class iou
+ class_iou = 0
+ for cat_idx in range(16):
+ class_iou += total_per_cat_iou[cat_idx]
+ avg_class_iou = class_iou / 16
+ if avg_class_iou > best_class_iou and main_process():
+ best_class_iou = avg_class_iou
+ # print the iou of each class:
+ for cat_idx in range(16):
+ if main_process():
+ logger.info(classes_str[cat_idx] + ' iou: ' + str(total_per_cat_iou[cat_idx]))
+ logger.info('Max class iou:%.5f' % best_class_iou)
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': epoch, 'test_class_iou': best_class_iou}
+ torch.save(state, 'checkpoints/%s/best_clsiou_model.pth' % args.exp_name)
+
+ if main_process():
+ # report best acc, ins_iou, cls_iou
+ logger.info('Final Max Acc:%.5f' % best_acc)
+ logger.info('Final Max instance iou:%.5f' % best_instance_iou)
+ logger.info('Final Max class iou:%.5f' % best_class_iou)
+ # save last model
+ state = {
+ 'model': model.module.state_dict() if torch.cuda.device_count() > 1 else model.state_dict(),
+ 'optimizer': opt.state_dict(), 'epoch': args.epochs - 1, 'test_iou': best_instance_iou}
+ torch.save(state, 'checkpoints/%s/model_ep%d.pth' % (args.exp_name, args.epochs))
+
+
+def train_epoch(train_loader, model, opt, scheduler, epoch, num_part, num_classes):
+ train_loss = 0.0
+ count = 0.0
+ accuracy = []
+ shape_ious = 0.0
+ metrics = defaultdict(lambda: list())
+ model.train()
+
+ torch.backends.cudnn.enabled = False
+
+ for batch_id, (points, label, target) in tqdm(enumerate(train_loader), total=len(train_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.permute(0, 2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze(1).cuda(non_blocking=True), target.cuda(non_blocking=True)
+ # target: b,n
+ seg_pred, loss = model(points, to_categorical(label, num_classes), target) # seg_pred: b,n,50
+ seg_pred = seg_pred.contiguous()
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part)  # list of current batch ious: [iou1, iou2, ..., iou_batch_size]
+ # total iou of current batch in each process:
+ batch_shapeious = seg_pred.new_tensor([np.sum(batch_shapeious)], dtype=torch.float64) # same device with seg_pred!!!
+
+ # Loss backward
+ if not args.multiprocessing_distributed:
+ loss = torch.mean(loss)
+ opt.zero_grad()
+ loss.backward()
+ opt.step()
+
+ # accuracy
+ seg_pred = seg_pred.contiguous().view(-1, num_part) # b*n,50
+ target = target.view(-1, 1)[:, 0] # b*n
+ pred_choice = seg_pred.contiguous().data.max(1)[1] # b*n
+ correct = pred_choice.eq(target.contiguous().data).sum() # torch.int64: total number of correct-predict pts
+
+ if args.multiprocessing_distributed:
+ _count = seg_pred.new_tensor([batch_size], dtype=torch.long) # same device with seg_pred!!!
+ dist.all_reduce(loss)
+ dist.all_reduce(_count)
+ dist.all_reduce(batch_shapeious) # sum the batch_ious across all processes
+ dist.all_reduce(correct) # sum the correct across all processes
+ # ! batch_size: the total number of samples in one iteration when with dist, equals to batch_size when without dist:
+ batch_size = _count.item()
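+ # after all_reduce, `loss`, `correct` and `batch_shapeious` hold sums
+ # across all processes and `_count` the total sample count, so the
+ # running averages computed below are global rather than per-process.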
+ shape_ious += batch_shapeious.item() # count the sum of ious in each iteration
+ count += batch_size # count the total number of samples in each iteration
+ train_loss += loss.item() * batch_size
+ accuracy.append(correct.item()/(batch_size * num_point)) # append the accuracy of each iteration
+
+ # Note: We do not need to calculate per_class iou during training
+
+ if args.scheduler == 'cos':
+ scheduler.step()
+ elif args.scheduler == 'step':
+ if opt.param_groups[0]['lr'] > 0.9e-5:
+ scheduler.step()
+ if opt.param_groups[0]['lr'] < 0.9e-5:
+ for param_group in opt.param_groups:
+ param_group['lr'] = 0.9e-5
+ if main_process():
+ logger.info('Learning rate: %f', opt.param_groups[0]['lr'])
+
+ metrics['accuracy'] = np.mean(accuracy)
+ metrics['shape_avg_iou'] = shape_ious * 1.0 / count
+
+ outstr = 'Train %d, loss: %f, train acc: %f, train ins_iou: %f' % (epoch+1, train_loss * 1.0 / count,
+ metrics['accuracy'], metrics['shape_avg_iou'])
+
+ if main_process():
+ logger.info(outstr)
+ # Write to tensorboard
+ writer.add_scalar('loss_train', train_loss * 1.0 / count, epoch + 1)
+ writer.add_scalar('Acc_train', metrics['accuracy'], epoch + 1)
+ writer.add_scalar('ins_iou_train', metrics['shape_avg_iou'], epoch + 1)
+
+
+def test_epoch(test_loader, model, epoch, num_part, num_classes):
+ test_loss = 0.0
+ count = 0.0
+ accuracy = []
+ shape_ious = 0.0
+ final_total_per_cat_iou = np.zeros(16).astype(np.float32)
+ final_total_per_cat_seen = np.zeros(16).astype(np.int32)
+ metrics = defaultdict(lambda: list())
+ model.eval()
+
+ # label_size: b, means each sample has one corresponding class
+ for batch_id, (points, label, target) in tqdm(enumerate(test_loader), total=len(test_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.permute(0, 2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze(1).cuda(non_blocking=True), target.cuda(non_blocking=True)
+ seg_pred = model(points, to_categorical(label, num_classes)) # b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+ # per category iou at each batch_size:
+ if args.multiprocessing_distributed:
+ # create fresh zero counters so all_reduce only sums the current iteration (avoids double-counting earlier iterations):
+ cur_total_per_cat_iou = np.zeros(16).astype(np.float32)
+ cur_total_per_cat_seen = np.zeros(16).astype(np.int32)
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx], denotes current sample belongs to which cat
+ cur_total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx] # add the iou belongs to this cat
+ cur_total_per_cat_seen[cur_gt_label] += 1 # count the number of this cat is chosen
+ else:
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx], denotes current sample belongs to which cat
+ final_total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx] # add the iou belongs to this cat
+ final_total_per_cat_seen[cur_gt_label] += 1 # count the number of this cat is chosen
+
+ # total iou of current batch in each process:
+ batch_ious = seg_pred.new_tensor([np.sum(batch_shapeious)], dtype=torch.float64) # same device with seg_pred!!!
+
+ # prepare seg_pred and target for later calculating loss and acc:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ # Loss
+ loss = F.nll_loss(seg_pred.contiguous(), target.contiguous())
+
+ # accuracy:
+ pred_choice = seg_pred.data.max(1)[1] # b*n
+ correct = pred_choice.eq(target.data).sum() # torch.int64: total number of correct-predict pts
+ if args.multiprocessing_distributed:
+ _count = seg_pred.new_tensor([batch_size], dtype=torch.long) # same device with seg_pred!!!
+ dist.all_reduce(loss)
+ dist.all_reduce(_count)
+ dist.all_reduce(batch_ious) # sum the batch_ious across all processes
+ dist.all_reduce(correct) # sum the correct across all processes
+
+ cur_total_per_cat_iou = seg_pred.new_tensor(cur_total_per_cat_iou, dtype=torch.float32) # same device with seg_pred!!!
+ cur_total_per_cat_seen = seg_pred.new_tensor(cur_total_per_cat_seen, dtype=torch.int32) # same device with seg_pred!!!
+ dist.all_reduce(cur_total_per_cat_iou) # sum the per_cat_iou across all processes (element-wise)
+ dist.all_reduce(cur_total_per_cat_seen) # sum the per_cat_seen across all processes (element-wise)
+ final_total_per_cat_iou += cur_total_per_cat_iou.cpu().numpy()
+ final_total_per_cat_seen += cur_total_per_cat_seen.cpu().numpy()
+ # ! batch_size: the total number of samples in one iteration when with dist, equals to batch_size when without dist:
+ batch_size = _count.item()
+ else:
+ loss = torch.mean(loss)
+ shape_ious += batch_ious.item() # count the sum of ious in each iteration
+ count += batch_size # count the total number of samples in each iteration
+ test_loss += loss.item() * batch_size
+ accuracy.append(correct.item() / (batch_size * num_point)) # append the accuracy of each iteration
+
+ for cat_idx in range(16):
+ if final_total_per_cat_seen[cat_idx] > 0: # indicating this cat is included during previous iou appending
+ final_total_per_cat_iou[cat_idx] = final_total_per_cat_iou[cat_idx] / final_total_per_cat_seen[cat_idx] # avg class iou across all samples
+
+ metrics['accuracy'] = np.mean(accuracy)
+ metrics['shape_avg_iou'] = shape_ious * 1.0 / count
+
+ outstr = 'Test %d, loss: %f, test acc: %f test ins_iou: %f' % (epoch + 1, test_loss * 1.0 / count,
+ metrics['accuracy'], metrics['shape_avg_iou'])
+
+ if main_process():
+ logger.info(outstr)
+ # Write to tensorboard
+ writer.add_scalar('loss_test', test_loss * 1.0 / count, epoch + 1)
+ writer.add_scalar('Acc_test', metrics['accuracy'], epoch + 1)
+ writer.add_scalar('ins_iou_test', metrics['shape_avg_iou'], epoch + 1)
+
+ return metrics, final_total_per_cat_iou
+
+
+def test(gpu, ngpus_per_node):
+ # Try to load models
+ num_part = 50
+ model = PAConv(args, num_part)
+
+ from collections import OrderedDict
+ state_dict = torch.load("checkpoints/%s/best_%s_model.pth" % (args.exp_name, args.model_type),
+ map_location=torch.device('cpu'))['model']
+
+ new_state_dict = OrderedDict()
+ for layer in state_dict:
+ new_state_dict[layer.replace('module.', '')] = state_dict[layer]
+ model.load_state_dict(new_state_dict)
+
+ if main_process():
+ logger.info(model)
+
+ if args.sync_bn:
+ assert args.distributed == True
+ model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
+
+ if args.distributed:
+ torch.cuda.set_device(gpu)
+ args.batch_size = int(args.batch_size / ngpus_per_node)
+ args.test_batch_size = int(args.test_batch_size / ngpus_per_node)
+ args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
+ model = torch.nn.parallel.DistributedDataParallel(model.cuda(), device_ids=[gpu], find_unused_parameters=True)
+ else:
+ model = model.cuda()
+
+ # Dataloader
+ # test_data = PartNormalDataset(npoints=2048, split='test', normalize=False)
+ test_data = ShapeNetPart(partition='test', num_points=2048, class_choice=None)
+ if main_process():
+ logger.info("The number of test data is:%d", len(test_data))
+
+ if args.distributed:
+ test_sampler = torch.utils.data.distributed.DistributedSampler(test_data)
+ else:
+ test_sampler = None
+ test_loader = torch.utils.data.DataLoader(test_data, batch_size=args.test_batch_size, shuffle=False,
+ num_workers=args.workers, pin_memory=True, sampler=test_sampler)
+
+ model.eval()
+ num_part = 50
+ num_classes = 16
+ metrics = defaultdict(lambda: list())
+ hist_acc = []
+ shape_ious = []
+ total_per_cat_iou = np.zeros((16)).astype(np.float32)
+ total_per_cat_seen = np.zeros((16)).astype(np.int32)
+
+ for batch_id, (points, label, target) in tqdm(enumerate(test_loader), total=len(test_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.permute(0, 2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze().cuda(non_blocking=True), target.cuda(non_blocking=True)
+
+ with torch.no_grad():
+ seg_pred = model(points, to_categorical(label, num_classes)) # b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+ shape_ious += batch_shapeious  # list concatenation: extends shape_ious with this batch's per-sample ious
+
+ # per category iou at each batch_size:
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx]
+ total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx]
+ total_per_cat_seen[cur_gt_label] += 1
+
+ # accuracy:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ pred_choice = seg_pred.data.max(1)[1]
+ correct = pred_choice.eq(target.data).cpu().sum()
+ metrics['accuracy'].append(correct.item() / (batch_size * num_point))
+
+ hist_acc += metrics['accuracy']
+ metrics['accuracy'] = np.mean(hist_acc)
+ metrics['shape_avg_iou'] = np.mean(shape_ious)
+ for cat_idx in range(16):
+ if total_per_cat_seen[cat_idx] > 0:
+ total_per_cat_iou[cat_idx] = total_per_cat_iou[cat_idx] / total_per_cat_seen[cat_idx]
+
+ # First we need to calculate the iou of each class and the avg class iou:
+ class_iou = 0
+ for cat_idx in range(16):
+ class_iou += total_per_cat_iou[cat_idx]
+ if main_process():
+ logger.info(classes_str[cat_idx] + ' iou: ' + str(total_per_cat_iou[cat_idx])) # print the iou of each class
+ avg_class_iou = class_iou / 16
+ outstr = 'Test :: test acc: %f test class mIOU: %f, test instance mIOU: %f' % (metrics['accuracy'], avg_class_iou, metrics['shape_avg_iou'])
+ if main_process():
+ logger.info(outstr)
+
+
+def main_worker(gpu, ngpus_per_node, argss):
+ global args
+ args = argss
+
+ if args.distributed:
+ if args.dist_url == "env://" and args.rank == -1:
+ args.rank = int(os.environ["RANK"])
+ if args.multiprocessing_distributed:
+ args.rank = args.rank * ngpus_per_node + gpu
+ dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=args.world_size,
+ rank=args.rank)
+
+ if main_process():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/' + args.exp_name):
+ os.makedirs('checkpoints/' + args.exp_name)
+
+ if not args.eval: # backup the running files
+ os.system('cp main_ddp.py checkpoints' + '/' + args.exp_name + '/' + 'main_ddp.py.backup')
+ os.system('cp util/PAConv_util.py checkpoints' + '/' + args.exp_name + '/' + 'PAConv_util.py.backup')
+ os.system('cp util/data_util.py checkpoints' + '/' + args.exp_name + '/' + 'data_util.py.backup')
+ os.system('cp model/DGCNN_PAConv.py checkpoints' + '/' + args.exp_name + '/' + 'DGCNN_PAConv.py.backup')
+
+ global logger, writer
+ writer = SummaryWriter('checkpoints/' + args.exp_name)
+ logger = get_logger()
+ logger.info(args)
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+
+ assert not args.eval, "DDP evaluation can drop or duplicate samples during all_reduce " \
+ "(giving incorrect metrics); " \
+ "please use main.py (no DDP) for testing to get the correct result."
+
+ train(gpu, ngpus_per_node)
+
+
+if __name__ == "__main__":
+ args = get_parser()
+ args.gpu = [int(i) for i in os.environ['CUDA_VISIBLE_DEVICES'].split(',')]
+ if args.manual_seed is not None:
+ random.seed(args.manual_seed)
+ np.random.seed(args.manual_seed)
+ torch.manual_seed(args.manual_seed)
+ torch.cuda.manual_seed(args.manual_seed)
+ torch.cuda.manual_seed_all(args.manual_seed)
+ cudnn.benchmark = False
+ cudnn.deterministic = True
+ if args.dist_url == "env://" and args.world_size == -1:
+ args.world_size = int(os.environ["WORLD_SIZE"])
+ args.distributed = args.world_size > 1 or args.multiprocessing_distributed
+ args.ngpus_per_node = len(args.gpu)
+ if len(args.gpu) == 1:
+ args.sync_bn = False
+ args.distributed = False
+ args.multiprocessing_distributed = False
+ if args.multiprocessing_distributed:
+ port = find_free_port()
+ args.dist_url = f"tcp://127.0.0.1:{port}"
+ args.world_size = args.ngpus_per_node * args.world_size
+ mp.spawn(main_worker, nprocs=args.ngpus_per_node, args=(args.ngpus_per_node, args))
+ else:
+ main_worker(args.gpu, args.ngpus_per_node, args)
+
diff --git a/zoo/PAConv/part_seg/model/DGCNN_PAConv.py b/zoo/PAConv/part_seg/model/DGCNN_PAConv.py
new file mode 100644
index 0000000..6ca2b94
--- /dev/null
+++ b/zoo/PAConv/part_seg/model/DGCNN_PAConv.py
@@ -0,0 +1,131 @@
+"""
+Embed PAConv into DGCNN
+"""
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from util.PAConv_util import knn, get_graph_feature, get_scorenet_input, feat_trans_dgcnn, ScoreNet
+from cuda_lib.functional import assign_score_withk as assemble_dgcnn
+
+
+class PAConv(nn.Module):
+ def __init__(self, args, num_part):
+ super(PAConv, self).__init__()
+ # baseline args:
+ self.args = args
+ self.num_part = num_part
+ # PAConv args:
+ self.k = args.get('k_neighbors', 30)
+ self.calc_scores = args.get('calc_scores', 'softmax')
+ self.hidden = args.get('hidden', [[16], [16], [16], [16]]) # the hidden layers of ScoreNet
+
+ self.m2, self.m3, self.m4, self.m5 = args.get('num_matrices', [8, 8, 8, 8])
+ self.scorenet2 = ScoreNet(10, self.m2, hidden_unit=self.hidden[0])
+ self.scorenet3 = ScoreNet(10, self.m3, hidden_unit=self.hidden[1])
+ self.scorenet4 = ScoreNet(10, self.m4, hidden_unit=self.hidden[2])
+ self.scorenet5 = ScoreNet(10, self.m5, hidden_unit=self.hidden[3])
+
+ i2 = 64 # channel dim of input_2nd
+ o2 = i3 = 64 # channel dim of output_2nd and input_3rd
+ o3 = i4 = 64 # channel dim of output_3rd and input_4th
+ o4 = i5 = 64 # channel dim of output_4th and input_5th
+ o5 = 64 # channel dim of output_5th
+
+ tensor2 = nn.init.kaiming_normal_(torch.empty(self.m2, i2 * 2, o2), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i2 * 2, self.m2 * o2)
+ tensor3 = nn.init.kaiming_normal_(torch.empty(self.m3, i3 * 2, o3), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i3 * 2, self.m3 * o3)
+ tensor4 = nn.init.kaiming_normal_(torch.empty(self.m4, i4 * 2, o4), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i4 * 2, self.m4 * o4)
+ tensor5 = nn.init.kaiming_normal_(torch.empty(self.m5, i5 * 2, o5), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i5 * 2, self.m5 * o5)
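+
+ # each bank stacks its m weight matrices of shape (2*Cin, Cout) into one
+ # (2*Cin, m*Cout) parameter, so a single matmul produces all m candidate
+ # features that the ScoreNet scores later mix per neighbor.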
+
+ self.matrice2 = nn.Parameter(tensor2, requires_grad=True)
+ self.matrice3 = nn.Parameter(tensor3, requires_grad=True)
+ self.matrice4 = nn.Parameter(tensor4, requires_grad=True)
+ self.matrice5 = nn.Parameter(tensor5, requires_grad=True)
+
+ self.bn2 = nn.BatchNorm1d(64, momentum=0.1)
+ self.bn3 = nn.BatchNorm1d(64, momentum=0.1)
+ self.bn4 = nn.BatchNorm1d(64, momentum=0.1)
+ self.bn5 = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bnt = nn.BatchNorm1d(1024, momentum=0.1)
+ self.bnc = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bn6 = nn.BatchNorm1d(256, momentum=0.1)
+ self.bn7 = nn.BatchNorm1d(256, momentum=0.1)
+ self.bn8 = nn.BatchNorm1d(128, momentum=0.1)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=True), nn.BatchNorm2d(64, momentum=0.1))
+
+ self.convt = nn.Sequential(nn.Conv1d(64*5, 1024, kernel_size=1, bias=False), self.bnt)
+ self.convc = nn.Sequential(nn.Conv1d(16, 64, kernel_size=1, bias=False), self.bnc)
+
+ self.conv6 = nn.Sequential(nn.Conv1d(1088+64*5, 256, kernel_size=1, bias=False), self.bn6)
+ self.dp1 = nn.Dropout(p=args.get('dropout', 0.4))
+ self.conv7 = nn.Sequential(nn.Conv1d(256, 256, kernel_size=1, bias=False), self.bn7)
+ self.dp2 = nn.Dropout(p=args.get('dropout', 0.4))
+ self.conv8 = nn.Sequential(nn.Conv1d(256, 128, kernel_size=1, bias=False), self.bn8)
+ self.conv9 = nn.Conv1d(128, num_part, kernel_size=1, bias=True)
+
+ def forward(self, x, cls_label, gt=None):
+ B, C, N = x.size()
+ idx, _ = knn(x, k=self.k) # unlike DGCNN, the knn search is done only in 3D coordinate space
+ xyz = get_scorenet_input(x, k=self.k, idx=idx) # ScoreNet input
+ #################
+ # use MLP at the 1st layer, same with DGCNN
+ x = get_graph_feature(x, k=self.k, idx=idx)
+ x = x.permute(0, 3, 1, 2) # b,2cin,n,k
+ x = F.relu(self.conv1(x))
+ x1 = x.max(dim=-1, keepdim=False)[0]
+ #################
+ # replace the last 4 DGCNN-EdgeConv with PAConv:
+ """CUDA implementation of PAConv: (presented in the supplementary material of the paper)"""
+ """feature transformation:"""
+ x2, center2 = feat_trans_dgcnn(point_input=x1, kernel=self.matrice2, m=self.m2)
+ score2 = self.scorenet2(xyz, calc_scores=self.calc_scores, bias=0)
+ """assemble with scores:"""
+ x = assemble_dgcnn(score=score2, point_input=x2, center_input=center2, knn_idx=idx, aggregate='sum')
+ x2 = F.relu(self.bn2(x))
+
+ x3, center3 = feat_trans_dgcnn(point_input=x2, kernel=self.matrice3, m=self.m3)
+ score3 = self.scorenet3(xyz, calc_scores=self.calc_scores, bias=0)
+ x = assemble_dgcnn(score=score3, point_input=x3, center_input=center3, knn_idx=idx, aggregate='sum')
+ x3 = F.relu(self.bn3(x))
+
+ x4, center4 = feat_trans_dgcnn(point_input=x3, kernel=self.matrice4, m=self.m4)
+ score4 = self.scorenet4(xyz, calc_scores=self.calc_scores, bias=0)
+ x = assemble_dgcnn(score=score4, point_input=x4, center_input=center4, knn_idx=idx, aggregate='sum')
+ x4 = F.relu(self.bn4(x))
+
+ x5, center5 = feat_trans_dgcnn(point_input=x4, kernel=self.matrice5, m=self.m5)
+ score5 = self.scorenet5(xyz, calc_scores=self.calc_scores, bias=0)
+ x = assemble_dgcnn(score=score5, point_input=x5, center_input=center5, knn_idx=idx, aggregate='sum')
+ x5 = F.relu(self.bn5(x))
+ ###############
+ xx = torch.cat((x1, x2, x3, x4, x5), dim=1)
+
+ xc = F.relu(self.convt(xx))
+ xc = F.adaptive_max_pool1d(xc, 1).view(B, -1)
+
+ cls_label = cls_label.view(B, 16, 1)
+ cls_label = F.relu(self.convc(cls_label))
+ cls = torch.cat((xc.view(B, 1024, 1), cls_label), dim=1)
+ cls = cls.repeat(1, 1, N) # B,1088,N
+
+ x = torch.cat((xx, cls), dim=1) # 1088+64*5 channels
+ x = F.relu(self.conv6(x))
+ x = self.dp1(x)
+ x = F.relu(self.conv7(x))
+ x = self.dp2(x)
+ x = F.relu(self.conv8(x))
+ x = self.conv9(x)
+ x = F.log_softmax(x, dim=1)
+ x = x.permute(0, 2, 1) # b,n,50
+
+ if gt is not None:
+ return x, F.nll_loss(x.contiguous().view(-1, self.num_part), gt.view(-1, 1)[:, 0])
+ else:
+ return x
\ No newline at end of file
diff --git a/zoo/PAConv/part_seg/model/DGCNN_PAConv2.py b/zoo/PAConv/part_seg/model/DGCNN_PAConv2.py
new file mode 100644
index 0000000..af821c8
--- /dev/null
+++ b/zoo/PAConv/part_seg/model/DGCNN_PAConv2.py
@@ -0,0 +1,143 @@
+"""
+Embed PAConv into DGCNN
+"""
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from util.PAConv_util import knn, get_graph_feature, get_scorenet_input, feat_trans_dgcnn, ScoreNet
+from cuda_lib.functional import assign_score_withk as assemble_dgcnn
+
+
+class PAConv(nn.Module):
+ def __init__(self, num_part):
+ super(PAConv, self).__init__()
+ # baseline args:
+ # self.args = args
+ self.num_part = num_part
+ # PAConv args:
+ # self.k = args.get('k_neighbors', 30)
+ self.k = 30
+ # self.calc_scores = args.get('calc_scores', 'softmax')
+ self.calc_scores = 'softmax'
+ # self.hidden = args.get('hidden', [[16], [16], [16], [16]]) # the hidden layers of ScoreNet
+ self.hidden = [[16,16,16],[16,16,16],[16,16,16],[16,16,16]]
+
+ # self.m2, self.m3, self.m4, self.m5 = args.get('num_matrices', [8, 8, 8, 8])
+ self.m2, self.m3, self.m4, self.m5 = [8, 8, 8, 8]
+ self.scorenet2 = ScoreNet(10, self.m2, hidden_unit=self.hidden[0])
+ self.scorenet3 = ScoreNet(10, self.m3, hidden_unit=self.hidden[1])
+ self.scorenet4 = ScoreNet(10, self.m4, hidden_unit=self.hidden[2])
+ self.scorenet5 = ScoreNet(10, self.m5, hidden_unit=self.hidden[3])
+
+ i2 = 64 # channel dim of input_2nd
+ o2 = i3 = 64 # channel dim of output_2nd and input_3rd
+ o3 = i4 = 64 # channel dim of output_3rd and input_4th
+ o4 = i5 = 64 # channel dim of output_4th and input_5th
+ o5 = 64 # channel dim of output_5th
+
+ tensor2 = nn.init.kaiming_normal_(torch.empty(self.m2, i2 * 2, o2), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i2 * 2, self.m2 * o2)
+ tensor3 = nn.init.kaiming_normal_(torch.empty(self.m3, i3 * 2, o3), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i3 * 2, self.m3 * o3)
+ tensor4 = nn.init.kaiming_normal_(torch.empty(self.m4, i4 * 2, o4), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i4 * 2, self.m4 * o4)
+ tensor5 = nn.init.kaiming_normal_(torch.empty(self.m5, i5 * 2, o5), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i5 * 2, self.m5 * o5)
+
+ self.matrice2 = nn.Parameter(tensor2, requires_grad=True)
+ self.matrice3 = nn.Parameter(tensor3, requires_grad=True)
+ self.matrice4 = nn.Parameter(tensor4, requires_grad=True)
+ self.matrice5 = nn.Parameter(tensor5, requires_grad=True)
+
+ self.bn2 = nn.BatchNorm1d(64, momentum=0.1)
+ self.bn3 = nn.BatchNorm1d(64, momentum=0.1)
+ self.bn4 = nn.BatchNorm1d(64, momentum=0.1)
+ self.bn5 = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bnt = nn.BatchNorm1d(1024, momentum=0.1)
+ self.bnc = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bn6 = nn.BatchNorm1d(256, momentum=0.1)
+ self.bn7 = nn.BatchNorm1d(256, momentum=0.1)
+ self.bn8 = nn.BatchNorm1d(128, momentum=0.1)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=True),
+ nn.BatchNorm2d(64, momentum=0.1))
+
+ self.convt = nn.Sequential(nn.Conv1d(64*5, 1024, kernel_size=1, bias=False),
+ self.bnt)
+ self.convc = nn.Sequential(nn.Conv1d(16, 64, kernel_size=1, bias=False),
+ self.bnc)
+
+ self.conv6 = nn.Sequential(nn.Conv1d(1088+64*5, 256, kernel_size=1, bias=False),
+ self.bn6)
+ # self.dp1 = nn.Dropout(p=args.get('dropout', 0.4))
+ self.dp1 = nn.Dropout(p=0.4)
+ self.conv7 = nn.Sequential(nn.Conv1d(256, 256, kernel_size=1, bias=False),
+ self.bn7)
+ # self.dp2 = nn.Dropout(p=args.get('dropout', 0.4))
+ self.dp2 = nn.Dropout(p=0.4)
+ self.conv8 = nn.Sequential(nn.Conv1d(256, 128, kernel_size=1, bias=False),
+ self.bn8)
+ self.conv9 = nn.Conv1d(128, num_part, kernel_size=1, bias=True)
+
+ def forward(self, x, cls_label, gt=None):
+ B, C, N = x.size()
+        idx, _ = knn(x, k=self.k)  # unlike DGCNN, the knn search here is only in 3D space
+ xyz = get_scorenet_input(x, k=self.k, idx=idx) # ScoreNet input
+ #################
+ # use MLP at the 1st layer, same with DGCNN
+ x = get_graph_feature(x, k=self.k, idx=idx)
+ x = x.permute(0, 3, 1, 2) # b,2cin,n,k
+ x = F.relu(self.conv1(x))
+ x1 = x.max(dim=-1, keepdim=False)[0]
+ #################
+ # replace the last 4 DGCNN-EdgeConv with PAConv:
+ """CUDA implementation of PAConv: (presented in the supplementary material of the paper)"""
+ """feature transformation:"""
+ x2, center2 = feat_trans_dgcnn(point_input=x1, kernel=self.matrice2, m=self.m2)
+ score2 = self.scorenet2(xyz, calc_scores=self.calc_scores, bias=0)
+ """assemble with scores:"""
+ x = assemble_dgcnn(score=score2, point_input=x2, center_input=center2, knn_idx=idx, aggregate='sum')
+ x2 = F.relu(self.bn2(x))
+
+ x3, center3 = feat_trans_dgcnn(point_input=x2, kernel=self.matrice3, m=self.m3)
+ score3 = self.scorenet3(xyz, calc_scores=self.calc_scores, bias=0)
+ x = assemble_dgcnn(score=score3, point_input=x3, center_input=center3, knn_idx=idx, aggregate='sum')
+ x3 = F.relu(self.bn3(x))
+
+ x4, center4 = feat_trans_dgcnn(point_input=x3, kernel=self.matrice4, m=self.m4)
+ score4 = self.scorenet4(xyz, calc_scores=self.calc_scores, bias=0)
+ x = assemble_dgcnn(score=score4, point_input=x4, center_input=center4, knn_idx=idx, aggregate='sum')
+ x4 = F.relu(self.bn4(x))
+
+ x5, center5 = feat_trans_dgcnn(point_input=x4, kernel=self.matrice5, m=self.m5)
+ score5 = self.scorenet5(xyz, calc_scores=self.calc_scores, bias=0)
+ x = assemble_dgcnn(score=score5, point_input=x5, center_input=center5, knn_idx=idx, aggregate='sum')
+ x5 = F.relu(self.bn5(x))
+ ###############
+ xx = torch.cat((x1, x2, x3, x4, x5), dim=1)
+
+ xc = F.relu(self.convt(xx))
+ xc = F.adaptive_max_pool1d(xc, 1).view(B, -1)
+
+ cls_label = cls_label.view(B, 16, 1)
+ cls_label = F.relu(self.convc(cls_label))
+ cls = torch.cat((xc.view(B, 1024, 1), cls_label), dim=1)
+ cls = cls.repeat(1, 1, N) # B,1088,N
+
+        x = torch.cat((xx, cls), dim=1)  # 1088+64*5
+ x = F.relu(self.conv6(x))
+ x = self.dp1(x)
+ x = F.relu(self.conv7(x))
+ x = self.dp2(x)
+ x = F.relu(self.conv8(x))
+ x = self.conv9(x)
+ x = F.log_softmax(x, dim=1)
+ x = x.permute(0, 2, 1) # b,n,50
+
+ if gt is not None:
+ return x, F.nll_loss(x.contiguous().view(-1, self.num_part), gt.view(-1, 1)[:, 0])
+ else:
+ return x
diff --git a/zoo/PAConv/part_seg/model/DGCNN_PAConv_vote.py b/zoo/PAConv/part_seg/model/DGCNN_PAConv_vote.py
new file mode 100644
index 0000000..9062d96
--- /dev/null
+++ b/zoo/PAConv/part_seg/model/DGCNN_PAConv_vote.py
@@ -0,0 +1,138 @@
+"""
+Embed PAConv into DGCNN
+(only for voting during test phase)
+"""
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from util.PAConv_util import knn, get_graph_feature, get_scorenet_input, feat_trans_dgcnn, ScoreNet
+from cuda_lib.functional import assign_score_withk as assemble_dgcnn
+
+
+class PAConv(nn.Module):
+ def __init__(self, args, num_part):
+ super(PAConv, self).__init__()
+ # baseline args:
+ self.args = args
+ self.num_part = num_part
+ # PAConv args:
+ self.k = args.get('k_neighbors', 30)
+ self.calc_scores = args.get('calc_scores', 'softmax')
+ self.hidden = args.get('hidden', [[16], [16], [16], [16]]) # the hidden layers of ScoreNet
+
+ self.m2, self.m3, self.m4, self.m5 = args.get('num_matrices', [8, 8, 8, 8])
+ self.scorenet2 = ScoreNet(10, self.m2, hidden_unit=self.hidden[0])
+ self.scorenet3 = ScoreNet(10, self.m3, hidden_unit=self.hidden[1])
+ self.scorenet4 = ScoreNet(10, self.m4, hidden_unit=self.hidden[2])
+ self.scorenet5 = ScoreNet(10, self.m5, hidden_unit=self.hidden[3])
+
+ i2 = 64 # channel dim of input_2nd
+        o2 = i3 = 64  # channel dim of output_2nd and input_3rd
+ o3 = i4 = 64 # channel dim of output_3rd and input_4th
+ o4 = i5 = 64 # channel dim of output_4th and input_5th
+ o5 = 64 # channel dim of output_5th
+
+ tensor2 = nn.init.kaiming_normal_(torch.empty(self.m2, i2 * 2, o2), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i2 * 2, self.m2 * o2)
+ tensor3 = nn.init.kaiming_normal_(torch.empty(self.m3, i3 * 2, o3), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i3 * 2, self.m3 * o3)
+ tensor4 = nn.init.kaiming_normal_(torch.empty(self.m4, i4 * 2, o4), nonlinearity='relu') \
+ .permute(1, 0, 2).contiguous().view(i4 * 2, self.m4 * o4)
+ tensor5 = nn.init.kaiming_normal_(torch.empty(self.m5, i5 * 2, o5), nonlinearity='relu') \
+            .permute(1, 0, 2).contiguous().view(i5 * 2, self.m5 * o5)
+
+ self.matrice2 = nn.Parameter(tensor2, requires_grad=True)
+ self.matrice3 = nn.Parameter(tensor3, requires_grad=True)
+ self.matrice4 = nn.Parameter(tensor4, requires_grad=True)
+ self.matrice5 = nn.Parameter(tensor5, requires_grad=True)
+
+ self.bn2 = nn.BatchNorm1d(64, momentum=0.1)
+ self.bn3 = nn.BatchNorm1d(64, momentum=0.1)
+ self.bn4 = nn.BatchNorm1d(64, momentum=0.1)
+ self.bn5 = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bnt = nn.BatchNorm1d(1024, momentum=0.1)
+ self.bnc = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bn6 = nn.BatchNorm1d(256, momentum=0.1)
+ self.bn7 = nn.BatchNorm1d(256, momentum=0.1)
+ self.bn8 = nn.BatchNorm1d(128, momentum=0.1)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=True),
+ nn.BatchNorm2d(64, momentum=0.1))
+
+ self.convt = nn.Sequential(nn.Conv1d(64*5, 1024, kernel_size=1, bias=False),
+ self.bnt)
+ self.convc = nn.Sequential(nn.Conv1d(16, 64, kernel_size=1, bias=False),
+ self.bnc)
+
+ self.conv6 = nn.Sequential(nn.Conv1d(1088+64*5, 256, kernel_size=1, bias=False),
+ self.bn6)
+ self.dp1 = nn.Dropout(p=args.get('dropout', 0.4))
+ self.conv7 = nn.Sequential(nn.Conv1d(256, 256, kernel_size=1, bias=False),
+ self.bn7)
+ self.dp2 = nn.Dropout(p=args.get('dropout', 0.4))
+ self.conv8 = nn.Sequential(nn.Conv1d(256, 128, kernel_size=1, bias=False),
+ self.bn8)
+ self.conv9 = nn.Conv1d(128, num_part, kernel_size=1, bias=True)
+
+ def forward(self, xyz, x, norm_plt, cls_label, gt=None):
+ B, C, N = x.size()
+        idx, _ = knn(xyz, k=self.k)  # unlike DGCNN, the knn search here is only in 3D space
+ xyz = get_scorenet_input(xyz, k=self.k, idx=idx) # ScoreNet input
+ #################
+ # use MLP at the 1st layer, same with DGCNN
+ x = get_graph_feature(x, k=self.k, idx=idx)
+ x = x.permute(0, 3, 1, 2) # b,2cin,n,k
+ x = F.relu(self.conv1(x))
+ x1 = x.max(dim=-1, keepdim=False)[0]
+ #################
+ # replace the last 4 DGCNN-EdgeConv with PAConv:
+ """CUDA implementation of PAConv: (presented in the supplementary material of the paper)"""
+ """feature transformation:"""
+ x2, center2 = feat_trans_dgcnn(point_input=x1, kernel=self.matrice2, m=self.m2)
+ score2 = self.scorenet2(xyz, calc_scores=self.calc_scores, bias=0)
+ """assemble with scores:"""
+ x = assemble_dgcnn(score=score2, point_input=x2, center_input=center2, knn_idx=idx, aggregate='sum')
+ x2 = F.relu(self.bn2(x))
+
+ x3, center3 = feat_trans_dgcnn(point_input=x2, kernel=self.matrice3, m=self.m3)
+ score3 = self.scorenet3(xyz, calc_scores=self.calc_scores, bias=0)
+ x = assemble_dgcnn(score=score3, point_input=x3, center_input=center3, knn_idx=idx, aggregate='sum')
+ x3 = F.relu(self.bn3(x))
+
+ x4, center4 = feat_trans_dgcnn(point_input=x3, kernel=self.matrice4, m=self.m4)
+ score4 = self.scorenet4(xyz, calc_scores=self.calc_scores, bias=0)
+ x = assemble_dgcnn(score=score4, point_input=x4, center_input=center4, knn_idx=idx, aggregate='sum')
+ x4 = F.relu(self.bn4(x))
+
+ x5, center5 = feat_trans_dgcnn(point_input=x4, kernel=self.matrice5, m=self.m5)
+ score5 = self.scorenet5(xyz, calc_scores=self.calc_scores, bias=0)
+ x = assemble_dgcnn(score=score5, point_input=x5, center_input=center5, knn_idx=idx, aggregate='sum')
+ x5 = F.relu(self.bn5(x))
+ ###############
+ xx = torch.cat((x1, x2, x3, x4, x5), dim=1)
+
+ xc = F.relu(self.convt(xx))
+ xc = F.adaptive_max_pool1d(xc, 1).view(B, -1)
+
+ cls_label = cls_label.view(B, 16, 1)
+ cls_label = F.relu(self.convc(cls_label))
+ cls = torch.cat((xc.view(B, 1024, 1), cls_label), dim=1)
+ cls = cls.repeat(1, 1, N) # B,1088,N
+
+        x = torch.cat((xx, cls), dim=1)  # 1088+64*5
+ x = F.relu(self.conv6(x))
+ x = self.dp1(x)
+ x = F.relu(self.conv7(x))
+ x = self.dp2(x)
+ x = F.relu(self.conv8(x))
+ x = self.conv9(x)
+        # x = F.log_softmax(x, dim=1)  # no softmax here: the raw predictions are accumulated across votes during testing
+ x = x.permute(0, 2, 1) # b,n,50
+
+ if gt is not None:
+ return x, F.nll_loss(x.contiguous().view(-1, self.num_part), gt.view(-1, 1)[:, 0])
+ else:
+ return x
diff --git a/zoo/PAConv/part_seg/requirements.txt b/zoo/PAConv/part_seg/requirements.txt
new file mode 100644
index 0000000..04920b3
--- /dev/null
+++ b/zoo/PAConv/part_seg/requirements.txt
@@ -0,0 +1,16 @@
+git+https://github.com/imankgoyal/etw_pytorch_utils.git@v1.1.1#egg=etw_pytorch_utils
+enum34
+future
+h5py==2.10.0
+progressbar2==3.50.0
+tensorboardX==2.0
+-f https://download.pytorch.org/whl/torch_stable.html
+torch==1.5.0+cu101
+-f https://download.pytorch.org/whl/torch_stable.html
+torchvision==0.6.0+cu101
+yacs==0.1.6
+gdown==4.2.0
+
+
+# ninja
+# conda install -c 3dhubs gcc-5
\ No newline at end of file
diff --git a/zoo/PAConv/part_seg/test.py b/zoo/PAConv/part_seg/test.py
new file mode 100644
index 0000000..be6605e
--- /dev/null
+++ b/zoo/PAConv/part_seg/test.py
@@ -0,0 +1,247 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+from util.data_util import PartNormalDataset, ShapeNetC
+import torch.nn.functional as F
+from model.DGCNN_PAConv2 import PAConv
+import numpy as np
+from torch.utils.data import DataLoader
+from util.util import to_categorical, compute_overall_iou, load_cfg_from_cfg_file, merge_cfg_from_list, IOStream
+from tqdm import tqdm
+from collections import defaultdict
+from torch.autograd import Variable
+import random
+
+
+classes_str = ['aero','bag','cap','car','chair','ear','guitar','knife','lamp','lapt','moto','mug','pistol','rock','skate','table']
+
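+# hard-coded evaluation settings: when run as a script, these module-level constants are used
+# in place of the YAML config (get_parser below is kept for reference but never called)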
+EVAL = True
+exp_name = 'dgcnn_paconv_train_2'
+manual_seed = 0
+workers = 6
+no_cuda = False # cpu
+test_batch_size = 16
+model_type = 'insiou'
+
+
+def get_parser():
+ parser = argparse.ArgumentParser(description='3D Shape Part Segmentation')
+ parser.add_argument('--config', type=str, default='dgcnn_paconv_train.yaml', help='config file')
+ parser.add_argument('opts', help='see config/dgcnn_paconv_train.yaml for all options', default=None, nargs=argparse.REMAINDER)
+ args = parser.parse_args()
+ assert args.config is not None
+ cfg = load_cfg_from_cfg_file(args.config)
+ if args.opts is not None:
+ cfg = merge_cfg_from_list(cfg, args.opts)
+
+ cfg['manual_seed'] = cfg.get('manual_seed', 0)
+ cfg['workers'] = cfg.get('workers', 6)
+ return cfg
+
+
+def _init_():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/' + exp_name):
+ os.makedirs('checkpoints/' + exp_name)
+
+ # global writer
+ # writer = SummaryWriter('checkpoints/' + args.exp_name)
+
+
+def weight_init(m):
+ if isinstance(m, torch.nn.Linear):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv2d):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv1d):
+ torch.nn.init.kaiming_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm2d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm1d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+
+
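+# NOTE: legacy evaluation helper kept for reference; the __main__ block below only calls test().
+# Its model(points, norm_plt, ...) call targets the original norm_plt-based forward signature,
+# not the DGCNN_PAConv2.PAConv imported above, whose forward is (x, cls_label, gt=None).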
+def test_epoch(test_loader, model, epoch, num_part, num_classes, io):
+ test_loss = 0.0
+ count = 0.0
+ accuracy = []
+ shape_ious = 0.0
+ final_total_per_cat_iou = np.zeros(16).astype(np.float32)
+ final_total_per_cat_seen = np.zeros(16).astype(np.int32)
+ metrics = defaultdict(lambda: list())
+ model.eval()
+
+    # label has shape (b,): each sample carries exactly one category label
+ for batch_id, (points, label, target, norm_plt) in tqdm(enumerate(test_loader), total=len(test_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target, norm_plt = Variable(points.float()), Variable(label.long()), Variable(target.long()), Variable(norm_plt.float())
+ points = points.transpose(2, 1)
+ norm_plt = norm_plt.transpose(2, 1)
+ points, label, target, norm_plt = points.cuda(non_blocking=True), label.squeeze(1).cuda(non_blocking=True), target.cuda(non_blocking=True), norm_plt.cuda(non_blocking=True)
+ seg_pred = model(points, norm_plt, to_categorical(label, num_classes)) # b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+
+ # per category iou at each batch_size:
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx], denotes current sample belongs to which cat
+ final_total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx] # add the iou belongs to this cat
+ final_total_per_cat_seen[cur_gt_label] += 1 # count the number of this cat is chosen
+
+ # total iou of current batch in each process:
+        batch_ious = seg_pred.new_tensor([np.sum(batch_shapeious)], dtype=torch.float64)  # keep on the same device as seg_pred
+
+ # prepare seg_pred and target for later calculating loss and acc:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ # Loss
+ loss = F.nll_loss(seg_pred.contiguous(), target.contiguous())
+
+ # accuracy:
+ pred_choice = seg_pred.data.max(1)[1] # b*n
+ correct = pred_choice.eq(target.data).sum() # torch.int64: total number of correct-predict pts
+
+ loss = torch.mean(loss)
+ shape_ious += batch_ious.item() # count the sum of ious in each iteration
+ count += batch_size # count the total number of samples in each iteration
+ test_loss += loss.item() * batch_size
+ accuracy.append(correct.item() / (batch_size * num_point)) # append the accuracy of each iteration
+
+ for cat_idx in range(16):
+        if final_total_per_cat_seen[cat_idx] > 0:  # this category appeared at least once during evaluation
+ final_total_per_cat_iou[cat_idx] = final_total_per_cat_iou[cat_idx] / final_total_per_cat_seen[cat_idx] # avg class iou across all samples
+
+ metrics['accuracy'] = np.mean(accuracy)
+ metrics['shape_avg_iou'] = shape_ious * 1.0 / count
+
+ outstr = 'Test %d, loss: %f, test acc: %f test ins_iou: %f' % (epoch + 1, test_loss * 1.0 / count,
+ metrics['accuracy'], metrics['shape_avg_iou'])
+
+ io.cprint(outstr)
+ # Write to tensorboard
+ # writer.add_scalar('loss_train', test_loss * 1.0 / count, epoch + 1)
+ # writer.add_scalar('Acc_train', metrics['accuracy'], epoch + 1)
+ # writer.add_scalar('ins_iou', metrics['shape_avg_iou'])
+
+ return metrics, final_total_per_cat_iou
+
+
+def test(io):
+ # Dataloader
+ # test_data = PartNormalDataset(npoints=2048, split='test', normalize=False)
+ test_data = ShapeNetC(partition='shapenet-c', sub='jitter_4', class_choice=None)
+ print("The number of test data is: {}".format(len(test_data)))
+
+ test_loader = DataLoader(test_data, batch_size=test_batch_size, shuffle=False, num_workers=workers, drop_last=False)
+
+ # Try to load models
+ num_part = 50
+ device = torch.device("cuda" if cuda else "cpu")
+
+ model = PAConv(num_part).to(device)
+ # io.cprint(str(model))
+
+ from collections import OrderedDict
+ state_dict = torch.load("checkpoints/%s/best_%s_model.pth" % (exp_name, model_type),
+ map_location=torch.device('cpu'))['model']
+
+ new_state_dict = OrderedDict()
+ for layer in state_dict:
+ new_state_dict[layer.replace('module.', '')] = state_dict[layer]
+ model.load_state_dict(new_state_dict)
+
+ model.eval()
+ num_classes = 16
+ metrics = defaultdict(lambda: list())
+ hist_acc = []
+ shape_ious = []
+ total_per_cat_iou = np.zeros((16)).astype(np.float32)
+ total_per_cat_seen = np.zeros((16)).astype(np.int32)
+
+ for batch_id, (points, label, target) in tqdm(enumerate(test_loader), total=len(test_loader), smoothing=0.9):
+ batch_size, num_point, _ = points.size()
+ points, label, target = Variable(points.float()), Variable(label.long()), Variable(target.long())
+ points = points.transpose(2, 1)
+ points, label, target = points.cuda(non_blocking=True), label.squeeze().cuda(non_blocking=True), target.cuda(non_blocking=True)
+
+ with torch.no_grad():
+ seg_pred = model(points, to_categorical(label, num_classes)) # b,n,50
+
+ # instance iou without considering the class average at each batch_size:
+ batch_shapeious = compute_overall_iou(seg_pred, target, num_part) # [b]
+        shape_ious += batch_shapeious  # list concatenation: extends shape_ious with this batch's per-sample IoUs
+
+ # per category iou at each batch_size:
+ for shape_idx in range(seg_pred.size(0)): # sample_idx
+ cur_gt_label = label[shape_idx] # label[sample_idx]
+ total_per_cat_iou[cur_gt_label] += batch_shapeious[shape_idx]
+ total_per_cat_seen[cur_gt_label] += 1
+
+ # accuracy:
+ seg_pred = seg_pred.contiguous().view(-1, num_part)
+ target = target.view(-1, 1)[:, 0]
+ pred_choice = seg_pred.data.max(1)[1]
+ correct = pred_choice.eq(target.data).cpu().sum()
+ metrics['accuracy'].append(correct.item() / (batch_size * num_point))
+
+ hist_acc += metrics['accuracy']
+ metrics['accuracy'] = np.mean(hist_acc)
+ metrics['shape_avg_iou'] = np.mean(shape_ious)
+ for cat_idx in range(16):
+ if total_per_cat_seen[cat_idx] > 0:
+ total_per_cat_iou[cat_idx] = total_per_cat_iou[cat_idx] / total_per_cat_seen[cat_idx]
+
+ # First we need to calculate the iou of each class and the avg class iou:
+ class_iou = 0
+ for cat_idx in range(16):
+ class_iou += total_per_cat_iou[cat_idx]
+ io.cprint(classes_str[cat_idx] + ' iou: ' + str(total_per_cat_iou[cat_idx])) # print the iou of each class
+ avg_class_iou = class_iou / 16
+ outstr = 'Test :: test acc: %f test class mIOU: %f, test instance mIOU: %f' % (metrics['accuracy'], avg_class_iou, metrics['shape_avg_iou'])
+ io.cprint(outstr)
+
+
+if __name__ == "__main__":
+ # args = get_parser()
+ _init_()
+
+ if not EVAL:
+ io = IOStream('checkpoints/' + exp_name + '/%s_train.log' % (exp_name))
+ else:
+ io = IOStream('checkpoints/' + exp_name + '/%s_test.log' % (exp_name))
+ # io.cprint(str(args))
+
+ if manual_seed is not None:
+ random.seed(manual_seed)
+ np.random.seed(manual_seed)
+ torch.manual_seed(manual_seed)
+
+ cuda = not no_cuda and torch.cuda.is_available()
+
+ if cuda:
+ io.cprint('Using GPU')
+ if manual_seed is not None:
+ torch.cuda.manual_seed(manual_seed)
+ torch.cuda.manual_seed_all(manual_seed)
+ else:
+ io.cprint('Using CPU')
+
+ if not EVAL:
+ # train(args, io)
+ pass
+ else:
+ test(io)
+
diff --git a/zoo/PAConv/part_seg/test.sh b/zoo/PAConv/part_seg/test.sh
new file mode 100644
index 0000000..ea59cb0
--- /dev/null
+++ b/zoo/PAConv/part_seg/test.sh
@@ -0,0 +1,2 @@
+CUDA_VISIBLE_DEVICES=2 python test.py \
+ --config config/dgcnn_paconv_test.yaml
\ No newline at end of file
diff --git a/zoo/PAConv/part_seg/test_voting.sh b/zoo/PAConv/part_seg/test_voting.sh
new file mode 100644
index 0000000..7b850eb
--- /dev/null
+++ b/zoo/PAConv/part_seg/test_voting.sh
@@ -0,0 +1,2 @@
+CUDA_VISIBLE_DEVICES=7 python eval_voting.py \
+ --config config/dgcnn_paconv_test.yaml
\ No newline at end of file
diff --git a/zoo/PAConv/part_seg/train_ddp.sh b/zoo/PAConv/part_seg/train_ddp.sh
new file mode 100644
index 0000000..c10dcfa
--- /dev/null
+++ b/zoo/PAConv/part_seg/train_ddp.sh
@@ -0,0 +1 @@
+CUDA_VISIBLE_DEVICES=4,5,6,7 python main_ddp.py --config config/dgcnn_paconv_train.yaml
\ No newline at end of file
diff --git a/zoo/PAConv/part_seg/train_dp.sh b/zoo/PAConv/part_seg/train_dp.sh
new file mode 100644
index 0000000..f45d65a
--- /dev/null
+++ b/zoo/PAConv/part_seg/train_dp.sh
@@ -0,0 +1,2 @@
+CUDA_VISIBLE_DEVICES=7 python main.py \
+ --config config/dgcnn_paconv_train.yaml
\ No newline at end of file
diff --git a/zoo/PAConv/part_seg/util/PAConv_util.py b/zoo/PAConv/part_seg/util/PAConv_util.py
new file mode 100644
index 0000000..d5700a0
--- /dev/null
+++ b/zoo/PAConv/part_seg/util/PAConv_util.py
@@ -0,0 +1,143 @@
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+
+def knn(x, k):
+ B, _, N = x.size()
+ inner = -2 * torch.matmul(x.transpose(2, 1), x)
+ xx = torch.sum(x ** 2, dim=1, keepdim=True)
+ pairwise_distance = -xx - inner - xx.transpose(2, 1)
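+    # pairwise_distance holds the *negative* squared Euclidean distances, so topk below
+    # returns the indices of the k nearest neighbors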
+
+ _, idx = pairwise_distance.topk(k=k, dim=-1) # (batch_size, num_points, k)
+
+ return idx, pairwise_distance
+
+
+def get_graph_feature(x, k, idx):
+ """original function in DGCNN"""
+ batch_size = x.size(0)
+ num_points = x.size(2)
+ x = x.view(batch_size, -1, num_points)
+
+ idx_base = torch.arange(0, batch_size, device=x.device).view(-1, 1, 1) * num_points
+
+ idx = idx + idx_base
+
+ idx = idx.view(-1)
+
+ _, num_dims, _ = x.size()
+
+ x = x.transpose(2, 1).contiguous()
+
+ neighbor = x.view(batch_size * num_points, -1)[idx, :]
+
+ neighbor = neighbor.view(batch_size, num_points, k, num_dims)
+
+ x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)
+
+ feature = torch.cat((neighbor - x, neighbor), dim=3) # (xj-xi, xj): b,n,k,2c
+
+ return feature
+
+
+def get_ed(x, y):
+ """calculate the Euclidean distance between two points"""
+ ed = torch.norm(x - y, dim=-1).reshape(x.shape[0], 1)
+ return ed
+
+
+def get_scorenet_input(x, k, idx):
+ """xyz=(center, neighbor, neighbor-center, ed)"""
+ batch_size = x.size(0)
+ num_points = x.size(2)
+ x = x.view(batch_size, -1, num_points)
+
+    idx_base = torch.arange(0, batch_size, device=x.device).view(-1, 1, 1) * num_points
+
+ idx = idx + idx_base
+
+ idx = idx.view(-1)
+
+ _, num_dims, _ = x.size()
+
+ x = x.transpose(2, 1).contiguous()
+
+ neighbor = x.view(batch_size * num_points, -1)[idx, :]
+
+ x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)
+
+ x1 = x.view(batch_size * num_points * k, -1) # x1 only for calculating Euclidean distance
+ ed = get_ed(x1, neighbor).view(batch_size, num_points, k, 1)
+
+ neighbor = neighbor.view(batch_size, num_points, k, num_dims)
+
+ xyz = torch.cat((x, neighbor, neighbor - x, ed), dim=3).permute(0, 3, 1, 2) # b,10,n,k
+
+ return xyz
+
+
+def feat_trans_dgcnn(point_input, kernel, m):
+ """transforming features using weight matrices"""
+ # following get_graph_feature in DGCNN: torch.cat((neighbor - center, neighbor), dim=3)
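+    # point_input carries cin channels; repeat(1, 1, 2) duplicates them so the matmul lines up
+    # with the 2*cin rows of the weight bank, while the center branch uses only the first cin rows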
+    B, _, N = point_input.size()  # b, cin, n
+ point_output = torch.matmul(point_input.permute(0, 2, 1).repeat(1, 1, 2), kernel).view(B, N, m, -1) # b,n,m,cout
+ center_output = torch.matmul(point_input.permute(0, 2, 1), kernel[:point_input.size(1)]).view(B, N, m, -1) # b,n,m,cout
+ return point_output, center_output
+
+
+class ScoreNet(nn.Module):
+
+ def __init__(self, in_channel, out_channel, hidden_unit=[16], last_bn=False):
+ super(ScoreNet, self).__init__()
+ self.hidden_unit = hidden_unit
+ self.last_bn = last_bn
+ self.mlp_convs_hidden = nn.ModuleList()
+ self.mlp_bns_hidden = nn.ModuleList()
+
+ if hidden_unit is None or len(hidden_unit) == 0:
+ self.mlp_convs_nohidden = nn.Conv2d(in_channel, out_channel, 1, bias=not last_bn)
+ if self.last_bn:
+ self.mlp_bns_nohidden = nn.BatchNorm2d(out_channel)
+
+ else:
+ self.mlp_convs_hidden.append(nn.Conv2d(in_channel, hidden_unit[0], 1, bias=False)) # from in_channel to first hidden
+ self.mlp_bns_hidden.append(nn.BatchNorm2d(hidden_unit[0]))
+            for i in range(1, len(hidden_unit)):  # remaining hidden-to-hidden layers
+ self.mlp_convs_hidden.append(nn.Conv2d(hidden_unit[i - 1], hidden_unit[i], 1, bias=False))
+ self.mlp_bns_hidden.append(nn.BatchNorm2d(hidden_unit[i]))
+ self.mlp_convs_hidden.append(nn.Conv2d(hidden_unit[-1], out_channel, 1, bias=not last_bn)) # from last hidden to out_channel
+ self.mlp_bns_hidden.append(nn.BatchNorm2d(out_channel))
+
+ def forward(self, xyz, calc_scores='softmax', bias=0):
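+        # scores has shape b,m,n,k: one coefficient per (point, neighbor, weight matrix);
+        # normalizing over dim=1 (the m matrices) yields the soft kernel-assembling weights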
+ B, _, N, K = xyz.size()
+ scores = xyz
+
+ if self.hidden_unit is None or len(self.hidden_unit) == 0:
+ if self.last_bn:
+ scores = self.mlp_bns_nohidden(self.mlp_convs_nohidden(scores))
+ else:
+ scores = self.mlp_convs_nohidden(scores)
+
+ else:
+ for i, conv in enumerate(self.mlp_convs_hidden):
+ if i == len(self.mlp_convs_hidden)-1: # if the output layer, no ReLU
+ if self.last_bn:
+ bn = self.mlp_bns_hidden[i]
+ scores = bn(conv(scores))
+ else:
+ scores = conv(scores)
+ else:
+ bn = self.mlp_bns_hidden[i]
+ scores = F.relu(bn(conv(scores)))
+
+ if calc_scores == 'softmax':
+ scores = F.softmax(scores, dim=1)+bias # B*m*N*K
+ elif calc_scores == 'sigmoid':
+ scores = torch.sigmoid(scores)+bias # B*m*N*K
+ else:
+ raise ValueError('Not Implemented!')
+
+ return scores.permute(0, 2, 3, 1) # B*N*K*m
diff --git a/zoo/PAConv/part_seg/util/data_util.py b/zoo/PAConv/part_seg/util/data_util.py
new file mode 100644
index 0000000..01d5b77
--- /dev/null
+++ b/zoo/PAConv/part_seg/util/data_util.py
@@ -0,0 +1,281 @@
+import cv2
+import glob
+import h5py
+
+import os
+import json
+import warnings
+import numpy as np
+from torch.utils.data import Dataset
+warnings.filterwarnings('ignore')
+
+
+def pc_normalize(pc):
+ centroid = np.mean(pc, axis=0)
+ pc = pc - centroid
+ m = np.max(np.sqrt(np.sum(pc ** 2, axis=1)))
+ pc = pc / m
+ return pc
+
+
+class PartNormalDataset(Dataset):
+ def __init__(self, npoints=2500, split='train', normalize=False):
+ self.npoints = npoints
+ self.root = '/mnt/lustre/share/ldkong/data/sets/ShapeNetPart'
+ self.catfile = os.path.join(self.root, 'synsetoffset2category.txt')
+ self.cat = {}
+ self.normalize = normalize
+
+ with open(self.catfile, 'r') as f:
+ for line in f:
+ ls = line.strip().split()
+ self.cat[ls[0]] = ls[1]
+ self.cat = {k: v for k, v in self.cat.items()}
+
+ self.meta = {}
+ with open(os.path.join(self.root, 'train_test_split', 'shuffled_train_file_list.json'), 'r') as f:
+ train_ids = set([str(d.split('/')[2]) for d in json.load(f)])
+ with open(os.path.join(self.root, 'train_test_split', 'shuffled_val_file_list.json'), 'r') as f:
+ val_ids = set([str(d.split('/')[2]) for d in json.load(f)])
+ with open(os.path.join(self.root, 'train_test_split', 'shuffled_test_file_list.json'), 'r') as f:
+ test_ids = set([str(d.split('/')[2]) for d in json.load(f)])
+ for item in self.cat:
+ self.meta[item] = []
+ dir_point = os.path.join(self.root, self.cat[item])
+ fns = sorted(os.listdir(dir_point))
+
+ if split == 'trainval':
+ fns = [fn for fn in fns if ((fn[0:-4] in train_ids) or (fn[0:-4] in val_ids))]
+ elif split == 'train':
+ fns = [fn for fn in fns if fn[0:-4] in train_ids]
+ elif split == 'val':
+ fns = [fn for fn in fns if fn[0:-4] in val_ids]
+ elif split == 'test':
+ fns = [fn for fn in fns if fn[0:-4] in test_ids]
+ else:
+ print('Unknown split: %s. Exiting..' % (split))
+ exit(-1)
+
+ for fn in fns:
+ token = (os.path.splitext(os.path.basename(fn))[0])
+ self.meta[item].append(os.path.join(dir_point, token + '.txt'))
+
+ self.datapath = []
+ for item in self.cat:
+ for fn in self.meta[item]:
+ self.datapath.append((item, fn))
+
+ self.classes = dict(zip(self.cat, range(len(self.cat))))
+ # Mapping from category ('Chair') to a list of int [10,11,12,13] as segmentation labels
+ self.seg_classes = {'Earphone': [16, 17, 18], 'Motorbike': [30, 31, 32, 33, 34, 35], 'Rocket': [41, 42, 43],
+ 'Car': [8, 9, 10, 11], 'Laptop': [28, 29], 'Cap': [6, 7], 'Skateboard': [44, 45, 46],
+ 'Mug': [36, 37], 'Guitar': [19, 20, 21], 'Bag': [4, 5], 'Lamp': [24, 25, 26, 27],
+ 'Table': [47, 48, 49], 'Airplane': [0, 1, 2, 3], 'Pistol': [38, 39, 40],
+ 'Chair': [12, 13, 14, 15], 'Knife': [22, 23]}
+
+ self.cache = {} # from index to (point_set, cls, seg) tuple
+ self.cache_size = 20000
+
+ def __getitem__(self, index):
+ if index in self.cache:
+ point_set, normal, seg, cls = self.cache[index]
+ else:
+ fn = self.datapath[index]
+ cat = self.datapath[index][0]
+ cls = self.classes[cat]
+ cls = np.array([cls]).astype(np.int32)
+ data = np.loadtxt(fn[1]).astype(np.float32)
+ point_set = data[:, 0:3]
+ normal = data[:, 3:6]
+ seg = data[:, -1].astype(np.int32)
+ if len(self.cache) < self.cache_size:
+ self.cache[index] = (point_set, normal, seg, cls)
+
+ if self.normalize:
+ point_set = pc_normalize(point_set)
+
+ choice = np.random.choice(len(seg), self.npoints, replace=True)
+
+        # resample
+        # note that the number of points in some point clouds is less than 2048, thus use random.choice
+        # remember to use the same seed during train and test to get a stable result
+ point_set = point_set[choice, :]
+ seg = seg[choice]
+ normal = normal[choice, :]
+
+ # return point_set, cls, seg, normal
+ return point_set, cls, seg
+
+ def __len__(self):
+ return len(self.datapath)
+
+
+class ShapeNetPart(Dataset):
+ def __init__(self, num_points=2048, partition='train', class_choice=None, sub=None):
+ self.data, self.label, self.seg = load_data_partseg(partition, sub)
+ self.cat2id = {'airplane': 0, 'bag': 1, 'cap': 2, 'car': 3, 'chair': 4,
+ 'earphone': 5, 'guitar': 6, 'knife': 7, 'lamp': 8, 'laptop': 9,
+ 'motor': 10, 'mug': 11, 'pistol': 12, 'rocket': 13, 'skateboard': 14, 'table': 15}
+ self.seg_num = [4, 2, 2, 4, 4, 3, 3, 2, 4, 2, 6, 2, 3, 3, 3, 3]
+ self.index_start = [0, 4, 6, 8, 12, 16, 19, 22, 24, 28, 30, 36, 38, 41, 44, 47]
+ self.num_points = num_points
+ self.partition = partition
+ self.class_choice = class_choice
+
+        if self.class_choice is not None:
+ id_choice = self.cat2id[self.class_choice]
+ indices = (self.label == id_choice).squeeze()
+ self.data = self.data[indices]
+ self.label = self.label[indices]
+ self.seg = self.seg[indices]
+ self.seg_num_all = self.seg_num[id_choice]
+ self.seg_start_index = self.index_start[id_choice]
+ else:
+ self.seg_num_all = 50
+ self.seg_start_index = 0
+
+ def __getitem__(self, item):
+ pointcloud = self.data[item][:self.num_points]
+ label = self.label[item]
+ seg = self.seg[item][:self.num_points] # part seg label
+ if self.partition == 'trainval':
+ pointcloud = translate_pointcloud(pointcloud)
+ indices = list(range(pointcloud.shape[0]))
+ np.random.shuffle(indices)
+ pointcloud = pointcloud[indices]
+ seg = seg[indices]
+ return pointcloud, label, seg
+
+ def __len__(self):
+ return self.data.shape[0]
+
+
+class ShapeNetC(Dataset):
+ def __init__(self, partition='train', class_choice=None, sub=None):
+ self.data, self.label, self.seg = load_data_partseg(partition, sub)
+ self.cat2id = {'airplane': 0, 'bag': 1, 'cap': 2, 'car': 3, 'chair': 4,
+ 'earphone': 5, 'guitar': 6, 'knife': 7, 'lamp': 8, 'laptop': 9,
+ 'motor': 10, 'mug': 11, 'pistol': 12, 'rocket': 13, 'skateboard': 14, 'table': 15}
+ self.seg_num = [4, 2, 2, 4, 4, 3, 3, 2, 4, 2, 6, 2, 3, 3, 3, 3] # number of parts for each category
+ self.index_start = [0, 4, 6, 8, 12, 16, 19, 22, 24, 28, 30, 36, 38, 41, 44, 47]
+ self.partition = partition
+ self.class_choice = class_choice
+ # self.partseg_colors = load_color_partseg()
+
+        if self.class_choice is not None:
+ id_choice = self.cat2id[self.class_choice]
+ indices = (self.label == id_choice).squeeze()
+ self.data = self.data[indices]
+ self.label = self.label[indices]
+ self.seg = self.seg[indices]
+ self.seg_num_all = self.seg_num[id_choice]
+ self.seg_start_index = self.index_start[id_choice]
+ else:
+ self.seg_num_all = 50
+ self.seg_start_index = 0
+
+ def __getitem__(self, item):
+ pointcloud = self.data[item]
+ label = self.label[item]
+ seg = self.seg[item] # part seg label
+ if self.partition == 'trainval':
+ pointcloud = translate_pointcloud(pointcloud)
+ indices = list(range(pointcloud.shape[0]))
+ np.random.shuffle(indices)
+ pointcloud = pointcloud[indices]
+ seg = seg[indices]
+ return pointcloud, label, seg
+
+ def __len__(self):
+ return self.data.shape[0]
+
+
+DATA_DIR = '/mnt/lustre/share/ldkong/data/sets/ShapeNetPart'
+SHAPENET_C_DIR = '/mnt/lustre/share/jwren/to_kld/shapenet_c'
+def load_data_partseg(partition, sub=None):
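+    # each .h5 file stores aligned arrays: 'data' (point clouds), 'label' (object category) and
+    # 'pid' (per-point part labels); `sub` selects one ShapeNet-C corruption file, e.g. 'jitter_4'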
+ all_data = []
+ all_label = []
+ all_seg = []
+ if partition == 'trainval':
+ file = glob.glob(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data', 'hdf5_data', '*train*.h5')) \
+ + glob.glob(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data', 'hdf5_data', '*val*.h5'))
+ elif partition == 'shapenet-c':
+ file = os.path.join(SHAPENET_C_DIR, '%s.h5'%sub)
+ else:
+ file = glob.glob(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data', 'hdf5_data', '*%s*.h5'%partition))
+
+ if partition == 'shapenet-c':
+ # for h5_name in file:
+ # f = h5py.File(h5_name, 'r+')
+ f = h5py.File(file, 'r')
+ data = f['data'][:].astype('float32')
+ label = f['label'][:].astype('int64')
+ seg = f['pid'][:].astype('int64') # part seg label
+ f.close()
+ all_data.append(data)
+ all_label.append(label)
+ all_seg.append(seg)
+ else:
+ for h5_name in file:
+ f = h5py.File(h5_name, 'r')
+ data = f['data'][:].astype('float32')
+ label = f['label'][:].astype('int64')
+ seg = f['pid'][:].astype('int64')
+ f.close()
+ all_data.append(data)
+ all_label.append(label)
+ all_seg.append(seg)
+
+ all_data = np.concatenate(all_data, axis=0)
+ all_label = np.concatenate(all_label, axis=0)
+ all_seg = np.concatenate(all_seg, axis=0)
+ return all_data, all_label, all_seg
+
+
+def load_color_partseg():
+ colors = []
+ labels = []
+ f = open("prepare_data/meta/partseg_colors.txt")
+ for line in json.load(f):
+ colors.append(line['color'])
+ labels.append(line['label'])
+ partseg_colors = np.array(colors)
+ partseg_colors = partseg_colors[:, [2, 1, 0]]
+ partseg_labels = np.array(labels)
+ font = cv2.FONT_HERSHEY_SIMPLEX
+ img_size = 1350
+ img = np.zeros((1350, 1890, 3), dtype="uint8")
+ cv2.rectangle(img, (0, 0), (1900, 1900), [255, 255, 255], thickness=-1)
+ column_numbers = [4, 2, 2, 4, 4, 3, 3, 2, 4, 2, 6, 2, 3, 3, 3, 3]
+ column_gaps = [320, 320, 300, 300, 285, 285]
+ color_size = 64
+ color_index = 0
+ label_index = 0
+ row_index = 16
+ for row in range(0, img_size):
+ column_index = 32
+ for column in range(0, img_size):
+ color = partseg_colors[color_index]
+ label = partseg_labels[label_index]
+ length = len(str(label))
+ cv2.rectangle(img, (column_index, row_index), (column_index + color_size, row_index + color_size), color=(int(color[0]), int(color[1]), int(color[2])), thickness=-1)
+ img = cv2.putText(img, label, (column_index + int(color_size * 1.15), row_index + int(color_size / 2)), font, 0.76, (0, 0, 0), 2)
+ column_index = column_index + column_gaps[column]
+ color_index = color_index + 1
+ label_index = label_index + 1
+ if color_index >= 50:
+ cv2.imwrite("prepare_data/meta/partseg_colors.png", img, [cv2.IMWRITE_PNG_COMPRESSION, 0])
+ return np.array(colors)
+ elif (column + 1 >= column_numbers[row]):
+ break
+ row_index = row_index + int(color_size * 1.3)
+ if (row_index >= img_size):
+ break
+
+
+def translate_pointcloud(pointcloud):
+ xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])
+ xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])
+ translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')
+ return translated_pointcloud
diff --git a/zoo/PAConv/part_seg/util/util.py b/zoo/PAConv/part_seg/util/util.py
new file mode 100644
index 0000000..de53b3c
--- /dev/null
+++ b/zoo/PAConv/part_seg/util/util.py
@@ -0,0 +1,264 @@
+import numpy as np
+import torch
+import torch.nn.functional as F
+
+
+def cal_loss(pred, gold, smoothing=True):
+ ''' Calculate cross entropy loss, apply label smoothing if needed. '''
+
+ gold = gold.contiguous().view(-1)
+
+ if smoothing:
+ eps = 0.2
+ n_class = pred.size(1)
+
+ one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)
+ one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)
+ log_prb = F.log_softmax(pred, dim=1)
+
+ loss = -(one_hot * log_prb).sum(dim=1).mean()
+ else:
+ loss = F.cross_entropy(pred, gold, reduction='mean')
+
+ return loss
+
+
+def to_categorical(y, num_classes):
+ """ 1-hot encodes a tensor """
+ new_y = torch.eye(num_classes)[y.cpu().data.numpy(),]
+ if (y.is_cuda):
+ return new_y.cuda(non_blocking=True)
+ return new_y
+
+
+def compute_overall_iou(pred, target, num_classes):
+ shape_ious = []
+ pred = pred.max(dim=2)[1] # (batch_size, num_points) the pred_class_idx of each point in each sample
+ pred_np = pred.cpu().data.numpy()
+
+ target_np = target.cpu().data.numpy()
+ for shape_idx in range(pred.size(0)): # sample_idx
+ part_ious = []
+ for part in range(num_classes): # class_idx! no matter which category, only consider all part_classes of all categories, check all 50 classes
+ # for target, each point has a class no matter which category owns this point! also 50 classes!!!
+ # only return 1 when both belongs to this class, which means correct:
+            I = np.sum(np.logical_and(pred_np[shape_idx] == part, target_np[shape_idx] == part))
+            # always returns 1 when either belongs to this class:
+            U = np.sum(np.logical_or(pred_np[shape_idx] == part, target_np[shape_idx] == part))
+
+            # number of ground-truth points in this part class:
+            gt_count = np.sum(target_np[shape_idx] == part)
+
+            if gt_count != 0:
+                iou = I / float(U)  # iou across all points for this class
+                part_ious.append(iou)  # append the iou of this class
+ shape_ious.append(np.mean(part_ious)) # each time append an average iou across all classes of this sample (sample_level!)
+ return shape_ious # [batch_size]
+
+
+# simple logger: prints to stdout and appends the same text to a log file
+class IOStream():
+ def __init__(self, path):
+ self.f = open(path, 'a')
+
+ def cprint(self, text):
+ print(text)
+ self.f.write(text+'\n')
+ self.f.flush()
+
+ def close(self):
+ self.f.close()
+
+
+# -----------------------------------------------------------------------------
+# Functions for parsing args
+# -----------------------------------------------------------------------------
+import yaml
+import os
+import logging
+from ast import literal_eval
+import copy
+
+
+class CfgNode(dict):
+ """
+ CfgNode represents an internal node in the configuration tree. It's a simple
+ dict-like container that allows for attribute-based access to keys.
+ """
+
+ def __init__(self, init_dict=None, key_list=None, new_allowed=False):
+ # Recursively convert nested dictionaries in init_dict into CfgNodes
+ init_dict = {} if init_dict is None else init_dict
+ key_list = [] if key_list is None else key_list
+ for k, v in init_dict.items():
+ if type(v) is dict:
+ # Convert dict to CfgNode
+ init_dict[k] = CfgNode(v, key_list=key_list + [k])
+ super(CfgNode, self).__init__(init_dict)
+
+ def __getattr__(self, name):
+ if name in self:
+ return self[name]
+ else:
+ raise AttributeError(name)
+
+ def __setattr__(self, name, value):
+ self[name] = value
+
+ def __str__(self):
+ def _indent(s_, num_spaces):
+ s = s_.split("\n")
+ if len(s) == 1:
+ return s_
+ first = s.pop(0)
+ s = [(num_spaces * " ") + line for line in s]
+ s = "\n".join(s)
+ s = first + "\n" + s
+ return s
+
+ r = ""
+ s = []
+ for k, v in sorted(self.items()):
+            separator = "\n" if isinstance(v, CfgNode) else " "
+            attr_str = "{}:{}{}".format(str(k), separator, str(v))
+ attr_str = _indent(attr_str, 2)
+ s.append(attr_str)
+ r += "\n".join(s)
+ return r
+
+ def __repr__(self):
+ return "{}({})".format(self.__class__.__name__, super(CfgNode, self).__repr__())
+
+
+def load_cfg_from_cfg_file(file):
+ cfg = {}
+ assert os.path.isfile(file) and file.endswith('.yaml'), \
+ '{} is not a yaml file'.format(file)
+
+ with open(file, 'r') as f:
+ cfg_from_file = yaml.safe_load(f)
+
+ for key in cfg_from_file:
+ for k, v in cfg_from_file[key].items():
+ cfg[k] = v
+
+ cfg = CfgNode(cfg)
+ return cfg
+
+
+def merge_cfg_from_list(cfg, cfg_list):
+ new_cfg = copy.deepcopy(cfg)
+ assert len(cfg_list) % 2 == 0
+ for full_key, v in zip(cfg_list[0::2], cfg_list[1::2]):
+ subkey = full_key.split('.')[-1]
+ assert subkey in cfg, 'Non-existent key: {}'.format(full_key)
+ value = _decode_cfg_value(v)
+ value = _check_and_coerce_cfg_value_type(
+ value, cfg[subkey], subkey, full_key
+ )
+ setattr(new_cfg, subkey, value)
+
+ return new_cfg
+
+
+def _decode_cfg_value(v):
+ """Decodes a raw config value (e.g., from a yaml config files or command
+ line argument) into a Python object.
+ """
+ # All remaining processing is only applied to strings
+ if not isinstance(v, str):
+ return v
+ # Try to interpret `v` as a:
+ # string, number, tuple, list, dict, boolean, or None
+ try:
+ v = literal_eval(v)
+ # The following two excepts allow v to pass through when it represents a
+ # string.
+ #
+ # Longer explanation:
+ # The type of v is always a string (before calling literal_eval), but
+ # sometimes it *represents* a string and other times a data structure, like
+ # a list. In the case that v represents a string, what we got back from the
+ # yaml parser is 'foo' *without quotes* (so, not '"foo"'). literal_eval is
+ # ok with '"foo"', but will raise a ValueError if given 'foo'. In other
+ # cases, like paths (v = 'foo/bar' and not v = '"foo/bar"'), literal_eval
+ # will raise a SyntaxError.
+ except ValueError:
+ pass
+ except SyntaxError:
+ pass
+ return v
+
+
+def _check_and_coerce_cfg_value_type(replacement, original, key, full_key):
+ """Checks that `replacement`, which is intended to replace `original` is of
+ the right type. The type is correct if it matches exactly or is one of a few
+ cases in which the type can be easily coerced.
+ """
+ original_type = type(original)
+ replacement_type = type(replacement)
+
+ # The types must match (with some exceptions)
+ if replacement_type == original_type:
+ return replacement
+
+ # Cast replacement from from_type to to_type if the replacement and original
+ # types match from_type and to_type
+ def conditional_cast(from_type, to_type):
+ if replacement_type == from_type and original_type == to_type:
+ return True, to_type(replacement)
+ else:
+ return False, None
+
+ # Conditionally casts
+ # list <-> tuple
+ casts = [(tuple, list), (list, tuple)]
+ # For py2: allow converting from str (bytes) to a unicode string
+ try:
+ casts.append((str, unicode)) # noqa: F821
+ except Exception:
+ pass
+
+ for (from_type, to_type) in casts:
+ converted, converted_value = conditional_cast(from_type, to_type)
+ if converted:
+ return converted_value
+
+ raise ValueError(
+ "Type mismatch ({} vs. {}) with values ({} vs. {}) for config "
+ "key: {}".format(
+ original_type, replacement_type, original, replacement, full_key
+ )
+ )
+
+logger = logging.getLogger(__name__)
+
+
+def _assert_with_logging(cond, msg):
+    if not cond:
+        logger.debug(msg)
+    assert cond, msg
+
+
+def find_free_port():
+ import socket
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ # Binding to port 0 will cause the OS to find an available port for us
+ sock.bind(("", 0))
+ port = sock.getsockname()[1]
+ sock.close()
+ # NOTE: there is still a chance the port could be taken by other processes.
+ return port
+
+
+class AverageMeter(object):
+ """Computes and stores the average and current value"""
+ def __init__(self):
+ self.reset()
+
+ def reset(self):
+ self.val = 0
+ self.avg = 0
+ self.sum = 0
+ self.count = 0
+
+ def update(self, val, n=1):
+ self.val = val
+ self.sum += val * n
+ self.count += n
+ self.avg = self.sum / self.count
diff --git a/zoo/PAConv/scene_seg/README.md b/zoo/PAConv/scene_seg/README.md
new file mode 100755
index 0000000..90b92aa
--- /dev/null
+++ b/zoo/PAConv/scene_seg/README.md
@@ -0,0 +1,94 @@
+# 3D Semantic Segmentation
+
+
+
+
+
+## Installation
+
+### Requirements
+ - Hardware: 1 GPU
+ - Software:
+ PyTorch>=1.5.0, Python>=3, CUDA>=10.2, tensorboardX, tqdm, h5py, pyYaml
+
+### Dataset
+- Download S3DIS [dataset](https://drive.google.com/drive/folders/12wLblskNVBUeryt1xaJTQlIoJac2WehV) and symlink the paths to them as follows (you can alternatively modify the relevant paths specified in folder `config`):
+ ```
+ mkdir -p dataset
+ ln -s /path_to_s3dis_dataset dataset/s3dis
+ ```
+
+## Usage
+
+1. Requirement:
+
+   - Hardware: 1 GPU with at least 6000MB memory for the CUDA version, 2 GPUs with at least 10000MB for the non-CUDA version.
+ - Software:
+ PyTorch>=1.5.0, Python3.7, CUDA>=10.2, tensorboardX, tqdm, h5py, pyYaml
+
+2. Train:
+
+ - Specify the gpu used in config and then do training:
+
+ ```shell
+ sh tool/train.sh s3dis pointnet2_paconv # non-cuda version
+ sh tool/train.sh s3dis pointnet2_paconv_cuda # cuda version
+ ```
+
+3. Test:
+
+   - Download [pretrained models](https://drive.google.com/drive/mobile/folders/10UAEjEIZLjnUndyORygwAW289kW9xMc7/1z5cRUG5d01d78rShJ2qbePMJqqiWzo4d/1zpmr_ircZduiVWDEe8yC-zQ1AfIn4GZF?usp=sharing&sort=13&direction=a) and put them under the folder specified in the config, or modify the specified paths.
+Our CUDA-implemented PAConv achieves [66.01](https://drive.google.com/drive/folders/1h-ZusRArRpB-8T9lZe3FRYZJA3Hm7_ua) mIoU (w/o voting) and the vanilla PAConv without CUDA achieves [66.33](https://drive.google.com/drive/folders/1AacPodXqK6OO-IGnVd1pPLx7pNMMhzW0) mIoU (w/o voting) on the S3DIS Area-5 validation set.
+
+ - For full testing (get listed performance):
+
+ ```shell
+ CUDA_VISIBLE_DEVICES=0 sh tool/test.sh s3dis pointnet2_paconv # non-cuda version
+ CUDA_VISIBLE_DEVICES=0 sh tool/test.sh s3dis pointnet2_paconv_cuda # cuda version
+ ```
+
+ - For 6-fold validation (calculating the metrics with results from different folds merged):
+ 1) Change the [test_area index](https://github.com/CVMI-Lab/PAConv/blob/main/scene_seg/config/s3dis/s3dis_pointnet2_paconv.yaml#L7) in the config file to 1;
+    2) Run the full training and testing; the test result files for Area-1 will be saved to the corresponding paths;
+    3) Repeat steps 1) and 2) with the [test_area index](https://github.com/CVMI-Lab/PAConv/blob/main/scene_seg/config/s3dis/s3dis_pointnet2_paconv.yaml#L7) changed to 2,3,4,5,6 respectively;
+    4) Collect the test result files of all areas into one directory and set the path to this directory [here](https://github.com/CVMI-Lab/PAConv/blob/main/scene_seg/tool/test_s3dis_6fold.py#L52);
+    5) Run the code for 6-fold validation to get the final 6-fold results (a sketch of the merged-metric computation follows the command below):
+ ```shell
+ python test_s3dis_6fold.py
+ ```
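+
+     For intuition, the merged 6-fold metric accumulates per-class intersections and unions over
+     all six areas before dividing. A minimal sketch of that computation, assuming each area's
+     results were exported as flat per-point arrays `Area_<i>_pred.npy` / `Area_<i>_label.npy`
+     (hypothetical file names; the actual result format is handled by `test_s3dis_6fold.py`):
+
+     ```python
+     import numpy as np
+
+     NUM_CLASSES = 13  # S3DIS semantic classes, see data/s3dis/s3dis_names.txt
+
+     def merged_miou(pairs):
+         inter = np.zeros(NUM_CLASSES, dtype=np.int64)
+         union = np.zeros(NUM_CLASSES, dtype=np.int64)
+         for pred, label in pairs:  # one (pred, label) pair per test area
+             for c in range(NUM_CLASSES):
+                 inter[c] += np.sum((pred == c) & (label == c))
+                 union[c] += np.sum((pred == c) | (label == c))
+         iou = inter / np.maximum(union, 1)  # avoid division by zero for absent classes
+         return iou.mean()
+
+     pairs = [(np.load('Area_%d_pred.npy' % a), np.load('Area_%d_label.npy' % a))
+              for a in range(1, 7)]
+     print('6-fold mIoU: %.4f' % merged_miou(pairs))
+     ```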
+
+
+
+[comment]: <> (5. Visualization: [tensorboardX](https://github.com/lanpa/tensorboardX) incorporated for better visualization.)
+
+[comment]: <> ( ```shell)
+
+[comment]: <> ( tensorboard --logdir=run1:$EXP1,run2:$EXP2 --port=6789)
+
+[comment]: <> ( ```)
+
+
+[comment]: <> (6. Other:)
+
+[comment]: <> ( - Video predictions: Youtube [LINK]().)
+
+
+## Citation
+
+If you find our work helpful in your research, please consider citing:
+
+```
+@inproceedings{xu2021paconv,
+ title={PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds},
+ author={Xu, Mutian and Ding, Runyu and Zhao, Hengshuang and Qi, Xiaojuan},
+ booktitle={CVPR},
+ year={2021}
+}
+```
+
+## Contact
+
+You are welcome to send pull requests or share some ideas with us. Contact information: Mutian Xu (mino1018@outlook.com) or Runyu Ding (ryding@eee.hku.hk).
+
+## Acknowledgement
+The code is partially borrowed from [PointWeb](https://github.com/hszhao/PointWeb).
diff --git a/zoo/PAConv/scene_seg/config/s3dis/s3dis_pointnet2_paconv.yaml b/zoo/PAConv/scene_seg/config/s3dis/s3dis_pointnet2_paconv.yaml
new file mode 100755
index 0000000..5b4dda7
--- /dev/null
+++ b/zoo/PAConv/scene_seg/config/s3dis/s3dis_pointnet2_paconv.yaml
@@ -0,0 +1,58 @@
+DATA:
+ data_name: s3dis
+ data_root: dataset/s3dis
+ train_list: dataset/s3dis/list/train12346.txt
+ train_full_folder: dataset/s3dis/trainval_fullarea
+ val_list: dataset/s3dis/list/val5.txt
+ test_area: 5
+ classes: 13
+ fea_dim: 6 # point feature dimension
+ block_size: 1.0
+ stride_rate: 0.5
+ sample_rate: 1.0
+ num_point: 4096 # point number [default: 4096]
+
+TRAIN:
+ arch: pointnet2_paconv_seg
+ use_xyz: True
+ sync_bn: True # adopt sync_bn or not
+ ignore_label: 255
+ train_gpu:
+ train_workers: 8 # data loader workers
+ train_batch_size: 16 # batch size for training
+ train_batch_size_val: 8 # batch size for validation during training, memory and speed tradeoff
+ base_lr: 0.05
+ epochs: 100
+ start_epoch: 0
+ step_epoch: 30
+ multiplier: 0.1
+ momentum: 0.9
+ weight_decay: 0.0001
+ manual_seed:
+ print_freq: 100
+ save_freq: 1
+ save_path: exp/s3dis/pointnet2_paconv/model
+ weight: # path to initial weight (default: none)
+ resume:
+  evaluate: True # evaluate on validation set; extra gpu memory needed and a small batch_size_val is recommended
+ m: 16
+ paconv: [True, True, True, True, False, False, False, False]
+ score_input: ed7
+ kernel_input: neighbor
+ hidden: [16, 16, 16]
+ no_transformation: False
+ color_augment: 0.0
+ norm_no_trans: True
+ correlation_loss: True
+ correlation_loss_scale: 10.0
+
+TEST:
+ test_list: dataset/s3dis/list/val5.txt
+ test_list_full: dataset/s3dis/list/val5_full.txt
+  split: val # split in [train, val, test]
+ test_gpu: [0]
+ test_workers: 4
+ test_batch_size: 8
+ model_path: exp/s3dis/pointnet2_paconv/model/best_train.pth
+ save_folder: exp/s3dis/pointnet2_paconv/result/best_epoch/val5_0.5 # results save folder
+ names_path: data/s3dis/s3dis_names.txt
diff --git a/zoo/PAConv/scene_seg/config/s3dis/s3dis_pointnet2_paconv_cuda.yaml b/zoo/PAConv/scene_seg/config/s3dis/s3dis_pointnet2_paconv_cuda.yaml
new file mode 100755
index 0000000..f235c33
--- /dev/null
+++ b/zoo/PAConv/scene_seg/config/s3dis/s3dis_pointnet2_paconv_cuda.yaml
@@ -0,0 +1,60 @@
+DATA:
+ data_name: s3dis
+ data_root: dataset/s3dis
+ train_list: dataset/s3dis/list/train12346.txt
+ train_full_folder: dataset/s3dis/trainval_fullarea
+ val_list: dataset/s3dis/list/val5.txt
+ test_area: 5
+ classes: 13
+ fea_dim: 6 # point feature dimension
+ block_size: 1.0
+ stride_rate: 0.5
+ sample_rate: 1.0
+ num_point: 4096 # point number [default: 4096]
+
+TRAIN:
+ arch: pointnet2_paconv_seg
+ use_xyz: True
+ sync_bn: True # adopt sync_bn or not
+ ignore_label: 255
+ train_gpu:
+ train_workers: 8 # data loader workers
+ train_batch_size: 16 # batch size for training
+ train_batch_size_val: 8 # batch size for validation during training, memory and speed tradeoff
+ base_lr: 0.05
+ epochs: 100
+ start_epoch: 0
+ step_epoch: 30
+ multiplier: 0.1
+ lr_multidecay: True
+ momentum: 0.9
+ weight_decay: 0.0001
+ manual_seed:
+ print_freq: 100
+ save_freq: 1
+ save_path: exp/s3dis/pointnet2_paconv_cuda/model
+ weight: # path to initial weight (default: none)
+ resume:
+  evaluate: True # evaluate on validation set; extra gpu memory needed and a small batch_size_val is recommended
+ m: 16
+ paconv: [True, True, True, True, False, False, False, False]
+ score_input: ed7
+ kernel_input: neighbor
+ hidden: [8, 16, 16]
+ no_transformation: False
+ color_augment: 0.0
+ norm_no_trans: True
+ correlation_loss: True
+ correlation_loss_scale: 10.0
+ cuda: True
+
+TEST:
+ test_list: dataset/s3dis/list/val5.txt
+ test_list_full: dataset/s3dis/list/val5_full.txt
+  split: val # split in [train, val, test]
+ test_gpu: [0]
+ test_workers: 4
+ test_batch_size: 8
+ model_path: exp/s3dis/pointnet2_paconv_cuda/model/best_train.pth
+ save_folder: exp/s3dis/pointnet2_paconv_cuda/result/best_epoch/val5_0.5 # results save folder
+ names_path: data/s3dis/s3dis_names.txt
diff --git a/zoo/PAConv/scene_seg/data/s3dis/s3dis_names.txt b/zoo/PAConv/scene_seg/data/s3dis/s3dis_names.txt
new file mode 100755
index 0000000..2defe3d
--- /dev/null
+++ b/zoo/PAConv/scene_seg/data/s3dis/s3dis_names.txt
@@ -0,0 +1,13 @@
+ceiling
+floor
+wall
+beam
+column
+window
+door
+chair
+table
+bookcase
+sofa
+board
+clutter
diff --git a/zoo/PAConv/scene_seg/figure/paconv.jpg b/zoo/PAConv/scene_seg/figure/paconv.jpg
new file mode 100644
index 0000000..0e31dbb
Binary files /dev/null and b/zoo/PAConv/scene_seg/figure/paconv.jpg differ
diff --git a/zoo/PAConv/scene_seg/figure/semseg_vis.jpg b/zoo/PAConv/scene_seg/figure/semseg_vis.jpg
new file mode 100644
index 0000000..5493302
Binary files /dev/null and b/zoo/PAConv/scene_seg/figure/semseg_vis.jpg differ
diff --git a/zoo/PAConv/scene_seg/model/__init__.py b/zoo/PAConv/scene_seg/model/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/zoo/PAConv/scene_seg/model/pointnet/pointnet.py b/zoo/PAConv/scene_seg/model/pointnet/pointnet.py
new file mode 100755
index 0000000..601a30e
--- /dev/null
+++ b/zoo/PAConv/scene_seg/model/pointnet/pointnet.py
@@ -0,0 +1,155 @@
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+
+class STN3D(nn.Module):
+ def __init__(self, c):
+ super(STN3D, self).__init__()
+ self.c = c
+ self.conv1 = nn.Conv1d(self.c, 64, 1)
+ self.conv2 = nn.Conv1d(64, 128, 1)
+ self.conv3 = nn.Conv1d(128, 1024, 1)
+ self.mp = nn.AdaptiveMaxPool1d(1)
+ self.fc1 = nn.Linear(1024, 512)
+ self.fc2 = nn.Linear(512, 256)
+ self.fc3 = nn.Linear(256, self.c*self.c)
+
+ self.bn1 = nn.BatchNorm1d(64)
+ self.bn2 = nn.BatchNorm1d(128)
+ self.bn3 = nn.BatchNorm1d(1024)
+ self.bn4 = nn.BatchNorm1d(512)
+ self.bn5 = nn.BatchNorm1d(256)
+
+ def forward(self, x):
+ batch_size = x.size()[0]
+ x = F.relu(self.bn1(self.conv1(x)))
+ x = F.relu(self.bn2(self.conv2(x)))
+ x = F.relu(self.bn3(self.conv3(x)))
+ x = self.mp(x)
+ x = x.view(-1, 1024)
+ x = F.relu(self.bn4(self.fc1(x)))
+ x = F.relu(self.bn5(self.fc2(x)))
+ x = self.fc3(x)
+
+ iden = torch.eye(self.c).view(1, -1).repeat(batch_size, 1)
+ if x.is_cuda:
+ iden = iden.cuda()
+ x = x + iden
+ x = x.view(-1, self.c, self.c)
+ return x
+
+
+class PointNetFeat(nn.Module):
+ def __init__(self, c=3, global_feat=True):
+ super(PointNetFeat, self).__init__()
+ self.global_feat = global_feat
+ self.stn1 = STN3D(c)
+ self.conv1 = nn.Conv1d(c, 64, 1)
+ self.conv2 = nn.Conv1d(64, 64, 1)
+ self.stn2 = STN3D(64)
+ self.conv3 = nn.Conv1d(64, 64, 1)
+ self.conv4 = nn.Conv1d(64, 128, 1)
+ self.conv5 = nn.Conv1d(128, 1024, 1)
+ self.mp = nn.AdaptiveMaxPool1d(1)
+
+ self.bn1 = nn.BatchNorm1d(64)
+ self.bn2 = nn.BatchNorm1d(64)
+ self.bn3 = nn.BatchNorm1d(64)
+ self.bn4 = nn.BatchNorm1d(128)
+ self.bn5 = nn.BatchNorm1d(1024)
+
+ def forward(self, x):
+ stn1 = self.stn1(x)
+ x = torch.bmm(stn1, x)
+ x = F.relu(self.bn1(self.conv1(x)))
+ x = F.relu(self.bn2(self.conv2(x)))
+ stn2 = self.stn2(x)
+ x_tmp = torch.bmm(stn2, x)
+ x = F.relu(self.bn3(self.conv3(x_tmp)))
+ x = F.relu(self.bn4(self.conv4(x)))
+ x = F.relu(self.bn5(self.conv5(x)))
+ x = self.mp(x)
+ x = x.view(-1, 1024)
+
+ if not self.global_feat:
+ x = x.view(-1, 1024, 1).repeat(1, 1, x_tmp.size()[2])
+ x = torch.cat([x_tmp, x], 1)
+ return x
+
+
+class PointNetCls(nn.Module):
+ def __init__(self, c=3, k=40, dropout=0.3, sync_bn=False):
+ super(PointNetCls, self).__init__()
+ self.feat = PointNetFeat(c, global_feat=True)
+ self.fc1 = nn.Linear(1024, 512)
+ self.fc2 = nn.Linear(512, 256)
+ self.fc3 = nn.Linear(256, k)
+
+ self.bn1 = nn.BatchNorm1d(512)
+ self.bn2 = nn.BatchNorm1d(256)
+ self.dropout = nn.Dropout(p=dropout)
+
+ def forward(self, x):
+ x = x.transpose(1, 2)
+ x = self.feat(x)
+ x = F.relu(self.bn1(self.fc1(x)))
+ x = F.relu(self.bn2(self.fc2(x)))
+ x = self.dropout(x)
+ x = self.fc3(x)
+ return x
+
+
+# Segmentation network with 9-channel input (XYZ, RGB, and location normalized to the room extent, from 0 to 1), with STN3D on input and feature
+class PointNetSeg(nn.Module):
+ def __init__(self, c=9, k=13, sync_bn=False):
+ super(PointNetSeg, self).__init__()
+ self.feat = PointNetFeat(c, global_feat=False)
+ self.conv1 = nn.Conv1d(1088, 512, 1)
+ self.conv2 = nn.Conv1d(512, 256, 1)
+ self.conv3 = nn.Conv1d(256, 128, 1)
+ self.conv4 = nn.Conv1d(128, 128, 1)
+ self.conv5 = nn.Conv1d(128, k, 1)
+
+ self.bn1 = nn.BatchNorm1d(512)
+ self.bn2 = nn.BatchNorm1d(256)
+ self.bn3 = nn.BatchNorm1d(128)
+ self.bn4 = nn.BatchNorm1d(128)
+
+ def forward(self, x):
+ x = x.transpose(1, 2)
+ x = self.feat(x)
+ x = F.relu(self.bn1(self.conv1(x)))
+ x = F.relu(self.bn2(self.conv2(x)))
+ x = F.relu(self.bn3(self.conv3(x)))
+ x = F.relu(self.bn4(self.conv4(x)))
+ x = self.conv5(x)
+ return x
+
+
+if __name__ == '__main__':
+ import os
+ os.environ["CUDA_VISIBLE_DEVICES"] = '0'
+
+ sim_data = torch.rand(16, 2048, 3)
+
+ trans = STN3D(c=3)
+ out = trans(sim_data.transpose(1, 2))
+ print('stn', out.size())
+
+ point_feat = PointNetFeat(global_feat=True)
+ out = point_feat(sim_data.transpose(1, 2))
+ print('global feat', out.size())
+
+ point_feat = PointNetFeat(global_feat=False)
+ out = point_feat(sim_data.transpose(1, 2))
+ print('point feat', out.size())
+
+ cls = PointNetCls(c=3, k=40)
+ out = cls(sim_data)
+ print('class', out.size())
+
+ sim_data = torch.rand(16, 2048, 9)
+ seg = PointNetSeg(c=9, k=13)
+ out = seg(sim_data)
+ print('seg', out.size())
diff --git a/zoo/PAConv/scene_seg/model/pointnet2/__init__.py b/zoo/PAConv/scene_seg/model/pointnet2/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/zoo/PAConv/scene_seg/model/pointnet2/paconv.py b/zoo/PAConv/scene_seg/model/pointnet2/paconv.py
new file mode 100644
index 0000000..46b5e36
--- /dev/null
+++ b/zoo/PAConv/scene_seg/model/pointnet2/paconv.py
@@ -0,0 +1,257 @@
+from typing import List, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import copy
+
+from util.paconv_util import weight_init, assign_score, get_ed, assign_kernel_withoutk
+from lib.paconv_lib.functional import assign_score_withk as assign_score_cuda
+
+
+class ScoreNet(nn.Module):
+
+ def __init__(self, in_channel, out_channel, hidden_unit=[8, 8], last_bn=False, temp=1):
+ super(ScoreNet, self).__init__()
+ self.hidden_unit = hidden_unit
+ self.last_bn = last_bn
+ self.mlp_convs_hidden = nn.ModuleList()
+ self.mlp_bns_hidden = nn.ModuleList()
+ self.temp = temp
+
+ hidden_unit = list() if hidden_unit is None else copy.deepcopy(hidden_unit)
+ hidden_unit.append(out_channel)
+ hidden_unit.insert(0, in_channel)
+
+        for i in range(1, len(hidden_unit)):  # in_channel -> hidden units -> out_channel
+ self.mlp_convs_hidden.append(nn.Conv2d(hidden_unit[i - 1], hidden_unit[i], 1,
+ bias=False if i < len(hidden_unit) - 1 else not last_bn))
+ self.mlp_bns_hidden.append(nn.BatchNorm2d(hidden_unit[i]))
+
+ def forward(self, xyz, score_norm='softmax'):
+ # xyz : B*3*N*K
+ B, _, N, K = xyz.size()
+ scores = xyz
+
+ for i, conv in enumerate(self.mlp_convs_hidden):
+ if i < len(self.mlp_convs_hidden) - 1:
+ scores = F.relu(self.mlp_bns_hidden[i](conv(scores)))
+ else: # if the output layer, no ReLU
+ scores = conv(scores)
+ if self.last_bn:
+ scores = self.mlp_bns_hidden[i](scores)
+ if score_norm == 'softmax':
+ scores = F.softmax(scores/self.temp, dim=1) # + 0.5 # B*m*N*K
+ elif score_norm == 'sigmoid':
+ scores = torch.sigmoid(scores/self.temp) # + 0.5 # B*m*N*K
+ elif score_norm is None:
+ scores = scores
+ else:
+ raise ValueError('Not Implemented!')
+
+ scores = scores.permute(0, 2, 3, 1) # B*N*K*m
+
+ return scores
+
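+# ScoreNet is a small shared MLP over per-neighbor geometric inputs: it maps a
+# (B, C_in, N, K) tensor of e.g. relative coordinates to (B, N, K, m) scores,
+# one per kernel in the PAConv weight bank. A shape sketch (values illustrative):
+#   scorenet = ScoreNet(3, 8)                # 3-d input, m = 8 kernels
+#   rel_xyz = torch.randn(2, 3, 1024, 32)    # B*3*N*K
+#   scores = scorenet(rel_xyz)               # (2, 1024, 32, 8); softmax over m
+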
+
+class PAConv(nn.Module):
+
+ def __init__(self, input_dim, output_dim, bn, activation, config):
+ super().__init__()
+ self.score_input = config.get('score_input', 'identity')
+ self.score_norm = config.get('score_norm', 'softmax')
+ self.temp = config.get('temp', 1)
+ self.init = config.get('init', 'kaiming')
+ self.hidden = config.get('hidden', [16])
+ self.m = config.get('m', 8)
+ self.kernel_input = config.get('kernel_input', 'neighbor')
+ self.input_dim = input_dim
+ self.output_dim = output_dim
+
+ self.bn = nn.BatchNorm2d(output_dim, momentum=0.1) if bn else None
+ self.activation = activation
+
+ if self.kernel_input == 'identity':
+ self.kernel_mul = 1
+ elif self.kernel_input == 'neighbor':
+ self.kernel_mul = 2
+ else:
+            raise ValueError('Unknown kernel_input: {}'.format(self.kernel_input))
+
+ if self.score_input == 'identity':
+ self.scorenet_input_dim = 3
+ elif self.score_input == 'neighbor':
+ self.scorenet_input_dim = 6
+ elif self.score_input == 'ed7':
+ self.scorenet_input_dim = 7
+ elif self.score_input == 'ed':
+ self.scorenet_input_dim = 10
+        else:
+            raise ValueError('Unknown score_input: {}'.format(self.score_input))
+
+ if self.init == "kaiming":
+ _init = nn.init.kaiming_normal_
+ elif self.init == "xavier":
+ _init = nn.init.xavier_normal_
+ else:
+ raise ValueError('Not implemented!')
+
+ self.scorenet = ScoreNet(self.scorenet_input_dim, self.m, hidden_unit=self.hidden, last_bn=False, temp=self.temp)
+
+ tensor1 = _init(torch.empty(self.m, input_dim * self.kernel_mul, output_dim)).contiguous()
+ tensor1 = tensor1.permute(1, 0, 2).reshape(input_dim * self.kernel_mul, self.m * output_dim)
+ self.weightbank = nn.Parameter(tensor1, requires_grad=True)
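+        # Weight bank layout: m kernels of shape (Cin * kernel_mul, Cout) are
+        # flattened into one (Cin * kernel_mul, m * Cout) matrix so a single
+        # matmul evaluates all m kernels at once; per-kernel outputs are
+        # recovered by the .view(B, N1, K, m, -1) in forward().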
+
+ for m in self.modules():
+ weight_init(m)
+
+ def forward(self, args):
+ r"""
+ Parameters
+ ----------
+ in_feat : torch.Tensor
+ (B, C, N1, K) tensor of the descriptors of the the features
+ grouped_xyz : torch.Tensor
+ (B, 3, N1, K) tensor of the descriptors of the the features
+ Returns
+ -------
+ out_feat : torch.Tensor
+ (B, C, N1, \sum_k(mlps[k][-1])) tensor of the new_features descriptors
+ """
+
+ in_feat, grouped_xyz = args
+ B, _, N1, K = in_feat.size()
+ center_xyz = grouped_xyz[..., :1].repeat(1, 1, 1, K)
+ grouped_xyz_diff = grouped_xyz - center_xyz # b,3,n1,k
+ if self.kernel_input == 'neighbor':
+ in_feat_c = in_feat[..., :1].repeat(1, 1, 1, K)
+ in_feat_diff = in_feat - in_feat_c
+ in_feat = torch.cat((in_feat_diff, in_feat), dim=1)
+
+ ed = get_ed(center_xyz.permute(0, 2, 3, 1).reshape(B * N1 * K, -1),
+ grouped_xyz.permute(0, 2, 3, 1).reshape(B * N1 * K, -1)).reshape(B, 1, N1, K)
+ if self.score_input == 'neighbor':
+ xyz = torch.cat((grouped_xyz_diff, grouped_xyz), dim=1)
+ elif self.score_input == 'identity':
+ xyz = grouped_xyz_diff
+ elif self.score_input == 'ed7':
+ xyz = torch.cat((center_xyz, grouped_xyz_diff, ed), dim=1)
+        elif self.score_input == 'ed':  # the 10-d case registered in __init__
+ xyz = torch.cat((center_xyz, grouped_xyz, grouped_xyz_diff, ed), dim=1)
+ else:
+ raise NotImplementedError
+
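+        # PAConv builds a position-adaptive kernel as a score-weighted sum of the
+        # m weight matrices: out_j = sum_m score_m(x_j - x_i) * (feat_j @ W_m).
+        # The single matmul against the flattened weight bank below evaluates all
+        # m candidate kernels at once; assign_score then mixes them per neighbor.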
+ scores = self.scorenet(xyz, score_norm=self.score_norm) # b,n,k,m
+ out_feat = torch.matmul(in_feat.permute(0, 2, 3, 1), self.weightbank).view(B, N1, K, self.m, -1) # b,n1,k,m,cout
+ out_feat = assign_score(score=scores, point_input=out_feat) # b,n,k,o1,
+ out_feat = out_feat.permute(0, 3, 1, 2) # b,o1,n,k
+
+ if self.bn is not None:
+ out_feat = self.bn(out_feat)
+ if self.activation is not None:
+ out_feat = self.activation(out_feat)
+
+ return out_feat, grouped_xyz # b,o1,n,k b,3,n1,k
+
+ def __repr__(self):
+ return 'PAConv(in_feat: {:d}, out_feat: {:d}, m: {:d}, hidden: {}, scorenet_input: {}, kernel_size: {})'.\
+ format(self.input_dim, self.output_dim, self.m, self.hidden, self.scorenet_input_dim, self.weightbank.shape)
+
+
+class PAConvCUDA(PAConv):
+
+ def __init__(self, input_dim, output_dim, bn, activation, config):
+ super(PAConvCUDA, self).__init__(input_dim, output_dim, bn, activation, config)
+
+ def forward(self, args):
+
+ r"""
+ Parameters
+ ----------
+ in_feat : torch.Tensor
+ (B, C, N0) tensor of the descriptors of the the features
+ grouped_xyz : torch.Tensor
+ (B, 3, N1, K) tensor of the descriptors of the the features
+ grouped_idx : torch.Tensor
+ (B, N1, K) tensor of the descriptors of the the features
+ Returns
+ -------
+ out_feat : torch.Tensor
+ (B, C, N1) tensor of the new_features descriptors
+ new_xyz : torch.Tensor
+ (B, N1, 3) tensor of the new features' xyz
+ """
+ in_feat, grouped_xyz, grouped_idx = args
+ B, Cin, N0 = in_feat.size()
+ _, _, N1, K = grouped_xyz.size()
+ center_xyz = grouped_xyz[..., :1].repeat(1, 1, 1, K)
+ grouped_xyz_diff = grouped_xyz - center_xyz # [B, 3, N1, K]
+
+ ed = get_ed(center_xyz.permute(0, 2, 3, 1).reshape(B * N1 * K, -1),
+ grouped_xyz.permute(0, 2, 3, 1).reshape(B * N1 * K, -1)).reshape(B, 1, N1, K)
+
+ if self.score_input == 'neighbor':
+ xyz = torch.cat((grouped_xyz_diff, grouped_xyz), dim=1)
+ elif self.score_input == 'identity':
+ xyz = grouped_xyz_diff
+ elif self.score_input == 'ed7':
+ xyz = torch.cat((center_xyz, grouped_xyz_diff, ed), dim=1)
+ elif self.score_input == 'ed':
+ xyz = torch.cat((center_xyz, grouped_xyz, grouped_xyz_diff, ed), dim=1)
+ else:
+ raise NotImplementedError
+
+ scores = self.scorenet(xyz, score_norm=self.score_norm) # b,n1,k,m
+ kernel_feat, half_kernel_feat = assign_kernel_withoutk(in_feat, self.weightbank, self.m)
+ out_feat = assign_score_cuda(scores, kernel_feat, half_kernel_feat, grouped_idx, aggregate='sum') # b,o1,n1,k
+ if self.bn is not None:
+ out_feat = self.bn(out_feat)
+ if self.activation is not None:
+ out_feat = self.activation(out_feat)
+
+ return out_feat, grouped_xyz, grouped_idx # b,o1,n,k
+
+ def __repr__(self):
+ return 'PAConvCUDA(in_feat: {:d}, out_feat: {:d}, m: {:d}, hidden: {}, scorenet_input: {}, kernel_size: {})'.\
+ format(self.input_dim, self.output_dim, self.m, self.hidden, self.scorenet_input_dim, self.weightbank.shape)
+
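+# SharedPAConv chains PAConv layers the way a SharedMLP chains 1x1 convs: a
+# channel spec like [67, 32, 32, 64] yields three PAConv layers with BN/ReLU.
+# A minimal construction sketch (hypothetical config values, not a shipped yaml):
+#   cfg = {'m': 8, 'score_input': 'identity', 'score_norm': 'softmax',
+#          'hidden': [16], 'init': 'kaiming', 'kernel_input': 'neighbor'}
+#   conv = SharedPAConv([67, 32, 32, 64], config=cfg, bn=True)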
+
+class SharedPAConv(nn.Sequential):
+
+ def __init__(
+ self,
+ args: List[int],
+ *,
+ config,
+ bn: bool = False,
+ activation=nn.ReLU(inplace=True),
+ preact: bool = False,
+ first: bool = False,
+ name: str = "",
+ ):
+ super().__init__()
+
+ for i in range(len(args) - 1):
+ if config.get('cuda', False):
+ self.add_module(
+ name + 'layer{}'.format(i),
+ PAConvCUDA(
+ args[i],
+ args[i + 1],
+ bn=(not first or not preact or (i != 0)) and bn,
+ activation=activation
+ if (not first or not preact or (i != 0)) else None,
+ config=config,
+ )
+ )
+ else:
+ self.add_module(
+ name + 'layer{}'.format(i),
+ PAConv(
+ args[i],
+ args[i + 1],
+ bn=(not first or not preact or (i != 0)) and bn,
+ activation=activation
+ if (not first or not preact or (i != 0)) else None,
+ config=config,
+ )
+ )
diff --git a/zoo/PAConv/scene_seg/model/pointnet2/pointnet2_modules.py b/zoo/PAConv/scene_seg/model/pointnet2/pointnet2_modules.py
new file mode 100644
index 0000000..7d29dd8
--- /dev/null
+++ b/zoo/PAConv/scene_seg/model/pointnet2/pointnet2_modules.py
@@ -0,0 +1,169 @@
+from typing import List, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+from lib.pointops.functions import pointops
+from util import block
+
+
+class _PointNet2SAModuleBase(nn.Module):
+ def __init__(self):
+ super().__init__()
+ self.npoint = None
+ self.groupers = None
+ self.mlps = None
+
+    def forward(self, xyz: torch.Tensor, features: torch.Tensor = None) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Parameters
+ ----------
+ xyz : torch.Tensor
+ (B, N, 3) tensor of the xyz coordinates of the features
+ features : torch.Tensor
+            (B, C, N) tensor of the feature descriptors
+ Returns
+ -------
+ new_xyz : torch.Tensor
+ (B, npoint, 3) tensor of the new features' xyz
+ new_features : torch.Tensor
+            (B, \sum_k(mlps[k][-1]), npoint) tensor of the new feature descriptors
+ """
+ new_features_list = []
+ xyz_trans = xyz.transpose(1, 2).contiguous()
+ new_xyz = pointops.gathering(
+ xyz_trans,
+ pointops.furthestsampling(xyz, self.npoint)
+ ).transpose(1, 2).contiguous() if self.npoint is not None else None
+ for i in range(len(self.groupers)):
+ new_features, _ = self.groupers[i](xyz, new_xyz, features) # (B, C, npoint, nsample)
+ new_features = self.mlps[i](new_features) # (B, mlp[-1], npoint, nsample)
+ new_features = F.max_pool2d(new_features, kernel_size=[1, new_features.size(3)]) # (B, mlp[-1], npoint, 1)
+ new_features = new_features.squeeze(-1) # (B, mlp[-1], npoint)
+ new_features_list.append(new_features)
+ return new_xyz, torch.cat(new_features_list, dim=1)
+
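+# Set-abstraction pipeline: furthest point sampling picks npoint centroids,
+# each grouper ball-queries nsample neighbours around them, a shared MLP lifts
+# every (centroid, neighbour) pair, and the max-pool over the K axis reduces
+# each neighbourhood to a single feature vector.
+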
+
+class PointNet2SAModuleMSG(_PointNet2SAModuleBase):
+ r"""Pointnet set abstraction layer with multiscale grouping
+ Parameters
+ ----------
+ npoint : int
+ Number of features
+ radii : list of float32
+ list of radii to group with
+ nsamples : list of int32
+ Number of samples in each ball query
+ mlps : list of list of int32
+ Spec of the pointnet_old before the global max_pool for each scale
+ bn : bool
+ Use batchnorm
+ """
+ def __init__(self, *, npoint: int, radii: List[float], nsamples: List[int], mlps: List[List[int]], bn: bool = True, use_xyz: bool = True):
+ super().__init__()
+ assert len(radii) == len(nsamples) == len(mlps)
+ self.npoint = npoint
+ self.groupers = nn.ModuleList()
+ self.mlps = nn.ModuleList()
+ for i in range(len(radii)):
+ radius = radii[i]
+ nsample = nsamples[i]
+ self.groupers.append(
+ pointops.QueryAndGroup(radius, nsample, use_xyz=use_xyz)
+ if npoint is not None else pointops.GroupAll(use_xyz)
+ )
+ mlp_spec = mlps[i]
+ if use_xyz:
+ mlp_spec[0] += 3
+ self.mlps.append(block.SharedMLP(mlp_spec, bn=bn))
+
+
+class PointNet2SAModule(PointNet2SAModuleMSG):
+ r"""Pointnet set abstraction layer
+ Parameters
+ ----------
+ npoint : int
+ Number of features
+ radius : float
+ Radius of ball
+ nsample : int
+ Number of samples in the ball query
+ mlp : list
+ Spec of the pointnet_old before the global max_pool
+ bn : bool
+ Use batchnorm
+ """
+ def __init__(self, *, mlp: List[int], npoint: int = None, radius: float = None, nsample: int = None, bn: bool = True, use_xyz: bool = True):
+ super().__init__(mlps=[mlp], npoint=npoint, radii=[radius], nsamples=[nsample], bn=bn, use_xyz=use_xyz)
+
+
+class PointNet2FPModule(nn.Module):
+ r"""Propagates the features of one set to another
+ Parameters
+ ----------
+ mlp : list
+ Pointnet module parameters
+ bn : bool
+ Use batchnorm
+ """
+ def __init__(self, *, mlp: List[int], bn: bool = True):
+ super().__init__()
+ self.mlp = block.SharedMLP(mlp, bn=bn)
+
+ def forward(self, unknown: torch.Tensor, known: torch.Tensor, unknow_feats: torch.Tensor, known_feats: torch.Tensor) -> torch.Tensor:
+ r"""
+ Parameters
+ ----------
+ unknown : torch.Tensor
+ (B, n, 3) tensor of the xyz positions of the unknown features
+ known : torch.Tensor
+ (B, m, 3) tensor of the xyz positions of the known features
+ unknow_feats : torch.Tensor
+            (B, C1, n) tensor of the features to be propagated to
+        known_feats : torch.Tensor
+            (B, C2, m) tensor of features to be propagated
+ Returns
+ -------
+ new_features : torch.Tensor
+ (B, mlp[-1], n) tensor of the features of the unknown features
+ """
+
+ if known is not None:
+ dist, idx = pointops.nearestneighbor(unknown, known)
+ dist_recip = 1.0 / (dist + 1e-8)
+ norm = torch.sum(dist_recip, dim=2, keepdim=True)
+ weight = dist_recip / norm
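+            # Inverse-distance interpolation over the k nearest known points
+            # (3 in the original PointNet++): e.g. distances (1, 2, 4) give
+            # reciprocals (1, 0.5, 0.25) and weights (4/7, 2/7, 1/7).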
+ interpolated_feats = pointops.interpolation(known_feats, idx, weight)
+ else:
+ interpolated_feats = known_feats.expand(*known_feats.size()[0:2], unknown.size(1))
+
+ if unknow_feats is not None:
+ new_features = torch.cat([interpolated_feats, unknow_feats], dim=1) # (B, C2 + C1, n)
+ else:
+ new_features = interpolated_feats
+ return self.mlp(new_features.unsqueeze(-1)).squeeze(-1)
+
+
+if __name__ == "__main__":
+ torch.manual_seed(1)
+ torch.cuda.manual_seed_all(1)
+ xyz = torch.randn(2, 9, 3, requires_grad=True).cuda()
+ xyz_feats = torch.randn(2, 9, 6, requires_grad=True).cuda()
+
+ test_module = PointNet2SAModuleMSG(npoint=2, radii=[5.0, 10.0], nsamples=[6, 3], mlps=[[9, 3], [9, 6]])
+ test_module.cuda()
+ print(test_module(xyz, xyz_feats))
+
+ # test_module = PointNet2FPModule(mlp=[6, 6])
+ # test_module.cuda()
+ # from torch.autograd import gradcheck
+ # inputs = (xyz, xyz, None, xyz_feats)
+ # test = gradcheck(test_module, inputs, eps=1e-6, atol=1e-4)
+ # print(test)
+
+ for _ in range(1):
+ _, new_features = test_module(xyz, xyz_feats)
+ new_features.backward(torch.cuda.FloatTensor(*new_features.size()).fill_(1))
+ print(new_features)
+ print(xyz.grad)
diff --git a/zoo/PAConv/scene_seg/model/pointnet2/pointnet2_paconv_modules.py b/zoo/PAConv/scene_seg/model/pointnet2/pointnet2_paconv_modules.py
new file mode 100644
index 0000000..a38e7af
--- /dev/null
+++ b/zoo/PAConv/scene_seg/model/pointnet2/pointnet2_paconv_modules.py
@@ -0,0 +1,261 @@
+from typing import List, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+from lib.pointops.functions import pointops
+from util import block
+from model.pointnet2 import paconv
+
+
+class _PointNet2SAModuleBase(nn.Module):
+ def __init__(self):
+ super().__init__()
+ self.npoint = None
+ self.groupers = None
+ self.mlps = None
+
+    def forward(self, xyz: torch.Tensor, features: torch.Tensor = None) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Parameters
+ ----------
+ xyz : torch.Tensor
+ (B, N0, 3) tensor of the xyz coordinates of the features
+ features : torch.Tensor
+            (B, Cin, N0) tensor of the feature descriptors
+ Returns
+ -------
+ new_xyz : torch.Tensor
+ (B, N1, 3) tensor of the new features' xyz
+ new_features : torch.Tensor
+            (B, Cout, N1) tensor of the new feature descriptors
+ """
+ new_features_list = []
+ xyz_trans = xyz.transpose(1, 2).contiguous()
+ if self.npoint is None:
+ self.npoint = xyz.shape[1] // 4
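+            # Heuristic of this codebase: when npoint is not configured, each
+            # SA stage downsamples the point set by a fixed factor of 4.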
+ new_xyz_idx = pointops.furthestsampling(xyz, self.npoint) # (B, N1)
+ new_xyz = pointops.gathering(
+ xyz_trans,
+ new_xyz_idx
+ ).transpose(1, 2).contiguous() if self.npoint is not None else None # (B, N1, 3)
+ for i in range(len(self.groupers)):
+ new_features, grouped_xyz, _ = self.groupers[i](xyz, new_xyz, features)
+ # (B, Cin+3, N1, K), (B, 3, N1, K)
+ if isinstance(self.mlps[i], paconv.SharedPAConv):
+ new_features = self.mlps[i]((new_features, grouped_xyz))[0] # (B, Cout, N1, K)
+ else:
+ new_features = self.mlps[i](new_features) # (B, Cout, N1, K)
+ if self.agg == 'max':
+ new_features = F.max_pool2d(new_features, kernel_size=[1, new_features.size(-1)]) # (B, Cout, N1, 1)
+ elif self.agg == 'sum':
+ new_features = torch.sum(new_features, dim=-1, keepdim=True) # (B, Cout, N1, 1)
+ elif self.agg == 'avg':
+ new_features = torch.mean(new_features, dim=-1, keepdim=True) # (B, Cout, N1, 1)
+ else:
+ raise ValueError('Not implemented aggregation mode.')
+ new_features = new_features.squeeze(-1) # (B, Cout, N1)
+ new_features_list.append(new_features)
+ return new_xyz, torch.cat(new_features_list, dim=1)
+
+
+class PointNet2SAModuleMSG(_PointNet2SAModuleBase):
+ r"""Pointnet set abstraction layer with multiscale grouping
+ Parameters
+ ----------
+ npoint : int
+ Number of features
+ radii : list of float32
+ list of radii to group with
+ nsamples : list of int32
+ Number of samples in each ball query
+ mlps : list of list of int32
+ Spec of the pointnet_old before the global max_pool for each scale
+ bn : bool
+ Use batchnorm
+ """
+ def __init__(self, *, npoint: int, radii: List[float], nsamples: List[int], mlps: List[List[int]], bn: bool = True, use_xyz: bool = True, use_paconv: bool = False, voxel_size=None, args=None):
+ super().__init__()
+ assert len(radii) == len(nsamples) == len(mlps)
+ self.npoint = npoint
+ self.groupers = nn.ModuleList()
+ self.mlps = nn.ModuleList()
+ self.use_xyz = use_xyz
+ self.agg = args.get('agg', 'max')
+ self.sampling = args.get('sampling', 'fps')
+ self.voxel_size = voxel_size
+ for i in range(len(radii)):
+ radius = radii[i]
+ nsample = nsamples[i]
+ self.groupers.append(
+ pointops.QueryAndGroup(radius, nsample, use_xyz=use_xyz, return_idx=True)
+ # if npoint is not None else pointops.GroupAll(use_xyz=use_xyz)
+ )
+ mlp_spec = mlps[i]
+ if use_xyz:
+ mlp_spec[0] += 3
+ if use_paconv:
+ self.mlps.append(paconv.SharedPAConv(mlp_spec, bn=bn, config=args))
+ else:
+ self.mlps.append(block.SharedMLP(mlp_spec, bn=bn))
+
+
+class PointNet2SAModule(PointNet2SAModuleMSG):
+ r"""Pointnet set abstraction layer
+ Parameters
+ ----------
+ npoint : int
+ Number of features
+ radius : float
+ Radius of ball
+ nsample : int
+ Number of samples in the ball query
+ mlp : list
+ Spec of the pointnet_old before the global max_pool
+ bn : bool
+ Use batchnorm
+ """
+ def __init__(self, *, mlp: List[int], npoint: int = None, radius: float = None, nsample: int = None, bn: bool = True, use_xyz: bool = True, use_paconv: bool = False, args=None):
+ super().__init__(mlps=[mlp], npoint=npoint, radii=[radius], nsamples=[nsample], bn=bn, use_xyz=use_xyz, use_paconv=use_paconv, args=args)
+
+
+class PointNet2SAModuleCUDA(PointNet2SAModuleMSG):
+ r"""Pointnet set abstraction layer
+ Parameters
+ ----------
+ npoint : int
+ Number of features
+ radius : float
+ Radius of ball
+ nsample : int
+ Number of samples in the ball query
+ mlp : list
+ Spec of the pointnet_old before the global max_pool
+ bn : bool
+ Use batchnorm
+ """
+ def __init__(self, *, mlp: List[int], npoint: int = None, radius: float = None, nsample: int = None, bn: bool = True, use_xyz: bool = True, use_paconv: bool = False, args=None):
+ super().__init__(mlps=[mlp], npoint=npoint, radii=[radius], nsamples=[nsample], bn=bn, use_xyz=use_xyz, use_paconv=use_paconv, args=args)
+
+    def forward(self, xyz: torch.Tensor, features: torch.Tensor = None) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Parameters
+ ----------
+ xyz : torch.Tensor
+ (B, N0, 3) tensor of the xyz coordinates of the features
+ features : torch.Tensor
+            (B, Cin, N0) tensor of the feature descriptors
+ Returns
+ -------
+ new_xyz : torch.Tensor
+ (B, N1, 3) tensor of the new features' xyz
+ new_features : torch.Tensor
+            (B, Cout, N1) tensor of the new feature descriptors
+ """
+ new_features_list = []
+ xyz_trans = xyz.transpose(1, 2).contiguous()
+ if self.npoint is None:
+ self.npoint = xyz.shape[1] // 4
+ new_xyz_idx = pointops.furthestsampling(xyz, self.npoint) # (B, N1)
+ new_xyz = pointops.gathering(
+ xyz_trans,
+ new_xyz_idx
+ ).transpose(1, 2).contiguous() if self.npoint is not None else None # (B, N1, 3)
+ new_features = features
+ for i in range(len(self.groupers)):
+ for j in range(len(self.mlps[i])):
+ _, grouped_xyz, grouped_idx = self.groupers[i](xyz, new_xyz, new_features)
+ # (B, Cin+3, N1, K), (B, 3, N1, K), (B, N1, K)
+ if self.use_xyz and j == 0:
+ new_features = torch.cat((xyz.permute(0, 2, 1), new_features), dim=1)
+ if isinstance(self.mlps[i], paconv.SharedPAConv):
+ grouped_new_features = self.mlps[i][j]((new_features, grouped_xyz, grouped_idx))[0] # (B, Cout, N1, K)
+ else:
+ raise NotImplementedError
+ if self.agg == 'max':
+ new_features = F.max_pool2d(grouped_new_features, kernel_size=[1, grouped_new_features.size(3)]) # (B, Cout, N1, 1)
+ elif self.agg == 'sum':
+ new_features = torch.sum(grouped_new_features, dim=-1, keepdim=True) # (B, Cout, N1, 1)
+ else:
+ raise ValueError('Not implemented aggregation mode.')
+ xyz = new_xyz
+ new_features = new_features.squeeze(-1).contiguous() # (B, Cout, N1)
+ new_features_list.append(new_features)
+ return new_xyz, torch.cat(new_features_list, dim=1)
+
+
+class PointNet2FPModule(nn.Module):
+ r"""Propagates the features of one set to another
+ Parameters
+ ----------
+ mlp : list
+ Pointnet module parameters
+ bn : bool
+ Use batchnorm
+ """
+ def __init__(self, *, mlp: List[int], bn: bool = True, use_paconv=False, args=None):
+ super().__init__()
+ self.use_paconv = use_paconv
+ if self.use_paconv:
+ self.mlp = paconv.SharedPAConv(mlp, bn=bn, config=args)
+ else:
+ self.mlp = block.SharedMLP(mlp, bn=bn)
+
+ def forward(self, unknown: torch.Tensor, known: torch.Tensor, unknow_feats: torch.Tensor, known_feats: torch.Tensor) -> torch.Tensor:
+ r"""
+ Parameters
+ ----------
+ unknown : torch.Tensor
+ (B, n, 3) tensor of the xyz positions of the unknown features
+ known : torch.Tensor
+ (B, m, 3) tensor of the xyz positions of the known features
+ unknow_feats : torch.Tensor
+            (B, C1, n) tensor of the features to be propagated to
+        known_feats : torch.Tensor
+            (B, C2, m) tensor of features to be propagated
+ Returns
+ -------
+ new_features : torch.Tensor
+ (B, mlp[-1], n) tensor of the features of the unknown features
+ """
+
+ if known is not None:
+ dist, idx = pointops.nearestneighbor(unknown, known)
+ dist_recip = 1.0 / (dist + 1e-8)
+ norm = torch.sum(dist_recip, dim=2, keepdim=True)
+ weight = dist_recip / norm
+ interpolated_feats = pointops.interpolation(known_feats, idx, weight)
+ else:
+ interpolated_feats = known_feats.expand(*known_feats.size()[0:2], unknown.size(1))
+
+ if unknow_feats is not None:
+ new_features = torch.cat([interpolated_feats, unknow_feats], dim=1) # (B, C2 + C1, n)
+ else:
+ new_features = interpolated_feats
+
+ return self.mlp(new_features.unsqueeze(-1)).squeeze(-1)
+
+
+if __name__ == "__main__":
+ torch.manual_seed(1)
+ torch.cuda.manual_seed_all(1)
+ xyz = torch.randn(2, 9, 3, requires_grad=True).cuda()
+ xyz_feats = torch.randn(2, 9, 6, requires_grad=True).cuda()
+
+ test_module = PointNet2SAModuleMSG(npoint=2, radii=[5.0, 10.0], nsamples=[6, 3], mlps=[[9, 3], [9, 6]])
+ test_module.cuda()
+ print(test_module(xyz, xyz_feats))
+
+ # test_module = PointNet2FPModule(mlp=[6, 6])
+ # test_module.cuda()
+ # from torch.autograd import gradcheck
+ # inputs = (xyz, xyz, None, xyz_feats)
+ # test = gradcheck(test_module, inputs, eps=1e-6, atol=1e-4)
+ # print(test)
+
+ for _ in range(1):
+ _, new_features = test_module(xyz, xyz_feats)
+ new_features.backward(torch.cuda.FloatTensor(*new_features.size()).fill_(1))
+ print(new_features)
+ print(xyz.grad)
diff --git a/zoo/PAConv/scene_seg/model/pointnet2/pointnet2_paconv_seg.py b/zoo/PAConv/scene_seg/model/pointnet2/pointnet2_paconv_seg.py
new file mode 100755
index 0000000..01e51e1
--- /dev/null
+++ b/zoo/PAConv/scene_seg/model/pointnet2/pointnet2_paconv_seg.py
@@ -0,0 +1,125 @@
+from collections import namedtuple
+
+import torch
+import torch.nn as nn
+
+from model.pointnet2.pointnet2_paconv_modules import PointNet2FPModule
+from util import block
+
+
+class PointNet2SSGSeg(nn.Module):
+ r"""
+ PointNet2 with single-scale grouping
+    Semantic segmentation network that uses feature propagation layers
+    Parameters
+    ----------
+    k: int
+        Number of semantic classes to predict over -- size of the softmax classifier run for each point
+    c: int = 6
+        Number of input channels in the feature descriptor for each point. If the point cloud is Nx9,
+        this value should be 6, since 3 of the 9 channels are xyz and the remaining 6 are feature descriptors
+ use_xyz: bool = True
+ Whether or not to use the xyz position of a point as a feature
+ """
+
+ def __init__(self, c=3, k=13, use_xyz=True, args=None):
+ super().__init__()
+ self.nsamples = args.get('nsamples', [32, 32, 32, 32])
+ self.npoints = args.get('npoints', [None, None, None, None])
+ self.sa_mlps = args.get('sa_mlps', [[c, 32, 32, 64], [64, 64, 64, 128], [128, 128, 128, 256], [256, 256, 256, 512]])
+ self.fp_mlps = args.get('fp_mlps', [[128 + c, 128, 128, 128], [256 + 64, 256, 128], [256 + 128, 256, 256], [512 + 256, 256, 256]])
+ self.paconv = args.get('pointnet2_paconv', [True, True, True, True, False, False, False, False])
+ self.fc = args.get('fc', 128)
+
+ if args.get('cuda', False):
+ from model.pointnet2.pointnet2_paconv_modules import PointNet2SAModuleCUDA as PointNet2SAModule
+ else:
+ from model.pointnet2.pointnet2_paconv_modules import PointNet2SAModule
+
+ self.SA_modules = nn.ModuleList()
+ self.SA_modules.append(PointNet2SAModule(npoint=self.npoints[0], nsample=self.nsamples[0], mlp=self.sa_mlps[0], use_xyz=use_xyz,
+ use_paconv=self.paconv[0], args=args))
+ self.SA_modules.append(PointNet2SAModule(npoint=self.npoints[1], nsample=self.nsamples[1], mlp=self.sa_mlps[1], use_xyz=use_xyz,
+ use_paconv=self.paconv[1], args=args))
+ self.SA_modules.append(PointNet2SAModule(npoint=self.npoints[2], nsample=self.nsamples[2], mlp=self.sa_mlps[2], use_xyz=use_xyz,
+ use_paconv=self.paconv[2], args=args))
+ self.SA_modules.append(PointNet2SAModule(npoint=self.npoints[3], nsample=self.nsamples[3], mlp=self.sa_mlps[3], use_xyz=use_xyz,
+ use_paconv=self.paconv[3], args=args))
+ self.FP_modules = nn.ModuleList()
+ self.FP_modules.append(PointNet2FPModule(mlp=self.fp_mlps[0], use_paconv=self.paconv[4], args=args))
+ self.FP_modules.append(PointNet2FPModule(mlp=self.fp_mlps[1], use_paconv=self.paconv[5], args=args))
+ self.FP_modules.append(PointNet2FPModule(mlp=self.fp_mlps[2], use_paconv=self.paconv[6], args=args))
+ self.FP_modules.append(PointNet2FPModule(mlp=self.fp_mlps[3], use_paconv=self.paconv[7], args=args))
+ self.FC_layer = nn.Sequential(block.Conv2d(self.fc, self.fc, bn=True), nn.Dropout(), block.Conv2d(self.fc, k, activation=None))
+
+ def _break_up_pc(self, pc):
+ xyz = pc[..., 0:3].contiguous()
+ features = (pc[..., 3:].transpose(1, 2).contiguous() if pc.size(-1) > 3 else None)
+ return xyz, features
+
+ def forward(self, pointcloud: torch.cuda.FloatTensor):
+ r"""
+ Forward pass of the network
+ Parameters
+ ----------
+ pointcloud: Variable(torch.cuda.FloatTensor)
+ (B, N, 3 + input_channels) tensor
+            Point cloud to run predictions on.
+            Each point in the point cloud MUST
+            be formatted as (x, y, z, features...)
+ """
+ xyz, features = self._break_up_pc(pointcloud)
+ l_xyz, l_features = [xyz], [features]
+ for i in range(len(self.SA_modules)):
+ li_xyz, li_features = self.SA_modules[i](l_xyz[i], l_features[i])
+ l_xyz.append(li_xyz)
+ l_features.append(li_features)
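+        # Decoder: i runs -1, -2, -3, -4, so each FP module refines the finer
+        # level l_features[i - 1] with the coarser level i; after the loop
+        # l_features[0] holds a per-point feature map for the full cloud.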
+ for i in range(-1, -(len(self.FP_modules) + 1), -1):
+ l_features[i - 1] = self.FP_modules[i](l_xyz[i - 1], l_xyz[i], l_features[i - 1], l_features[i])
+ # return self.FC_layer(l_features[0])
+ return self.FC_layer(l_features[0].unsqueeze(-1)).squeeze(-1)
+
+
+def model_fn_decorator(criterion):
+ ModelReturn = namedtuple("ModelReturn", ['preds', 'loss', 'acc'])
+
+ def model_fn(model, data, eval=False):
+ with torch.set_grad_enabled(not eval):
+ inputs, labels = data
+ inputs = inputs.cuda(non_blocking=True)
+ labels = labels.cuda(non_blocking=True)
+ preds = model(inputs)
+ loss = criterion(preds, labels)
+ _, classes = torch.max(preds, 1)
+ acc = (classes == labels).float().sum() / labels.numel()
+ return ModelReturn(preds, loss, {"acc": acc.item(), 'loss': loss.item()})
+ return model_fn
+
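+# model_fn_decorator closes over the criterion and returns a step function, so
+# the training loop stays loss-agnostic. A usage sketch:
+#   model_fn = model_fn_decorator(nn.CrossEntropyLoss())
+#   preds, loss, meta = model_fn(model, (inputs, labels))            # train step
+#   preds, loss, meta = model_fn(model, (inputs, labels), eval=True) # no grads
+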
+
+if __name__ == "__main__":
+ import torch.optim as optim
+ B, N, C, K = 2, 4096, 3, 13
+ inputs = torch.randn(B, N, 6)#.cuda()
+ labels = torch.randint(0, 3, (B, N))#.cuda()
+
+    # args={} falls back to the args.get(...) defaults; the model must be on the
+    # GPU because model_fn moves the inputs to CUDA.
+    model = PointNet2SSGSeg(c=C, k=K, args={}).cuda()
+ optimizer = optim.SGD(model.parameters(), lr=5e-2, momentum=0.9, weight_decay=1e-4)
+ print("Testing SSGCls with xyz")
+ model_fn = model_fn_decorator(nn.CrossEntropyLoss())
+ for _ in range(5):
+ optimizer.zero_grad()
+ _, loss, _ = model_fn(model, (inputs, labels))
+ loss.backward()
+ print(loss.item())
+ optimizer.step()
+
+    model = PointNet2SSGSeg(c=C, k=K, use_xyz=False, args={}).cuda()
+ optimizer = optim.SGD(model.parameters(), lr=5e-2, momentum=0.9, weight_decay=1e-4)
+ print("Testing SSGCls without xyz")
+ model_fn = model_fn_decorator(nn.CrossEntropyLoss())
+ for _ in range(5):
+ optimizer.zero_grad()
+ _, loss, _ = model_fn(model, (inputs, labels))
+ loss.backward()
+ print(loss.item())
+ optimizer.step()
diff --git a/zoo/PAConv/scene_seg/model/pointnet2/pointnet2_seg.py b/zoo/PAConv/scene_seg/model/pointnet2/pointnet2_seg.py
new file mode 100755
index 0000000..62a6b3e
--- /dev/null
+++ b/zoo/PAConv/scene_seg/model/pointnet2/pointnet2_seg.py
@@ -0,0 +1,170 @@
+from collections import namedtuple
+
+import torch
+import torch.nn as nn
+
+from model.pointnet2.pointnet2_modules import PointNet2SAModule, PointNet2SAModuleMSG, PointNet2FPModule
+from util import block
+
+
+class PointNet2SSGSeg(nn.Module):
+ r"""
+ PointNet2 with single-scale grouping
+    Semantic segmentation network that uses feature propagation layers
+    Parameters
+    ----------
+    k: int
+        Number of semantic classes to predict over -- size of the softmax classifier run for each point
+    c: int = 6
+        Number of input channels in the feature descriptor for each point. If the point cloud is Nx9,
+        this value should be 6, since 3 of the 9 channels are xyz and the remaining 6 are feature descriptors
+ use_xyz: bool = True
+ Whether or not to use the xyz position of a point as a feature
+ """
+
+ def __init__(self, c=3, k=13, use_xyz=True, args=None):
+ super().__init__()
+ self.SA_modules = nn.ModuleList()
+ self.SA_modules.append(PointNet2SAModule(npoint=1024, nsample=32, mlp=[c, 32, 32, 64], use_xyz=use_xyz))
+ self.SA_modules.append(PointNet2SAModule(npoint=256, nsample=32, mlp=[64, 64, 64, 128], use_xyz=use_xyz))
+ self.SA_modules.append(PointNet2SAModule(npoint=64, nsample=32, mlp=[128, 128, 128, 256], use_xyz=use_xyz))
+ self.SA_modules.append(PointNet2SAModule(npoint=16, nsample=32, mlp=[256, 256, 256, 512], use_xyz=use_xyz))
+ self.FP_modules = nn.ModuleList()
+ self.FP_modules.append(PointNet2FPModule(mlp=[128 + c, 128, 128, 128]))
+ self.FP_modules.append(PointNet2FPModule(mlp=[256 + 64, 256, 128]))
+ self.FP_modules.append(PointNet2FPModule(mlp=[256 + 128, 256, 256]))
+ self.FP_modules.append(PointNet2FPModule(mlp=[512 + 256, 256, 256]))
+ self.FC_layer = nn.Sequential(block.Conv2d(128, 128, bn=True), nn.Dropout(), block.Conv2d(128, k, activation=None))
+
+ def _break_up_pc(self, pc):
+ xyz = pc[..., 0:3].contiguous()
+ features = (pc[..., 3:].transpose(1, 2).contiguous() if pc.size(-1) > 3 else None)
+ return xyz, features
+
+ def forward(self, pointcloud: torch.cuda.FloatTensor):
+ r"""
+ Forward pass of the network
+ Parameters
+ ----------
+ pointcloud: Variable(torch.cuda.FloatTensor)
+ (B, N, 3 + input_channels) tensor
+            Point cloud to run predictions on.
+            Each point in the point cloud MUST
+            be formatted as (x, y, z, features...)
+ """
+ xyz, features = self._break_up_pc(pointcloud)
+ l_xyz, l_features = [xyz], [features]
+ for i in range(len(self.SA_modules)):
+ li_xyz, li_features = self.SA_modules[i](l_xyz[i], l_features[i])
+ l_xyz.append(li_xyz)
+ l_features.append(li_features)
+ for i in range(-1, -(len(self.FP_modules) + 1), -1):
+ l_features[i - 1] = self.FP_modules[i](l_xyz[i - 1], l_xyz[i], l_features[i - 1], l_features[i])
+ # return self.FC_layer(l_features[0])
+ return self.FC_layer(l_features[0].unsqueeze(-1)).squeeze(-1)
+
+
+class PointNet2MSGSeg(PointNet2SSGSeg):
+ r"""
+ PointNet2 with multi-scale grouping
+    Semantic segmentation network that uses feature propagation layers
+    Parameters
+    ----------
+    k: int
+        Number of semantic classes to predict over -- size of the softmax classifier run for each point
+    c: int = 6
+        Number of input channels in the feature descriptor for each point. If the point cloud is Nx9,
+        this value should be 6, since 3 of the 9 channels are xyz and the remaining 6 are feature descriptors
+ use_xyz: bool = True
+ Whether or not to use the xyz position of a point as a feature
+ """
+
+ def __init__(self, k, c=6, use_xyz=True):
+ super().__init__()
+ self.SA_modules = nn.ModuleList()
+ c_in = c
+ self.SA_modules.append(PointNet2SAModuleMSG(npoint=1024, radii=[0.05, 0.1], nsamples=[16, 32], mlps=[[c_in, 16, 16, 32], [c_in, 32, 32, 64]], use_xyz=use_xyz ))
+ c_out_0 = 32 + 64
+ c_in = c_out_0
+ self.SA_modules.append(PointNet2SAModuleMSG(npoint=256, radii=[0.1, 0.2], nsamples=[16, 32], mlps=[[c_in, 64, 64, 128], [c_in, 64, 96, 128]], use_xyz=use_xyz))
+ c_out_1 = 128 + 128
+ c_in = c_out_1
+ self.SA_modules.append(PointNet2SAModuleMSG(npoint=64, radii=[0.2, 0.4], nsamples=[16, 32], mlps=[[c_in, 128, 196, 256], [c_in, 128, 196, 256]], use_xyz=use_xyz))
+ c_out_2 = 256 + 256
+ c_in = c_out_2
+ self.SA_modules.append(PointNet2SAModuleMSG(npoint=16, radii=[0.4, 0.8], nsamples=[16, 32], mlps=[[c_in, 256, 256, 512], [c_in, 256, 384, 512]], use_xyz=use_xyz))
+ c_out_3 = 512 + 512
+ self.FP_modules = nn.ModuleList()
+ self.FP_modules.append(PointNet2FPModule(mlp=[256 + c, 128, 128]))
+ self.FP_modules.append(PointNet2FPModule(mlp=[512 + c_out_0, 256, 256]))
+ self.FP_modules.append(PointNet2FPModule(mlp=[512 + c_out_1, 512, 512]))
+ self.FP_modules.append(PointNet2FPModule(mlp=[c_out_3 + c_out_2, 512, 512]))
+ self.FC_layer = nn.Sequential(block.Conv2d(128, 128, bn=True), nn.Dropout(), block.Conv2d(128, k, activation=None))
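+        # MSG channel bookkeeping: each SA stage concatenates its two scale
+        # branches, e.g. c_out_0 = 32 + 64 because the branch MLPs end in 32
+        # and 64 channels respectively.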
+
+
+def model_fn_decorator(criterion):
+ ModelReturn = namedtuple("ModelReturn", ['preds', 'loss', 'acc'])
+
+ def model_fn(model, data, eval=False):
+ with torch.set_grad_enabled(not eval):
+ inputs, labels = data
+ inputs = inputs.cuda(non_blocking=True)
+ labels = labels.cuda(non_blocking=True)
+ preds = model(inputs)
+ loss = criterion(preds, labels)
+ _, classes = torch.max(preds, 1)
+ acc = (classes == labels).float().sum() / labels.numel()
+ return ModelReturn(preds, loss, {"acc": acc.item(), 'loss': loss.item()})
+ return model_fn
+
+
+if __name__ == "__main__":
+ import torch.optim as optim
+ B, N, C, K = 2, 4096, 3, 13
+ inputs = torch.randn(B, N, 6)#.cuda()
+ labels = torch.randint(0, 3, (B, N))#.cuda()
+
+    model = PointNet2SSGSeg(c=C, k=K).cuda()  # model_fn moves inputs to CUDA, so keep the model on GPU
+ optimizer = optim.SGD(model.parameters(), lr=5e-2, momentum=0.9, weight_decay=1e-4)
+ print("Testing SSGCls with xyz")
+ model_fn = model_fn_decorator(nn.CrossEntropyLoss())
+ for _ in range(5):
+ optimizer.zero_grad()
+ _, loss, _ = model_fn(model, (inputs, labels))
+ loss.backward()
+ print(loss.item())
+ optimizer.step()
+
+ model = PointNet2SSGSeg(c=C, k=K, use_xyz=False).cuda()
+ optimizer = optim.SGD(model.parameters(), lr=5e-2, momentum=0.9, weight_decay=1e-4)
+ print("Testing SSGCls without xyz")
+ model_fn = model_fn_decorator(nn.CrossEntropyLoss())
+ for _ in range(5):
+ optimizer.zero_grad()
+ _, loss, _ = model_fn(model, (inputs, labels))
+ loss.backward()
+ print(loss.item())
+ optimizer.step()
+
+ model = PointNet2MSGSeg(c=C, k=K).cuda()
+ optimizer = optim.SGD(model.parameters(), lr=5e-2, momentum=0.9, weight_decay=1e-4)
+ print("Testing MSGCls with xyz")
+ model_fn = model_fn_decorator(nn.CrossEntropyLoss())
+ for _ in range(5):
+ optimizer.zero_grad()
+ _, loss, _ = model_fn(model, (inputs, labels))
+ loss.backward()
+ print(loss.item())
+ optimizer.step()
+
+ model = PointNet2MSGSeg(c=C, k=K, use_xyz=False).cuda()
+ optimizer = optim.SGD(model.parameters(), lr=5e-2, momentum=0.9, weight_decay=1e-4)
+ print("Testing MSGCls without xyz")
+ model_fn = model_fn_decorator(nn.CrossEntropyLoss())
+ for _ in range(5):
+ optimizer.zero_grad()
+ _, loss, _ = model_fn(model, (inputs, labels))
+ loss.backward()
+ print(loss.item())
+ optimizer.step()
+
diff --git a/zoo/PAConv/scene_seg/tool/test.sh b/zoo/PAConv/scene_seg/tool/test.sh
new file mode 100755
index 0000000..2814e39
--- /dev/null
+++ b/zoo/PAConv/scene_seg/tool/test.sh
@@ -0,0 +1,18 @@
+#!/bin/sh
+export PYTHONPATH=./
+
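+# Usage: sh tool/test.sh <dataset> <exp_name>
+# e.g. `sh tool/test.sh s3dis pointnet2_paconv`, assuming a matching config
+# exists at config/s3dis/s3dis_pointnet2_paconv.yaml.
+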
+PYTHON=python
+dataset=$1
+exp_name=$2
+exp_dir=exp/${dataset}/${exp_name}
+model_dir=${exp_dir}/model
+config=config/${dataset}/${dataset}_${exp_name}.yaml
+
+mkdir -p ${model_dir}
+now=$(date +"%Y%m%d_%H%M%S")
+
+if [ ${dataset} = 's3dis' ]
+then
+ cp tool/test.sh tool/test_s3dis.py ${config} ${exp_dir}
+ $PYTHON tool/test_s3dis.py --config=${config} 2>&1 | tee ${model_dir}/test-$now.log
+fi
diff --git a/zoo/PAConv/scene_seg/tool/test_s3dis.py b/zoo/PAConv/scene_seg/tool/test_s3dis.py
new file mode 100644
index 0000000..969fcad
--- /dev/null
+++ b/zoo/PAConv/scene_seg/tool/test_s3dis.py
@@ -0,0 +1,198 @@
+import os
+import time
+import random
+import numpy as np
+import logging
+import pickle
+import argparse
+
+import torch
+import torch.nn as nn
+import torch.nn.parallel
+import torch.optim
+import torch.utils.data
+
+from util.util import AverageMeter, intersectionAndUnion, check_makedirs, get_parser, get_logger
+
+random.seed(123)
+np.random.seed(123)
+
+
+def main():
+ global args, logger
+ args = get_parser()
+ logger = get_logger()
+ logger.info(args)
+ assert args.classes > 1
+ os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(str(x) for x in args.test_gpu)
+ logger.info("=> creating model ...")
+ logger.info("Classes: {}".format(args.classes))
+
+ if args.arch == 'pointnet_seg':
+ from model.pointnet.pointnet import PointNetSeg as Model
+ elif args.arch == 'pointnet2_seg':
+ from model.pointnet2.pointnet2_seg import PointNet2SSGSeg as Model
+ elif args.arch == 'pointnet2_paconv_seg':
+ from model.pointnet2.pointnet2_paconv_seg import PointNet2SSGSeg as Model
+ else:
+        raise Exception('architecture {} not supported yet'.format(args.arch))
+ model = Model(c=args.fea_dim, k=args.classes, use_xyz=args.use_xyz, args=args)
+ model = torch.nn.DataParallel(model.cuda())
+ logger.info(model)
+ criterion = nn.CrossEntropyLoss(ignore_index=args.ignore_label).cuda()
+ names = [line.rstrip('\n') for line in open(args.names_path)]
+ if os.path.isfile(args.model_path):
+ logger.info("=> loading checkpoint '{}'".format(args.model_path))
+ checkpoint = torch.load(args.model_path)
+ model.load_state_dict(checkpoint['state_dict'], strict=True)
+ logger.info("=> loaded checkpoint '{}'".format(args.model_path))
+ else:
+ raise RuntimeError("=> no checkpoint found at '{}'".format(args.model_path))
+ test(model, criterion, names)
+
+
+def data_prepare(room_path):
+ room_data = np.load(room_path)
+ points, labels = room_data[:, 0:6], room_data[:, 6] # xyzrgb, N*6; l, N
+ coord_min, coord_max = np.amin(points, axis=0)[:3], np.amax(points, axis=0)[:3]
+ stride = args.block_size * args.stride_rate
+ grid_x = int(np.ceil(float(coord_max[0] - coord_min[0] - args.block_size) / stride) + 1)
+ grid_y = int(np.ceil(float(coord_max[1] - coord_min[1] - args.block_size) / stride) + 1)
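+    # Sliding-window split: e.g. with block_size=1.0 and stride_rate=0.5 the
+    # stride is 0.5, so a room spanning 4.0 along x gives
+    # grid_x = ceil((4.0 - 1.0) / 0.5) + 1 = 7 overlapping blocks.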
+ data_room, label_room, index_room = np.array([]), np.array([]), np.array([])
+ for index_y in range(0, grid_y):
+ for index_x in range(0, grid_x):
+ s_x = coord_min[0] + index_x * stride
+ e_x = min(s_x + args.block_size, coord_max[0])
+ s_x = e_x - args.block_size
+ s_y = coord_min[1] + index_y * stride
+ e_y = min(s_y + args.block_size, coord_max[1])
+ s_y = e_y - args.block_size
+ point_idxs = np.where((points[:, 0] >= s_x - 1e-8) & (points[:, 0] <= e_x + 1e-8) & (points[:, 1] >= s_y - 1e-8) & (points[:, 1] <= e_y + 1e-8))[0]
+ if point_idxs.size == 0:
+ continue
+ num_batch = int(np.ceil(point_idxs.size / args.num_point))
+ point_size = int(num_batch * args.num_point)
+            replace = (point_size - point_idxs.size) > point_idxs.size
+ point_idxs_repeat = np.random.choice(point_idxs, point_size - point_idxs.size, replace=replace)
+ point_idxs = np.concatenate((point_idxs, point_idxs_repeat))
+ np.random.shuffle(point_idxs)
+ data_batch = points[point_idxs, :]
+ normlized_xyz = np.zeros((point_size, 3))
+ normlized_xyz[:, 0] = data_batch[:, 0] / coord_max[0]
+ normlized_xyz[:, 1] = data_batch[:, 1] / coord_max[1]
+ normlized_xyz[:, 2] = data_batch[:, 2] / coord_max[2]
+ data_batch[:, 0] = data_batch[:, 0] - (s_x + args.block_size / 2.0)
+ data_batch[:, 1] = data_batch[:, 1] - (s_y + args.block_size / 2.0)
+ data_batch[:, 3:6] /= 255.0
+
+            fea_dim = args.get('fea_dim', 6)
+            if fea_dim == 6:  # append normalized room coordinates; fea_dim == 3 keeps raw xyzrgb
+                data_batch = np.concatenate((data_batch, normlized_xyz), axis=-1)
+ label_batch = labels[point_idxs]
+ data_room = np.vstack([data_room, data_batch]) if data_room.size else data_batch
+ label_room = np.hstack([label_room, label_batch]) if label_room.size else label_batch
+ index_room = np.hstack([index_room, point_idxs]) if index_room.size else point_idxs
+ assert np.unique(index_room).size == labels.size
+ return data_room, label_room, index_room, labels
+
+
+def test(model, criterion, names):
+ logger.info('>>>>>>>>>>>>>>>> Start Evaluation >>>>>>>>>>>>>>>>')
+ batch_time = AverageMeter()
+ intersection_meter = AverageMeter()
+ union_meter = AverageMeter()
+ target_meter = AverageMeter()
+
+ model.eval()
+ rooms = sorted(os.listdir(args.train_full_folder))
+ rooms_split = [room for room in rooms if 'Area_{}'.format(args.test_area) in room]
+ gt_all, pred_all = np.array([]), np.array([])
+ check_makedirs(args.save_folder)
+ pred_save, gt_save = [], []
+ for idx, room_name in enumerate(rooms_split):
+ data_room, label_room, index_room, gt = data_prepare(os.path.join(args.train_full_folder, room_name))
+ batch_point = args.num_point * args.test_batch_size
+ batch_num = int(np.ceil(label_room.size / batch_point))
+ end = time.time()
+ output_room = np.array([])
+ for i in range(batch_num):
+ s_i, e_i = i * batch_point, min((i + 1) * batch_point, label_room.size)
+ input, target, index = data_room[s_i:e_i, :], label_room[s_i:e_i], index_room[s_i:e_i]
+ input = torch.from_numpy(input).float().view(-1, args.num_point, input.shape[1])
+ target = torch.from_numpy(target).long().view(-1, args.num_point)
+ with torch.no_grad():
+ output = model(input.cuda())
+ loss = criterion(output, target.cuda()) # for reference
+ output = output.transpose(1, 2).contiguous().view(-1, args.classes).data.cpu().numpy()
+ pred = np.argmax(output, axis=1)
+ intersection, union, target = intersectionAndUnion(pred, target.view(-1).data.cpu().numpy(), args.classes, args.ignore_label)
+ accuracy = sum(intersection) / (sum(target) + 1e-10)
+ output_room = np.vstack([output_room, output]) if output_room.size else output
+ batch_time.update(time.time() - end)
+ end = time.time()
+ if ((i + 1) % args.print_freq == 0) or (i + 1 == batch_num):
+ logger.info('Test: [{}/{}]-[{}/{}] '
+ 'Batch {batch_time.val:.3f} ({batch_time.avg:.3f}) '
+ 'Loss {loss:.4f} '
+ 'Accuracy {accuracy:.4f} '
+ 'Points {gt.size}.'.format(idx + 1, len(rooms_split),
+ i + 1, batch_num,
+ batch_time=batch_time,
+ loss=loss,
+ accuracy=accuracy,
+ gt=gt))
+ '''
+ unq, unq_inv, unq_cnt = np.unique(index_room, return_inverse=True, return_counts=True)
+ index_array = np.split(np.argsort(unq_inv), np.cumsum(unq_cnt[:-1]))
+ output_room = np.vstack([output_room, np.zeros((1, args.classes))])
+ index_array_fill = np.array(list(itertools.zip_longest(*index_array, fillvalue=output_room.shape[0] - 1))).T
+ pred = output_room[index_array_fill].sum(1)
+ pred = np.argmax(pred, axis=1)
+ '''
+ pred = np.zeros((gt.size, args.classes))
+ for j in range(len(index_room)):
+ pred[index_room[j]] += output_room[j]
+ pred = np.argmax(pred, axis=1)
+
+ # calculation 1: add per room predictions
+ intersection, union, target = intersectionAndUnion(pred, gt, args.classes, args.ignore_label)
+ intersection_meter.update(intersection)
+ union_meter.update(union)
+ target_meter.update(target)
+ # calculation 2
+ pred_all = np.hstack([pred_all, pred]) if pred_all.size else pred
+ gt_all = np.hstack([gt_all, gt]) if gt_all.size else gt
+ pred_save.append(pred), gt_save.append(gt)
+
+ with open(os.path.join(args.save_folder, "pred_{}.pickle".format(args.test_area)), 'wb') as handle:
+ pickle.dump({'pred': pred_save}, handle, protocol=pickle.HIGHEST_PROTOCOL)
+ with open(os.path.join(args.save_folder, "gt_{}.pickle".format(args.test_area)), 'wb') as handle:
+ pickle.dump({'gt': gt_save}, handle, protocol=pickle.HIGHEST_PROTOCOL)
+
+ # calculation 1
+ iou_class = intersection_meter.sum / (union_meter.sum + 1e-10)
+ accuracy_class = intersection_meter.sum / (target_meter.sum + 1e-10)
+ mIoU1 = np.mean(iou_class)
+ mAcc1 = np.mean(accuracy_class)
+ allAcc1 = sum(intersection_meter.sum) / (sum(target_meter.sum) + 1e-10)
+
+ # calculation 2
+ intersection, union, target = intersectionAndUnion(pred_all, gt_all, args.classes, args.ignore_label)
+ iou_class = intersection / (union + 1e-10)
+ accuracy_class = intersection / (target + 1e-10)
+ mIoU = np.mean(iou_class)
+ mAcc = np.mean(accuracy_class)
+ allAcc = sum(intersection) / (sum(target) + 1e-10)
+ logger.info('Val result: mIoU/mAcc/allAcc {:.4f}/{:.4f}/{:.4f}.'.format(mIoU, mAcc, allAcc))
+ logger.info('Val1 result: mIoU/mAcc/allAcc {:.4f}/{:.4f}/{:.4f}.'.format(mIoU1, mAcc1, allAcc1))
+
+ for i in range(args.classes):
+ logger.info('Class_{} Result: iou/accuracy {:.4f}/{:.4f}, name: {}.'.format(i, iou_class[i], accuracy_class[i], names[i]))
+ logger.info('<<<<<<<<<<<<<<<<< End Evaluation <<<<<<<<<<<<<<<<<')
+ return mIoU, mAcc, allAcc, pred_all
+
+
+if __name__ == '__main__':
+ main()
diff --git a/zoo/PAConv/scene_seg/tool/test_s3dis_6fold.py b/zoo/PAConv/scene_seg/tool/test_s3dis_6fold.py
new file mode 100644
index 0000000..ef368aa
--- /dev/null
+++ b/zoo/PAConv/scene_seg/tool/test_s3dis_6fold.py
@@ -0,0 +1,91 @@
+import os
+import numpy as np
+import pickle5 as pickle
+import logging
+
+from util.util import AverageMeter, intersectionAndUnion, check_makedirs
+
+
+def get_logger():
+ logger_name = "main-logger"
+ logger = logging.getLogger(logger_name)
+ logger.setLevel(logging.INFO)
+ handler = logging.StreamHandler()
+ fmt = "[%(asctime)s %(levelname)s %(filename)s line %(lineno)d %(process)d] %(message)s"
+ handler.setFormatter(logging.Formatter(fmt))
+ logger.addHandler(handler)
+ return logger
+
+
+def get_color(i):
+    ''' Parse a 24-bit integer as an RGB color, i.e. convert to base 256.
+    Args:
+        i: An int. The first 24 bits will be interpreted as a color.
+            Negative values will not work properly.
+    Returns:
+        color: A color s.t. get_index( get_color( i ) ) = i
+    '''
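+    # e.g. get_color(0xFF8040) -> (255, 128, 64): r is the most significant byte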
+ b = (i) % 256 # least significant byte
+ g = (i >> 8) % 256
+ r = (i >> 16) % 256 # most significant byte
+ return r, g, b
+
+
+def main():
+ global logger
+ logger = get_logger()
+
+ classes = 13
+ color_map = np.zeros((classes, 3))
+ names = [line.rstrip('\n') for line in open('data/s3dis/s3dis_names.txt')]
+ for i in range(classes):
+ color_map[i, :] = get_color(i)
+ data_root = 'dataset/s3dis/trainval_fullarea'
+ data_list = sorted(os.listdir(data_root))
+ data_list = [item[:-4] for item in data_list if 'Area_' in item]
+ intersection_meter, union_meter, target_meter = AverageMeter(), AverageMeter(), AverageMeter()
+
+ logger.info('<<<<<<<<<<<<<<<<< Start Evaluation <<<<<<<<<<<<<<<<<')
+ test_area = [1, 2, 3, 4, 5, 6]
+ for i in range(len(test_area)):
+ # result_path = os.path.join('exp/s3dis', exp_list[test_area[i]-1], 'result')
+        result_path = 'exp/s3dis/6-fold'  # where all result files are saved
+ # pred_save_folder = os.path.join(result_path, 'best_visual/pred')
+ # label_save_folder = os.path.join(result_path, 'best_visual/label')
+ # image_save_folder = os.path.join(result_path, 'best_visual/image')
+ # check_makedirs(pred_save_folder); check_makedirs(label_save_folder); check_makedirs(image_save_folder)
+ with open(os.path.join(result_path, 'pred_{}'.format(test_area[i]) + '.pickle'), 'rb') as handle:
+ pred = pickle.load(handle)['pred']
+ with open(os.path.join(result_path, 'gt_{}'.format(test_area[i]) + '.pickle'), 'rb') as handle:
+ label = pickle.load(handle)['gt']
+ data_split = [item for item in data_list if 'Area_{}'.format(test_area[i]) in item]
+ assert len(pred) == len(label) == len(data_split)
+ for j in range(len(data_split)):
+ print('processing [{}/{}]-[{}/{}]'.format(i+1, len(test_area), j+1, len(data_split)))
+ # data_name = data_split[j]
+ # data = np.load(os.path.join(data_root, data_name + '.npy'))
+ # coord, feat = data[:, :3], data[:, 3:6]
+ pred_j, label_j = pred[j].astype(np.uint8), label[j].astype(np.uint8)
+ # pred_j_color, label_j_color = color_map[pred_j, :], color_map[label_j, :]
+ # vis_util.write_ply_color(coord, pred_j, os.path.join(pred_save_folder, data_name +'.obj'))
+ # vis_util.write_ply_color(coord, label_j, os.path.join(label_save_folder, data_name + '.obj'))
+ # vis_util.write_ply_rgb(coord, feat, os.path.join(image_save_folder, data_name + '.obj'))
+ intersection, union, target = intersectionAndUnion(pred_j, label_j, classes, ignore_index=255)
+ intersection_meter.update(intersection)
+ union_meter.update(union)
+ target_meter.update(target)
+
+ iou_class = intersection_meter.sum / (union_meter.sum + 1e-10)
+ accuracy_class = intersection_meter.sum / (target_meter.sum + 1e-10)
+ mIoU = np.mean(iou_class)
+ mAcc = np.mean(accuracy_class)
+ allAcc = sum(intersection_meter.sum) / (sum(target_meter.sum) + 1e-10)
+ logger.info('Val result: mIoU/mAcc/allAcc {:.4f}/{:.4f}/{:.4f}.'.format(mIoU, mAcc, allAcc))
+
+ for i in range(classes):
+ logger.info('Class_{} Result: iou/accuracy {:.4f}/{:.4f}, name: {}.'.format(i, iou_class[i], accuracy_class[i], names[i]))
+ logger.info('<<<<<<<<<<<<<<<<< End Evaluation <<<<<<<<<<<<<<<<<')
+
+
+if __name__ == '__main__':
+ main()
diff --git a/zoo/PAConv/scene_seg/tool/train.py b/zoo/PAConv/scene_seg/tool/train.py
new file mode 100755
index 0000000..b4b64af
--- /dev/null
+++ b/zoo/PAConv/scene_seg/tool/train.py
@@ -0,0 +1,318 @@
+import os
+import time
+import random
+import numpy as np
+import subprocess
+
+import torch
+import torch.backends.cudnn as cudnn
+import torch.nn as nn
+import torch.nn.parallel
+import torch.optim
+import torch.utils.data
+import torch.optim.lr_scheduler as lr_scheduler
+from tensorboardX import SummaryWriter
+
+from util import dataset, transform
+from util.s3dis import S3DIS
+from util.util import AverageMeter, intersectionAndUnionGPU, get_logger, get_parser
+from model.pointnet2.paconv import PAConv
+
+
+def worker_init_fn(worker_id):
+ random.seed(args.manual_seed + worker_id)
+
+
+def init():
+ global args, logger, writer
+ args = get_parser()
+ logger = get_logger()
+ writer = SummaryWriter(args.save_path)
+ if args.train_gpu is not None:
+ os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(str(x) for x in args.train_gpu)
+ if args.manual_seed is not None:
+ cudnn.benchmark = False
+ cudnn.deterministic = True
+ random.seed(args.manual_seed)
+ np.random.seed(args.manual_seed)
+ torch.manual_seed(args.manual_seed)
+ torch.cuda.manual_seed(args.manual_seed)
+ torch.cuda.manual_seed_all(args.manual_seed)
+ if args.train_gpu is not None and len(args.train_gpu) == 1:
+ args.sync_bn = False
+ logger.info(args)
+
+
+def get_git_commit_id():
+ if not os.path.exists('.git'):
+ return '0000000'
+ cmd_out = subprocess.run(['git', 'rev-parse', 'HEAD'], stdout=subprocess.PIPE)
+ git_commit_id = cmd_out.stdout.decode('utf-8')[:7]
+ return git_commit_id
+
+
+def main():
+ init()
+ if args.arch == 'pointnet_seg':
+ from model.pointnet.pointnet import PointNetSeg as Model
+ elif args.arch == 'pointnet2_seg':
+ from model.pointnet2.pointnet2_seg import PointNet2SSGSeg as Model
+ elif args.arch == 'pointnet2_paconv_seg':
+ from model.pointnet2.pointnet2_paconv_seg import PointNet2SSGSeg as Model
+ else:
+        raise Exception('architecture {} not supported yet'.format(args.arch))
+ model = Model(c=args.fea_dim, k=args.classes, use_xyz=args.use_xyz, args=args)
+
+ best_mIoU = 0.0
+
+ if args.sync_bn:
+ from util.util import convert_to_syncbn
+ convert_to_syncbn(model)
+ criterion = nn.CrossEntropyLoss(ignore_index=args.ignore_label).cuda()
+ optimizer = torch.optim.SGD(model.parameters(), lr=args.base_lr, momentum=args.momentum, weight_decay=args.weight_decay)
+ if args.get('lr_multidecay', False):
+ scheduler = lr_scheduler.MultiStepLR(optimizer, milestones=[int(args.epochs * 0.6), int(args.epochs * 0.8)], gamma=args.multiplier)
+ else:
+ scheduler = lr_scheduler.StepLR(optimizer, step_size=args.step_epoch, gamma=args.multiplier)
+ logger.info("=> creating model ...")
+ logger.info("Classes: {}".format(args.classes))
+ logger.info(model)
+ model = torch.nn.DataParallel(model.cuda())
+ if args.sync_bn:
+ from lib.sync_bn import patch_replication_callback
+ patch_replication_callback(model)
+ if args.weight:
+ if os.path.isfile(args.weight):
+ logger.info("=> loading weight '{}'".format(args.weight))
+ checkpoint = torch.load(args.weight)
+ model.load_state_dict(checkpoint['state_dict'])
+ logger.info("=> loaded weight '{}'".format(args.weight))
+ else:
+ logger.info("=> no weight found at '{}'".format(args.weight))
+
+ if args.resume:
+ if os.path.isfile(args.resume):
+ logger.info("=> loading checkpoint '{}'".format(args.resume))
+ checkpoint = torch.load(args.resume, map_location=lambda storage, loc: storage.cuda())
+ args.start_epoch = checkpoint['epoch']
+ model.load_state_dict(checkpoint['state_dict'])
+ optimizer.load_state_dict(checkpoint['optimizer'])
+ scheduler.load_state_dict(checkpoint['scheduler'])
+ try:
+ best_mIoU = checkpoint['val_mIoU']
+ except Exception:
+ pass
+ logger.info("=> loaded checkpoint '{}' (epoch {})".format(args.resume, checkpoint['epoch']))
+ else:
+ logger.info("=> no checkpoint found at '{}'".format(args.resume))
+
+ if args.get('no_transformation', True):
+ train_transform = None
+ else:
+ train_transform = transform.Compose([transform.RandomRotate(along_z=args.get('rotate_along_z', True)),
+ transform.RandomScale(scale_low=args.get('scale_low', 0.8),
+ scale_high=args.get('scale_high', 1.2)),
+ transform.RandomJitter(sigma=args.get('jitter_sigma', 0.01),
+ clip=args.get('jitter_clip', 0.05)),
+ transform.RandomDropColor(color_augment=args.get('color_augment', 0.0))])
+ logger.info(train_transform)
+ if args.data_name == 's3dis':
+ train_data = S3DIS(split='train', data_root=args.train_full_folder, num_point=args.num_point,
+ test_area=args.test_area, block_size=args.block_size, sample_rate=args.sample_rate, transform=train_transform,
+ fea_dim=args.get('fea_dim', 6), shuffle_idx=args.get('shuffle_idx', False))
+ else:
+ raise ValueError('{} dataset not supported.'.format(args.data_name))
+ train_loader = torch.utils.data.DataLoader(train_data, batch_size=args.train_batch_size, shuffle=True, num_workers=args.train_workers, pin_memory=True, drop_last=True)
+
+ val_loader = None
+ if args.evaluate:
+ val_transform = transform.Compose([transform.ToTensor()])
+ if args.data_name == 's3dis':
+ val_data = dataset.PointData(split='val', data_root=args.data_root, data_list=args.val_list, transform=val_transform,
+ norm_as_feat=args.get('norm_as_feat', True), fea_dim=args.get('fea_dim', 6))
+ else:
+ raise ValueError('{} dataset not supported.'.format(args.data_name))
+
+ val_loader = torch.utils.data.DataLoader(val_data, batch_size=args.train_batch_size_val, shuffle=False, num_workers=args.train_workers, pin_memory=True)
+
+ for epoch in range(args.start_epoch, args.epochs):
+ loss_train, mIoU_train, mAcc_train, allAcc_train = train(train_loader, model, criterion, optimizer, epoch, args.get('correlation_loss', False))
+ epoch_log = epoch + 1
+ writer.add_scalar('loss_train', loss_train, epoch_log)
+ writer.add_scalar('mIoU_train', mIoU_train, epoch_log)
+ writer.add_scalar('mAcc_train', mAcc_train, epoch_log)
+ writer.add_scalar('allAcc_train', allAcc_train, epoch_log)
+
+ if epoch_log % args.save_freq == 0:
+ filename = args.save_path + '/train_epoch_' + str(epoch_log) + '.pth'
+ logger.info('Saving checkpoint to: ' + filename)
+ torch.save({'epoch': epoch_log, 'state_dict': model.state_dict(),
+ 'optimizer': optimizer.state_dict(), 'scheduler': scheduler.state_dict(),
+ 'commit_id': get_git_commit_id()}, filename)
+ if epoch_log / args.save_freq > 2:
+ try:
+ deletename = args.save_path + '/train_epoch_' + str(epoch_log - args.save_freq * 2) + '.pth'
+ os.remove(deletename)
+ except Exception:
+ logger.info('{} Not found.'.format(deletename))
+
+ if args.evaluate and epoch_log % args.get('eval_freq', 1) == 0:
+ loss_val, mIoU_val, mAcc_val, allAcc_val = validate(val_loader, model, criterion)
+ writer.add_scalar('loss_val', loss_val, epoch_log)
+ writer.add_scalar('mIoU_val', mIoU_val, epoch_log)
+ writer.add_scalar('mAcc_val', mAcc_val, epoch_log)
+ writer.add_scalar('allAcc_val', allAcc_val, epoch_log)
+ if mIoU_val > best_mIoU:
+ best_mIoU = mIoU_val
+ filename = args.save_path + '/best_train.pth'
+ logger.info('Best Model Saving checkpoint to: ' + filename)
+ torch.save(
+ {'epoch': epoch_log, 'state_dict': model.state_dict(),
+ 'optimizer': optimizer.state_dict(), 'scheduler': scheduler.state_dict(),
+ 'val_mIoU': best_mIoU, 'commit_id': get_git_commit_id()}, filename)
+ scheduler.step()
+
+
+def train(train_loader, model, criterion, optimizer, epoch, correlation_loss):
+ batch_time = AverageMeter()
+ data_time = AverageMeter()
+ loss_meter = AverageMeter()
+ main_loss_meter = AverageMeter()
+ corr_loss_meter = AverageMeter()
+ intersection_meter = AverageMeter()
+ union_meter = AverageMeter()
+ target_meter = AverageMeter()
+
+ model.train()
+ end = time.time()
+ max_iter = args.epochs * len(train_loader)
+ for i, (input, target) in enumerate(train_loader):
+ data_time.update(time.time() - end)
+ input = input.cuda(non_blocking=True)
+ target = target.cuda(non_blocking=True)
+ output = model(input)
+ if target.shape[-1] == 1:
+ target = target[:, 0] # for cls
+ main_loss = criterion(output, target)
+
+ corr_loss = 0.0
+ corr_loss_scale = args.get('correlation_loss_scale', 10.0)
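+        # The correlation loss below regularizes PAConv's weight bank: each of the
+        # m kernels is flattened to a row, a pairwise cosine-similarity matrix is
+        # built, and the squared upper-triangular (off-diagonal) entries are summed,
+        # pushing the kernel matrices toward mutual orthogonality (diversity).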
+ if correlation_loss:
+ for m in model.module.SA_modules.named_modules():
+ if isinstance(m[-1], PAConv):
+ kernel_matrice, output_dim, m_dim = m[-1].weightbank, m[-1].output_dim, m[-1].m
+ new_kernel_matrice = kernel_matrice.view(-1, m_dim, output_dim).permute(1, 0, 2).reshape(m_dim, -1)
+ cost_matrice = torch.matmul(new_kernel_matrice, new_kernel_matrice.T) / torch.matmul(
+ torch.sqrt(torch.sum(new_kernel_matrice ** 2, dim=-1, keepdim=True)),
+ torch.sqrt(torch.sum(new_kernel_matrice.T ** 2, dim=0, keepdim=True)))
+ corr_loss += torch.sum(torch.triu(cost_matrice, diagonal=1) ** 2)
+ loss = main_loss + corr_loss_scale * corr_loss
+ optimizer.zero_grad()
+ loss.backward()
+ optimizer.step()
+
+ output = output.max(1)[1]
+ intersection, union, target = intersectionAndUnionGPU(output, target, args.classes, args.ignore_label)
+ intersection, union, target = intersection.cpu().numpy(), union.cpu().numpy(), target.cpu().numpy()
+ intersection_meter.update(intersection), union_meter.update(union), target_meter.update(target)
+
+ accuracy = sum(intersection_meter.val) / (sum(target_meter.val) + 1e-10)
+ loss_meter.update(loss.item(), input.size(0))
+ main_loss_meter.update(main_loss.item(), input.size(0))
+ corr_loss_meter.update(corr_loss.item() * corr_loss_scale if correlation_loss else corr_loss, input.size(0))
+ batch_time.update(time.time() - end)
+ end = time.time()
+
+ # calculate remain time
+ current_iter = epoch * len(train_loader) + i + 1
+ remain_iter = max_iter - current_iter
+ remain_time = remain_iter * batch_time.avg
+ t_m, t_s = divmod(remain_time, 60)
+ t_h, t_m = divmod(t_m, 60)
+ remain_time = '{:02d}:{:02d}:{:02d}'.format(int(t_h), int(t_m), int(t_s))
+
+ if (i + 1) % args.print_freq == 0:
+ logger.info('Epoch: [{}/{}][{}/{}] '
+ 'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
+ 'Batch {batch_time.val:.3f} ({batch_time.avg:.3f}) '
+ 'Remain {remain_time} '
+ 'Loss {loss_meter.val:.4f} '
+ 'Main Loss {main_loss_meter.val:.4f} '
+ 'Corr Loss {corr_loss_meter.val:.4f} '
+ 'Accuracy {accuracy:.4f}.'.format(epoch+1, args.epochs, i + 1, len(train_loader),
+ batch_time=batch_time, data_time=data_time,
+ remain_time=remain_time,
+ loss_meter=loss_meter,
+ main_loss_meter=main_loss_meter,
+ corr_loss_meter=corr_loss_meter,
+ accuracy=accuracy))
+
+ writer.add_scalar('loss_train_batch', loss_meter.val, current_iter)
+ writer.add_scalar('mIoU_train_batch', np.mean(intersection / (union + 1e-10)), current_iter)
+ writer.add_scalar('mAcc_train_batch', np.mean(intersection / (target + 1e-10)), current_iter)
+ writer.add_scalar('allAcc_train_batch', accuracy, current_iter)
+
+ iou_class = intersection_meter.sum / (union_meter.sum + 1e-10)
+ accuracy_class = intersection_meter.sum / (target_meter.sum + 1e-10)
+ mIoU = np.mean(iou_class)
+ mAcc = np.mean(accuracy_class)
+ allAcc = sum(intersection_meter.sum) / (sum(target_meter.sum) + 1e-10)
+ logger.info('Train result at epoch [{}/{}]: mIoU/mAcc/allAcc {:.4f}/{:.4f}/{:.4f}.'.format(epoch+1, args.epochs, mIoU, mAcc, allAcc))
+ return loss_meter.avg, mIoU, mAcc, allAcc
+
+
+def validate(val_loader, model, criterion):
+ logger.info('>>>>>>>>>>>>>>>> Start Evaluation >>>>>>>>>>>>>>>>')
+ batch_time = AverageMeter()
+ data_time = AverageMeter()
+ loss_meter = AverageMeter()
+ intersection_meter = AverageMeter()
+ union_meter = AverageMeter()
+ target_meter = AverageMeter()
+
+ model.eval()
+ end = time.time()
+ for i, (input, target) in enumerate(val_loader):
+ data_time.update(time.time() - end)
+ input = input.cuda(non_blocking=True)
+ target = target.cuda(non_blocking=True)
+ if target.shape[-1] == 1:
+ target = target[:, 0] # for cls
+ output = model(input)
+ loss = criterion(output, target)
+
+ output = output.max(1)[1]
+ intersection, union, target = intersectionAndUnionGPU(output, target, args.classes, args.ignore_label)
+ intersection, union, target = intersection.cpu().numpy(), union.cpu().numpy(), target.cpu().numpy()
+ intersection_meter.update(intersection), union_meter.update(union), target_meter.update(target)
+
+ accuracy = sum(intersection_meter.val) / (sum(target_meter.val) + 1e-10)
+ loss_meter.update(loss.item(), input.size(0))
+ batch_time.update(time.time() - end)
+ end = time.time()
+ if (i + 1) % args.print_freq == 0:
+ logger.info('Test: [{}/{}] '
+ 'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
+ 'Batch {batch_time.val:.3f} ({batch_time.avg:.3f}) '
+ 'Loss {loss_meter.val:.4f} ({loss_meter.avg:.4f}) '
+ 'Accuracy {accuracy:.4f}.'.format(i + 1, len(val_loader),
+ data_time=data_time,
+ batch_time=batch_time,
+ loss_meter=loss_meter,
+ accuracy=accuracy))
+
+ iou_class = intersection_meter.sum / (union_meter.sum + 1e-10)
+ accuracy_class = intersection_meter.sum / (target_meter.sum + 1e-10)
+ mIoU = np.mean(iou_class)
+ mAcc = np.mean(accuracy_class)
+ allAcc = sum(intersection_meter.sum) / (sum(target_meter.sum) + 1e-10)
+
+ logger.info('Val result: mIoU/mAcc/allAcc {:.4f}/{:.4f}/{:.4f}.'.format(mIoU, mAcc, allAcc))
+ for i in range(args.classes):
+ logger.info('Class_{} Result: iou/accuracy {:.4f}/{:.4f}.'.format(i, iou_class[i], accuracy_class[i]))
+ logger.info('<<<<<<<<<<<<<<<<< End Evaluation <<<<<<<<<<<<<<<<<')
+ return loss_meter.avg, mIoU, mAcc, allAcc
+
+
+if __name__ == '__main__':
+ main()
diff --git a/zoo/PAConv/scene_seg/tool/train.sh b/zoo/PAConv/scene_seg/tool/train.sh
new file mode 100755
index 0000000..62e58a9
--- /dev/null
+++ b/zoo/PAConv/scene_seg/tool/train.sh
@@ -0,0 +1,21 @@
+#!/bin/sh
+export PYTHONPATH=./
+
+PYTHON=python
+dataset=$1
+exp_name=$2
+exp_dir=exp/${dataset}/${exp_name}
+model_dir=${exp_dir}/model
+config=config/${dataset}/${dataset}_${exp_name}.yaml
+
+mkdir -p ${model_dir}
+now=$(date +"%Y%m%d_%H%M%S")
+cp tool/train.sh tool/train.py ${config} ${exp_dir}
+
+$PYTHON tool/train.py --config=${config} 2>&1 | tee ${model_dir}/train-$now.log
+
+
+if [ "${dataset}" = "s3dis" ]
+then
+ $PYTHON tool/test_s3dis.py --config=${config} 2>&1 | tee ${model_dir}/test-$now.log
+fi
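+
+# Example invocation (assumes a matching YAML exists under config/):
+#   sh tool/train.sh s3dis pointnet2_paconv
+# reads config/s3dis/s3dis_pointnet2_paconv.yaml and tees the train/test logs
+# into exp/s3dis/pointnet2_paconv/model/.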
diff --git a/zoo/PAConv/scene_seg/util/block.py b/zoo/PAConv/scene_seg/util/block.py
new file mode 100644
index 0000000..1eacfe5
--- /dev/null
+++ b/zoo/PAConv/scene_seg/util/block.py
@@ -0,0 +1,789 @@
+import shutil, os, re
+import tqdm
+from itertools import repeat
+import numpy as np
+from typing import List, Tuple
+
+import torch
+import torch.nn as nn
+from torch.autograd.function import InplaceFunction
+
+BN1d, BN2d, BN3d = nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d
+
+
+class SharedMLP(nn.Sequential):
+
+ def __init__(
+ self,
+ args: List[int],
+ *,
+ bn: bool = False,
+ activation=nn.ReLU(inplace=True),
+ preact: bool = False,
+ first: bool = False,
+ name: str = ""
+ ):
+ super().__init__()
+
+ for i in range(len(args) - 1):
+ self.add_module(
+ name + 'layer{}'.format(i),
+ Conv2d(
+ args[i],
+ args[i + 1],
+ bn=(not first or not preact or (i != 0)) and bn,
+ activation=activation
+ if (not first or not preact or (i != 0)) else None,
+ preact=preact
+ )
+ )
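+
+# Hypothetical usage: a stack of 1x1 convolutions shared across points, e.g.
+#   mlp = SharedMLP([64, 128, 256], bn=True)  # two layers: 64->128 and 128->256
+#   y = mlp(torch.randn(8, 64, 1024, 16))     # (B, C, npoint, nsample) -> (8, 256, 1024, 16)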
+
+
+class _BNBase(nn.Sequential):
+
+ def __init__(self, in_size, batch_norm=None, name=""):
+ super().__init__()
+ self.add_module(name + "bn", batch_norm(in_size))
+
+ nn.init.constant_(self[0].weight, 1.0)
+ nn.init.constant_(self[0].bias, 0)
+
+
+class BatchNorm1d(_BNBase):
+
+ def __init__(self, in_size: int, *, name: str = ""):
+ super().__init__(in_size, batch_norm=BN1d, name=name)
+
+
+class BatchNorm2d(_BNBase):
+
+ def __init__(self, in_size: int, name: str = ""):
+ super().__init__(in_size, batch_norm=BN2d, name=name)
+
+
+class BatchNorm3d(_BNBase):
+
+ def __init__(self, in_size: int, name: str = ""):
+ super().__init__(in_size, batch_norm=BN3d, name=name)
+
+
+class _ConvBase(nn.Sequential):
+
+ def __init__(
+ self,
+ in_size,
+ out_size,
+ kernel_size,
+ stride,
+ padding,
+ activation,
+ bn,
+ init,
+ conv=None,
+ batch_norm=None,
+ bias=True,
+ preact=False,
+ name=""
+ ):
+ super().__init__()
+
+ bias = bias and (not bn)
+ conv_unit = conv(
+ in_size,
+ out_size,
+ kernel_size=kernel_size,
+ stride=stride,
+ padding=padding,
+ bias=bias
+ )
+ init(conv_unit.weight)
+ if bias:
+ nn.init.constant_(conv_unit.bias, 0)
+
+ if bn:
+ if not preact:
+ bn_unit = batch_norm(out_size)
+ else:
+ bn_unit = batch_norm(in_size)
+
+ if preact:
+ if bn:
+ self.add_module(name + 'bn', bn_unit)
+
+ if activation is not None:
+ self.add_module(name + 'activation', activation)
+
+ self.add_module(name + 'conv', conv_unit)
+
+ if not preact:
+ if bn:
+ self.add_module(name + 'bn', bn_unit)
+
+ if activation is not None:
+ self.add_module(name + 'activation', activation)
+
+
+class Conv1d(_ConvBase):
+
+ def __init__(
+ self,
+ in_size: int,
+ out_size: int,
+ *,
+ kernel_size: int = 1,
+ stride: int = 1,
+ padding: int = 0,
+ activation=nn.ReLU(inplace=True),
+ bn: bool = False,
+ init=nn.init.kaiming_normal_,
+ bias: bool = True,
+ preact: bool = False,
+ name: str = ""
+ ):
+ super().__init__(
+ in_size,
+ out_size,
+ kernel_size,
+ stride,
+ padding,
+ activation,
+ bn,
+ init,
+ conv=nn.Conv1d,
+ batch_norm=BatchNorm1d,
+ bias=bias,
+ preact=preact,
+ name=name
+ )
+
+
+class Conv2d(_ConvBase):
+
+ def __init__(
+ self,
+ in_size: int,
+ out_size: int,
+ *,
+ kernel_size: Tuple[int, int] = (1, 1),
+ stride: Tuple[int, int] = (1, 1),
+ padding: Tuple[int, int] = (0, 0),
+ activation=nn.ReLU(inplace=True),
+ bn: bool = False,
+ init=nn.init.kaiming_normal_,
+ bias: bool = True,
+ preact: bool = False,
+ name: str = ""
+ ):
+ super().__init__(
+ in_size,
+ out_size,
+ kernel_size,
+ stride,
+ padding,
+ activation,
+ bn,
+ init,
+ conv=nn.Conv2d,
+ batch_norm=BatchNorm2d,
+ bias=bias,
+ preact=preact,
+ name=name
+ )
+
+
+class Conv3d(_ConvBase):
+
+ def __init__(
+ self,
+ in_size: int,
+ out_size: int,
+ *,
+ kernel_size: Tuple[int, int, int] = (1, 1, 1),
+ stride: Tuple[int, int, int] = (1, 1, 1),
+ padding: Tuple[int, int, int] = (0, 0, 0),
+ activation=nn.ReLU(inplace=True),
+ bn: bool = False,
+ init=nn.init.kaiming_normal_,
+ bias: bool = True,
+ preact: bool = False,
+ name: str = ""
+ ):
+ super().__init__(
+ in_size,
+ out_size,
+ kernel_size,
+ stride,
+ padding,
+ activation,
+ bn,
+ init,
+ conv=nn.Conv3d,
+ batch_norm=BatchNorm3d,
+ bias=bias,
+ preact=preact,
+ name=name
+ )
+
+
+class FC(nn.Sequential):
+
+ def __init__(
+ self,
+ in_size: int,
+ out_size: int,
+ *,
+ activation=nn.ReLU(inplace=True),
+ bn: bool = False,
+ init=None,
+ preact: bool = False,
+ name: str = ""
+ ):
+ super().__init__()
+
+ fc = nn.Linear(in_size, out_size, bias=not bn)
+ if init is not None:
+ init(fc.weight)
+ if not bn:
+ nn.init.constant_(fc.bias, 0)
+
+ if preact:
+ if bn:
+ self.add_module(name + 'bn', BatchNorm1d(in_size))
+
+ if activation is not None:
+ self.add_module(name + 'activation', activation)
+
+ self.add_module(name + 'fc', fc)
+
+ if not preact:
+ if bn:
+ self.add_module(name + 'bn', BatchNorm1d(out_size))
+
+ if activation is not None:
+ self.add_module(name + 'activation', activation)
+
+
+class _DropoutNoScaling(InplaceFunction):
+
+ @staticmethod
+ def _make_noise(input):
+ return input.new().resize_as_(input)
+
+ @staticmethod
+ def symbolic(g, input, p=0.5, train=False, inplace=False):
+ if inplace:
+ return None
+ n = g.appendNode(
+ g.create("Dropout", [input]).f_("ratio",
+ p).i_("is_test", not train)
+ )
+ real = g.appendNode(g.createSelect(n, 0))
+ g.appendNode(g.createSelect(n, 1))
+ return real
+
+ @classmethod
+ def forward(cls, ctx, input, p=0.5, train=False, inplace=False):
+ if p < 0 or p > 1:
+ raise ValueError(
+ "dropout probability has to be between 0 and 1, "
+ "but got {}".format(p)
+ )
+ ctx.p = p
+ ctx.train = train
+ ctx.inplace = inplace
+
+ if ctx.inplace:
+ ctx.mark_dirty(input)
+ output = input
+ else:
+ output = input.clone()
+
+ if ctx.p > 0 and ctx.train:
+ ctx.noise = cls._make_noise(input)
+ if ctx.p == 1:
+ ctx.noise.fill_(0)
+ else:
+ ctx.noise.bernoulli_(1 - ctx.p)
+ ctx.noise = ctx.noise.expand_as(input)
+ output.mul_(ctx.noise)
+
+ return output
+
+ @staticmethod
+ def backward(ctx, grad_output):
+ if ctx.p > 0 and ctx.train:
+ return grad_output.mul(ctx.noise), None, None, None
+ else:
+ return grad_output, None, None, None
+
+
+dropout_no_scaling = _DropoutNoScaling.apply
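+# Note: unlike F.dropout, surviving activations are not rescaled by 1 / (1 - p),
+# so the expected activation magnitude shrinks by a factor of (1 - p) at train time.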
+
+
+class _FeatureDropoutNoScaling(_DropoutNoScaling):
+
+ @staticmethod
+ def symbolic(input, p=0.5, train=False, inplace=False):
+ return None
+
+ @staticmethod
+ def _make_noise(input):
+ return input.new().resize_(
+ input.size(0), input.size(1), *repeat(1,
+ input.dim() - 2)
+ )
+
+
+feature_dropout_no_scaling = _FeatureDropoutNoScaling.apply
+
+
+def group_model_params(model: nn.Module, **kwargs):
+ decay_group = []
+ no_decay_group = []
+
+ for name, param in model.named_parameters():
+ if name.find("bn") != -1 or name.find("bias") != -1:
+ no_decay_group.append(param)
+ else:
+ decay_group.append(param)
+
+ assert len(list(model.parameters())) == len(decay_group) + len(no_decay_group)
+
+ return [
+ dict(params=decay_group, **kwargs),
+ dict(params=no_decay_group, weight_decay=0.0, **kwargs)
+ ]
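+
+# Hypothetical usage: keep weight decay off BN and bias parameters, e.g.
+#   optimizer = torch.optim.SGD(group_model_params(model), lr=0.05,
+#                               momentum=0.9, weight_decay=1e-4)
+# The decay group inherits weight_decay=1e-4; the BN/bias group overrides it with 0.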
+
+
+def checkpoint_state(
+ model=None, optimizer=None, best_prec=None, epoch=None, it=None
+):
+ optim_state = optimizer.state_dict() if optimizer is not None else None
+ if model is not None:
+ if isinstance(model, torch.nn.DataParallel):
+ model_state = model.module.state_dict()
+ else:
+ model_state = model.state_dict()
+ else:
+ model_state = None
+
+ return {
+ 'epoch': epoch,
+ 'it': it,
+ 'best_prec': best_prec,
+ 'model_state': model_state,
+ 'optimizer_state': optim_state
+ }
+
+
+def save_checkpoint(
+ state, is_best, filename='checkpoint', bestname='model_best'
+):
+ filename = '{}.pth.tar'.format(filename)
+ torch.save(state, filename)
+ if is_best:
+ shutil.copyfile(filename, '{}.pth.tar'.format(bestname))
+
+
+def load_checkpoint(model=None, optimizer=None, filename='checkpoint'):
+ filename = "{}.pth.tar".format(filename)
+ if os.path.isfile(filename):
+ print("==> Loading from checkpoint '{}'".format(filename))
+ checkpoint = torch.load(filename)
+ epoch = checkpoint['epoch']
+ it = checkpoint.get('it', 0.0)
+ best_prec = checkpoint['best_prec']
+ if model is not None and checkpoint['model_state'] is not None:
+ model.load_state_dict(checkpoint['model_state'])
+ if optimizer is not None and checkpoint['optimizer_state'] is not None:
+ optimizer.load_state_dict(checkpoint['optimizer_state'])
+ print("==> Done")
+ else:
+ print("==> Checkpoint '{}' not found".format(filename))
+
+ return it, epoch, best_prec
+
+
+def variable_size_collate(pad_val=0, use_shared_memory=True):
+    import collections.abc
+ _numpy_type_map = {
+ 'float64': torch.DoubleTensor,
+ 'float32': torch.FloatTensor,
+ 'float16': torch.HalfTensor,
+ 'int64': torch.LongTensor,
+ 'int32': torch.IntTensor,
+ 'int16': torch.ShortTensor,
+ 'int8': torch.CharTensor,
+ 'uint8': torch.ByteTensor,
+ }
+
+ def wrapped(batch):
+ "Puts each data field into a tensor with outer dimension batch size"
+
+ error_msg = "batch must contain tensors, numbers, dicts or lists; found {}"
+ elem_type = type(batch[0])
+ if torch.is_tensor(batch[0]):
+ max_len = 0
+ for b in batch:
+ max_len = max(max_len, b.size(0))
+
+ numel = sum([int(b.numel() / b.size(0) * max_len) for b in batch])
+ if use_shared_memory:
+ # If we're in a background process, concatenate directly into a
+ # shared memory tensor to avoid an extra copy
+ storage = batch[0].storage()._new_shared(numel)
+ out = batch[0].new(storage)
+ else:
+ out = batch[0].new(numel)
+
+ out = out.view(
+ len(batch), max_len,
+ *[batch[0].size(i) for i in range(1, batch[0].dim())]
+ )
+ out.fill_(pad_val)
+ for i in range(len(batch)):
+ out[i, 0:batch[i].size(0)] = batch[i]
+
+ return out
+ elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
+ and elem_type.__name__ != 'string_':
+ elem = batch[0]
+ if elem_type.__name__ == 'ndarray':
+ # array of string classes and object
+ if re.search('[SaUO]', elem.dtype.str) is not None:
+ raise TypeError(error_msg.format(elem.dtype))
+
+ return wrapped([torch.from_numpy(b) for b in batch])
+ if elem.shape == (): # scalars
+ py_type = float if elem.dtype.name.startswith('float') else int
+ return _numpy_type_map[elem.dtype.name](
+ list(map(py_type, batch))
+ )
+ elif isinstance(batch[0], int):
+ return torch.LongTensor(batch)
+ elif isinstance(batch[0], float):
+ return torch.DoubleTensor(batch)
+        elif isinstance(batch[0], collections.abc.Mapping):
+            return {key: wrapped([d[key] for d in batch]) for key in batch[0]}
+        elif isinstance(batch[0], collections.abc.Sequence):
+ transposed = zip(*batch)
+ return [wrapped(samples) for samples in transposed]
+
+ raise TypeError((error_msg.format(type(batch[0]))))
+
+ return wrapped
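+
+# Hypothetical usage: batch point clouds with differing point counts by padding
+# each sample to the longest one in the batch, e.g.
+#   loader = torch.utils.data.DataLoader(dataset, batch_size=8,
+#                                        collate_fn=variable_size_collate(pad_val=0))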
+
+
+class TrainValSplitter():
+ r"""
+ Creates a training and validation split to be used as the sampler in a pytorch DataLoader
+ Parameters
+    ----------
+    numel : int
+        Number of elements in the entire training dataset
+    percent_train : float
+        Percentage of data in the training split
+    shuffled : bool
+        Whether or not to shuffle which data goes to which split
+ """
+
+ def __init__(
+ self, *, numel: int, percent_train: float, shuffled: bool = False
+ ):
+        indices = np.arange(numel)
+        if shuffled:
+            np.random.shuffle(indices)
+
+        self.train = torch.utils.data.sampler.SubsetRandomSampler(
+            indices[0:int(percent_train * numel)]
+        )
+        self.val = torch.utils.data.sampler.SubsetRandomSampler(
+            indices[int(percent_train * numel):]
+        )
+
+
+'''
+class CrossValSplitter():
+ r"""
+ Class that creates cross validation splits. The train and val splits can be used in pytorch DataLoaders. The splits can be updated
+ by calling next(self) or using a loop:
+ for _ in self:
+ ....
+ Parameters
+ ---------
+ numel : int
+ Number of elements in the training set
+ k_folds : int
+ Number of folds
+ shuffled : bool
+ Whether or not to shuffle which data goes in which fold
+ """
+
+ def __init__(self, *, numel: int, k_folds: int, shuffled: bool = False):
+        indices = np.array([i for i in range(numel)])
+        if shuffled:
+            np.random.shuffle(indices)
+
+        self.folds = np.array(np.array_split(indices, k_folds), dtype=object)
+ self.current_v_ind = -1
+
+ self.val = torch.utils.data.sampler.SubsetRandomSampler(self.folds[0])
+ self.train = torch.utils.data.sampler.SubsetRandomSampler(
+ np.concatenate(self.folds[1:], axis=0)
+ )
+
+ self.metrics = {}
+
+ def __iter__(self):
+ self.current_v_ind = -1
+ return self
+
+ def __len__(self):
+ return len(self.folds)
+
+ def __getitem__(self, idx):
+ assert idx >= 0 and idx < len(self)
+        self.val.indices = self.folds[idx]
+        self.train.indices = np.concatenate(
+ self.folds[np.arange(len(self)) != idx], axis=0
+ )
+
+ def __next__(self):
+ self.current_v_ind += 1
+ if self.current_v_ind >= len(self):
+ raise StopIteration
+
+ self[self.current_v_ind]
+
+ def update_metrics(self, to_post: dict):
+ for k, v in to_post.items():
+ if k in self.metrics:
+ self.metrics[k].append(v)
+ else:
+ self.metrics[k] = [v]
+
+ def print_metrics(self):
+ for name, samples in self.metrics.items():
+ xbar = stats.mean(samples)
+ sx = stats.stdev(samples, xbar)
+ tstar = student_t.ppf(1.0 - 0.025, len(samples) - 1)
+ margin_of_error = tstar * sx / sqrt(len(samples))
+ print("{}: {} +/- {}".format(name, xbar, margin_of_error))
+'''
+
+
+def set_bn_momentum_default(bn_momentum):
+
+ def fn(m):
+ if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
+ m.momentum = bn_momentum
+
+ return fn
+
+
+class BNMomentumScheduler(object):
+
+ def __init__(
+ self, model, bn_lambda, last_epoch=-1,
+ setter=set_bn_momentum_default
+ ):
+ if not isinstance(model, nn.Module):
+ raise RuntimeError(
+ "Class '{}' is not a PyTorch nn Module".format(
+ type(model).__name__
+ )
+ )
+
+ self.model = model
+ self.setter = setter
+ self.lmbd = bn_lambda
+
+ self.step(last_epoch + 1)
+ self.last_epoch = last_epoch
+
+ def step(self, epoch=None):
+ if epoch is None:
+ epoch = self.last_epoch + 1
+
+ self.last_epoch = epoch
+ self.model.apply(self.setter(self.lmbd(epoch)))
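+
+# Hypothetical usage: decay the BN momentum alongside the LR schedule, e.g.
+#   bnm_scheduler = BNMomentumScheduler(
+#       model, bn_lambda=lambda e: max(0.5 * 0.5 ** (e // 20), 0.01))
+#   bnm_scheduler.step(epoch)  # call once before each epoch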
+
+
+class Trainer(object):
+ r"""
+ Reasonably generic trainer for pytorch models
+
+ Parameters
+ ----------
+ model : pytorch model
+ Model to be trained
+ model_fn : function (model, inputs, labels) -> preds, loss, accuracy
+ optimizer : torch.optim
+ Optimizer for model
+ checkpoint_name : str
+ Name of file to save checkpoints to
+ best_name : str
+ Name of file to save best model to
+ lr_scheduler : torch.optim.lr_scheduler
+ Learning rate scheduler. .step() will be called at the start of every epoch
+ bnm_scheduler : BNMomentumScheduler
+ Batchnorm momentum scheduler. .step() will be called at the start of every epoch
+ eval_frequency : int
+ How often to run an eval
+ log_name : str
+ Name of file to output tensorboard_logger to
+ """
+
+ def __init__(
+ self,
+ model,
+ model_fn,
+ optimizer,
+ checkpoint_name="ckpt",
+ best_name="best",
+ lr_scheduler=None,
+ bnm_scheduler=None,
+ eval_frequency=-1,
+ viz=None
+ ):
+ self.model, self.model_fn, self.optimizer, self.lr_scheduler, self.bnm_scheduler = (
+ model, model_fn, optimizer, lr_scheduler, bnm_scheduler
+ )
+
+ self.checkpoint_name, self.best_name = checkpoint_name, best_name
+ self.eval_frequency = eval_frequency
+
+ self.training_best, self.eval_best = {}, {}
+ self.viz = viz
+
+ @staticmethod
+ def _decode_value(v):
+ if isinstance(v[0], float):
+ return np.mean(v)
+ elif isinstance(v[0], tuple):
+ if len(v[0]) == 3:
+ num = [l[0] for l in v]
+ denom = [l[1] for l in v]
+ w = v[0][2]
+ else:
+ num = [l[0] for l in v]
+ denom = [l[1] for l in v]
+ w = None
+
+ return np.average(
+ np.sum(num, axis=0) / (np.sum(denom, axis=0) + 1e-6), weights=w
+ )
+ else:
+ raise AssertionError("Unknown type: {}".format(type(v)))
+
+ def _train_it(self, it, batch):
+ self.model.train()
+
+ if self.lr_scheduler is not None:
+ self.lr_scheduler.step(it)
+
+ if self.bnm_scheduler is not None:
+ self.bnm_scheduler.step(it)
+
+ self.optimizer.zero_grad()
+ _, loss, eval_res = self.model_fn(self.model, batch)
+
+ loss.backward()
+ self.optimizer.step()
+
+ return eval_res
+
+ def eval_epoch(self, d_loader):
+ self.model.eval()
+
+ eval_dict = {}
+ total_loss = 0.0
+ count = 1.0
+ for i, data in tqdm.tqdm(enumerate(d_loader, 0), total=len(d_loader),
+ leave=False, desc='val'):
+ self.optimizer.zero_grad()
+
+ _, loss, eval_res = self.model_fn(self.model, data, eval=True)
+
+ total_loss += loss.item()
+ count += 1
+ for k, v in eval_res.items():
+ if v is not None:
+ eval_dict[k] = eval_dict.get(k, []) + [v]
+
+ return total_loss / count, eval_dict
+
+ def train(
+ self,
+ start_it,
+ start_epoch,
+ n_epochs,
+ train_loader,
+ test_loader=None,
+ best_loss=0.0
+ ):
+ r"""
+ Call to begin training the model
+
+ Parameters
+ ----------
+ start_epoch : int
+ Epoch to start at
+ n_epochs : int
+ Number of epochs to train for
+ test_loader : torch.utils.data.DataLoader
+ DataLoader of the test_data
+ train_loader : torch.utils.data.DataLoader
+ DataLoader of training data
+ best_loss : float
+ Testing loss of the best model
+ """
+
+ eval_frequency = (
+ self.eval_frequency
+ if self.eval_frequency > 0 else len(train_loader)
+ )
+
+ it = start_it
+ with tqdm.trange(start_epoch, n_epochs + 1, desc='epochs') as tbar, \
+ tqdm.tqdm(total=eval_frequency, leave=False, desc='train') as pbar:
+
+ for epoch in tbar:
+ for batch in train_loader:
+ res = self._train_it(it, batch)
+ it += 1
+
+ pbar.update()
+ pbar.set_postfix(dict(total_it=it))
+ tbar.refresh()
+
+ if self.viz is not None:
+ self.viz.update('train', it, res)
+
+ if (it % eval_frequency) == 0:
+ pbar.close()
+
+ if test_loader is not None:
+ val_loss, res = self.eval_epoch(test_loader)
+
+ if self.viz is not None:
+ self.viz.update('val', it, res)
+
+ is_best = val_loss < best_loss
+ best_loss = min(best_loss, val_loss)
+ save_checkpoint(
+ checkpoint_state(
+ self.model, self.optimizer, val_loss, epoch,
+ it
+ ),
+ is_best,
+ filename=self.checkpoint_name,
+ bestname=self.best_name
+ )
+
+ pbar = tqdm.tqdm(
+ total=eval_frequency, leave=False, desc='train'
+ )
+ pbar.set_postfix(dict(total_it=it))
+
+ return best_loss
diff --git a/zoo/PAConv/scene_seg/util/config.py b/zoo/PAConv/scene_seg/util/config.py
new file mode 100755
index 0000000..1026fb2
--- /dev/null
+++ b/zoo/PAConv/scene_seg/util/config.py
@@ -0,0 +1,165 @@
+# -----------------------------------------------------------------------------
+# Functions for parsing args
+# -----------------------------------------------------------------------------
+import yaml
+import os
+from ast import literal_eval
+import copy
+import logging
+
+
+class CfgNode(dict):
+ """
+ CfgNode represents an internal node in the configuration tree. It's a simple
+ dict-like container that allows for attribute-based access to keys.
+ """
+
+ def __init__(self, init_dict=None, key_list=None, new_allowed=False):
+ # Recursively convert nested dictionaries in init_dict into CfgNodes
+ init_dict = {} if init_dict is None else init_dict
+ key_list = [] if key_list is None else key_list
+ for k, v in init_dict.items():
+ if type(v) is dict:
+ # Convert dict to CfgNode
+ init_dict[k] = CfgNode(v, key_list=key_list + [k])
+ super(CfgNode, self).__init__(init_dict)
+
+ def __getattr__(self, name):
+ if name in self:
+ return self[name]
+ else:
+ raise AttributeError(name)
+
+ def __setattr__(self, name, value):
+ self[name] = value
+
+ def __str__(self):
+ def _indent(s_, num_spaces):
+ s = s_.split("\n")
+ if len(s) == 1:
+ return s_
+ first = s.pop(0)
+ s = [(num_spaces * " ") + line for line in s]
+ s = "\n".join(s)
+ s = first + "\n" + s
+ return s
+
+ r = ""
+ s = []
+ for k, v in sorted(self.items()):
+            separator = "\n" if isinstance(v, CfgNode) else " "
+            attr_str = "{}:{}{}".format(str(k), separator, str(v))
+ attr_str = _indent(attr_str, 2)
+ s.append(attr_str)
+ r += "\n".join(s)
+ return r
+
+ def __repr__(self):
+ return "{}({})".format(self.__class__.__name__, super(CfgNode, self).__repr__())
+
+
+def load_cfg_from_cfg_file(file):
+ cfg = {}
+ assert os.path.isfile(file) and file.endswith('.yaml'), \
+ '{} is not a yaml file'.format(file)
+
+ with open(file, 'r') as f:
+ cfg_from_file = yaml.safe_load(f)
+
+ for key in cfg_from_file:
+ for k, v in cfg_from_file[key].items():
+ cfg[k] = v
+
+ cfg = CfgNode(cfg)
+ return cfg
+
+
+def merge_cfg_from_list(cfg, cfg_list):
+ new_cfg = copy.deepcopy(cfg)
+ assert len(cfg_list) % 2 == 0
+ for full_key, v in zip(cfg_list[0::2], cfg_list[1::2]):
+ subkey = full_key.split('.')[-1]
+ assert subkey in cfg, 'Non-existent key: {}'.format(full_key)
+ value = _decode_cfg_value(v)
+ value = _check_and_coerce_cfg_value_type(
+ value, cfg[subkey], subkey, full_key
+ )
+ setattr(new_cfg, subkey, value)
+
+ return new_cfg
+
+
+def _decode_cfg_value(v):
+ """Decodes a raw config value (e.g., from a yaml config files or command
+ line argument) into a Python object.
+ """
+ # All remaining processing is only applied to strings
+ if not isinstance(v, str):
+ return v
+ # Try to interpret `v` as a:
+ # string, number, tuple, list, dict, boolean, or None
+ try:
+ v = literal_eval(v)
+ # The following two excepts allow v to pass through when it represents a
+ # string.
+ #
+ # Longer explanation:
+ # The type of v is always a string (before calling literal_eval), but
+ # sometimes it *represents* a string and other times a data structure, like
+ # a list. In the case that v represents a string, what we got back from the
+ # yaml parser is 'foo' *without quotes* (so, not '"foo"'). literal_eval is
+ # ok with '"foo"', but will raise a ValueError if given 'foo'. In other
+ # cases, like paths (v = 'foo/bar' and not v = '"foo/bar"'), literal_eval
+ # will raise a SyntaxError.
+ except ValueError:
+ pass
+ except SyntaxError:
+ pass
+ return v
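+
+# Examples of the decoding above (values arrive as strings from the CLI):
+#   _decode_cfg_value('0.1')     -> 0.1 (float)
+#   _decode_cfg_value('[1, 2]')  -> [1, 2] (list)
+#   _decode_cfg_value('foo/bar') -> 'foo/bar' (the exception is swallowed; stays a string)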
+
+
+def _check_and_coerce_cfg_value_type(replacement, original, key, full_key):
+ """Checks that `replacement`, which is intended to replace `original` is of
+ the right type. The type is correct if it matches exactly or is one of a few
+ cases in which the type can be easily coerced.
+ """
+ original_type = type(original)
+ replacement_type = type(replacement)
+
+ # The types must match (with some exceptions)
+ if replacement_type == original_type:
+ return replacement
+
+ # Cast replacement from from_type to to_type if the replacement and original
+ # types match from_type and to_type
+ def conditional_cast(from_type, to_type):
+ if replacement_type == from_type and original_type == to_type:
+ return True, to_type(replacement)
+ else:
+ return False, None
+
+ # Conditionally casts
+ # list <-> tuple
+ casts = [(tuple, list), (list, tuple)]
+ # For py2: allow converting from str (bytes) to a unicode string
+ try:
+ casts.append((str, unicode)) # noqa: F821
+ except Exception:
+ pass
+
+ for (from_type, to_type) in casts:
+ converted, converted_value = conditional_cast(from_type, to_type)
+ if converted:
+ return converted_value
+
+ raise ValueError(
+ "Type mismatch ({} vs. {}) with values ({} vs. {}) for config "
+ "key: {}".format(
+ original_type, replacement_type, original, replacement, full_key
+ )
+ )
+
+
+def _assert_with_logging(cond, msg):
+ if not cond:
+        logging.getLogger(__name__).debug(msg)
+ assert cond, msg
diff --git a/zoo/PAConv/scene_seg/util/dataset.py b/zoo/PAConv/scene_seg/util/dataset.py
new file mode 100755
index 0000000..e8086e5
--- /dev/null
+++ b/zoo/PAConv/scene_seg/util/dataset.py
@@ -0,0 +1,74 @@
+import os
+import h5py
+import numpy as np
+import torch
+
+from torch.utils.data import Dataset
+
+
+def make_dataset(split='train', data_root=None, data_list=None):
+ if not os.path.isfile(data_list):
+        raise RuntimeError("Point list file does not exist: " + data_list + "\n")
+ point_list = []
+ list_read = open(data_list).readlines()
+ print("Totally {} samples in {} set.".format(len(list_read), split))
+ for line in list_read:
+ point_list.append(os.path.join(data_root, line.strip()))
+ return point_list
+
+
+class PointData(Dataset):
+ def __init__(self, split='train', data_root=None, data_list=None, transform=None,
+ num_point=None, random_index=False, norm_as_feat=True, fea_dim=6):
+ assert split in ['train', 'val', 'test']
+ self.split = split
+ self.data_list = make_dataset(split, data_root, data_list)
+ self.transform = transform
+ self.num_point = num_point
+ self.random_index = random_index
+ self.norm_as_feat = norm_as_feat
+ self.fea_dim = fea_dim
+
+ def __len__(self):
+ return len(self.data_list)
+
+ def __getitem__(self, index):
+ data_path = self.data_list[index]
+ f = h5py.File(data_path, 'r')
+ data = f['data'][:]
+        if self.split == 'test':
+            label = np.full(1, 255)  # placeholder label for the test split
+ else:
+ label = f['label'][:]
+ f.close()
+ if self.num_point is None:
+ self.num_point = data.shape[0]
+ idxs = np.arange(data.shape[0])
+ if self.random_index:
+ np.random.shuffle(idxs)
+ idxs = idxs[0: self.num_point]
+ data = data[idxs, :]
+ if label.size != 1: # seg data
+ label = label[idxs]
+ if self.transform is not None:
+ data, label = self.transform(data, label)
+
+ if self.fea_dim == 3:
+ points = data[:, :6]
+ elif self.fea_dim == 4:
+ points = np.concatenate((data[:, :6], data[:, 2:3]), axis=-1)
+        elif self.fea_dim == 5:
+            points = np.concatenate((data[:, :6], data[:, 2:3], np.ones((self.num_point, 1), dtype=data.dtype)), axis=-1)
+        elif self.fea_dim == 6:
+            points = data
+        else:
+            raise ValueError('Feature dim {} not supported.'.format(self.fea_dim))
+
+ return points, label
+
+
+if __name__ == '__main__':
+ data_root = '/mnt/sda1/hszhao/dataset/3d/s3dis'
+ data_list = '/mnt/sda1/hszhao/dataset/3d/s3dis/list/train12346.txt'
+ point_data = PointData('train', data_root, data_list)
+ print('point data size:', point_data.__len__())
+ print('point data 0 shape:', point_data.__getitem__(0)[0].shape)
+ print('point label 0 shape:', point_data.__getitem__(0)[1].shape)
diff --git a/zoo/PAConv/scene_seg/util/paconv_util.py b/zoo/PAConv/scene_seg/util/paconv_util.py
new file mode 100755
index 0000000..e05d81b
--- /dev/null
+++ b/zoo/PAConv/scene_seg/util/paconv_util.py
@@ -0,0 +1,73 @@
+import torch
+
+
+def weight_init(m):
+ # print(m)
+ if isinstance(m, torch.nn.Linear):
+ torch.nn.init.xavier_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv2d):
+ torch.nn.init.xavier_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.Conv1d):
+ torch.nn.init.xavier_normal_(m.weight)
+ if m.bias is not None:
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm2d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
+ elif isinstance(m, torch.nn.BatchNorm1d):
+ torch.nn.init.constant_(m.weight, 1)
+ torch.nn.init.constant_(m.bias, 0)
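+
+# Applied recursively via nn.Module.apply, e.g.:  model.apply(weight_init)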
+
+
+def get_graph_feature(x, k, idx):
+ batch_size = x.size(0)
+ num_points = x.size(2)
+ x = x.view(batch_size, -1, num_points)
+
+ idx_base = torch.arange(0, batch_size, device=x.device).view(-1, 1, 1) * num_points
+
+ idx = idx + idx_base
+
+ idx = idx.view(-1)
+
+ _, num_dims, _ = x.size()
+
+ x = x.transpose(2, 1).contiguous()
+
+ neighbor = x.view(batch_size * num_points, -1)[idx, :]
+
+ neighbor = neighbor.view(batch_size, num_points, k, num_dims)
+
+ x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)
+
+ feature = torch.cat((neighbor - x, neighbor), dim=3) # (xj-xi, xj): b,n,k,2c
+
+ return feature
+
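+# Shape sketch (assumed inputs): x is (B, C, N) point features and idx is (B, N, k)
+# precomputed kNN indices; the returned feature is (B, N, k, 2C), stacking the edge
+# offsets (x_j - x_i) with the neighbor features x_j, as in DGCNN-style graphs.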
+
+def assign_score(score, point_input):
+ B, N, K, m = score.size()
+ score = score.view(B, N, K, 1, m)
+ point_output = torch.matmul(score, point_input).view(B, N, K, -1) # b,n,k,cout
+ return point_output
+
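+# assign_score combines per-point kernel scores with weight-bank outputs:
+# score (B, N, K, m) is viewed as (B, N, K, 1, m) and matmul'd against
+# point_input (B, N, K, m, Cout), yielding a (B, N, K, Cout) weighted sum.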
+
+def get_ed(x, y):
+ ed = torch.norm(x - y, dim=-1).reshape(x.shape[0], 1)
+ return ed
+
+
+def assign_kernel_withoutk(in_feat, kernel, M):
+ B, Cin, N0 = in_feat.size()
+ in_feat_trans = in_feat.permute(0, 2, 1)
+ out_feat_half1 = torch.matmul(in_feat_trans, kernel[:Cin]).view(B, N0, M, -1) # b,n,m,o1
+ out_feat_half2 = torch.matmul(in_feat_trans, kernel[Cin:]).view(B, N0, M, -1) # b,n,m,o1
+ if in_feat.size(1) % 2 != 0:
+ out_feat_half_coord = torch.matmul(in_feat_trans[:, :, :3], kernel[Cin: Cin + 3]).view(B, N0, M, -1) # b,n,m,o1
+ else:
+ out_feat_half_coord = torch.zeros_like(out_feat_half2)
+ return out_feat_half1 + out_feat_half2, out_feat_half1 + out_feat_half_coord
\ No newline at end of file
diff --git a/zoo/PAConv/scene_seg/util/s3dis.py b/zoo/PAConv/scene_seg/util/s3dis.py
new file mode 100755
index 0000000..a5b45ce
--- /dev/null
+++ b/zoo/PAConv/scene_seg/util/s3dis.py
@@ -0,0 +1,131 @@
+import os
+import numpy as np
+
+import torch
+from torch.utils.data import Dataset
+
+class S3DIS(Dataset):
+ def __init__(self, split='train', data_root='trainval_fullarea', num_point=4096, test_area=5,
+ block_size=1.0, sample_rate=1.0, transform=None, fea_dim=6, shuffle_idx=False):
+
+ super().__init__()
+ self.num_point = num_point
+ self.block_size = block_size
+ self.transform = transform
+ self.fea_dim = fea_dim
+ self.shuffle_idx = shuffle_idx
+ rooms = sorted(os.listdir(data_root))
+ rooms = [room for room in rooms if 'Area_' in room]
+ if split == 'train':
+ rooms_split = [room for room in rooms if not 'Area_{}'.format(test_area) in room]
+ else:
+ rooms_split = [room for room in rooms if 'Area_{}'.format(test_area) in room]
+ self.room_points, self.room_labels = [], []
+ self.room_coord_min, self.room_coord_max = [], []
+ num_point_all = []
+ for room_name in rooms_split:
+ room_path = os.path.join(data_root, room_name)
+ room_data = np.load(room_path) # xyzrgbl, N*7
+ points, labels = room_data[:, 0:6], room_data[:, 6] # xyzrgb, N*6; l, N
+ coord_min, coord_max = np.amin(points, axis=0)[:3], np.amax(points, axis=0)[:3]
+ self.room_points.append(points), self.room_labels.append(labels)
+ self.room_coord_min.append(coord_min), self.room_coord_max.append(coord_max)
+ num_point_all.append(labels.size)
+ sample_prob = num_point_all / np.sum(num_point_all)
+ num_iter = int(np.sum(num_point_all) * sample_rate / num_point)
+ room_idxs = []
+ for index in range(len(rooms_split)):
+ room_idxs.extend([index] * int(round(sample_prob[index] * num_iter)))
+ self.room_idxs = np.array(room_idxs)
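+        # Rooms are oversampled in proportion to their point counts, so one pass
+        # over room_idxs visits roughly sample_rate * total_points points in
+        # blocks of num_point each.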
+ print("Totally {} samples in {} set.".format(len(self.room_idxs), split))
+
+ def __getitem__(self, idx):
+ room_idx = self.room_idxs[idx]
+ points = self.room_points[room_idx] # N * 6
+ labels = self.room_labels[room_idx] # N
+ N_points = points.shape[0]
+
+        while True:
+            # resample until the 1m x 1m block around the chosen center covers at least num_point / 4 points
+ center = points[np.random.choice(N_points)][:3]
+ block_min = center - [self.block_size / 2.0, self.block_size / 2.0, 0]
+ block_max = center + [self.block_size / 2.0, self.block_size / 2.0, 0]
+ point_idxs = np.where((points[:, 0] >= block_min[0]) & (points[:, 0] <= block_max[0]) & (points[:, 1] >= block_min[1]) & (points[:, 1] <= block_max[1]))[0]
+ if point_idxs.size > self.num_point / 4:
+ break
+
+ if point_idxs.size >= self.num_point:
+ selected_point_idxs = np.random.choice(point_idxs, self.num_point, replace=False)
+ else:
+ # do not use random choice here to avoid some pts not counted
+ dup = np.random.choice(point_idxs.size, self.num_point - point_idxs.size)
+ idx_dup = np.concatenate([np.arange(point_idxs.size), np.array(dup)], 0)
+ selected_point_idxs = point_idxs[idx_dup]
+
+ selected_points = points[selected_point_idxs, :] # num_point * 6
+ # centered points
+ centered_points = np.zeros((self.num_point, 3))
+ centered_points[:, :2] = selected_points[:, :2] - center[:2]
+ centered_points[:, 2] = selected_points[:, 2]
+ # normalized colors
+ normalized_colors = selected_points[:, 3:6] / 255.0
+ # normalized points
+ normalized_points = selected_points[:, :3] / self.room_coord_max[room_idx]
+
+ # transformation for centered points and normalized colors
+ if self.transform is not None:
+ centered_points, normalized_colors = self.transform(centered_points, normalized_colors)
+
+ # current points and current labels
+ if self.fea_dim == 3:
+ current_points = np.concatenate((centered_points, normalized_points), axis=-1)
+ elif self.fea_dim == 6:
+ current_points = np.concatenate((centered_points, normalized_colors, normalized_points), axis=-1)
+ else:
+ raise ValueError('Feature dim {} not supported.'.format(self.fea_dim))
+ current_labels = labels[selected_point_idxs]
+
+ if self.shuffle_idx:
+ shuffle_idx = np.random.permutation(np.arange(current_points.shape[0]))
+ current_points, current_labels = current_points[shuffle_idx], current_labels[shuffle_idx]
+
+ # to Tensor
+ current_points = torch.FloatTensor(current_points)
+ current_labels = torch.LongTensor(current_labels)
+
+ return current_points, current_labels
+
+ def __len__(self):
+ return len(self.room_idxs)
+
+
+if __name__ == '__main__':
+ import transform
+ data_root = 'dataset/s3dis/trainval_fullarea'
+ num_point, test_area, block_size, sample_rate = 4096, 5, 1.0, 0.01
+
+ train_transform = transform.Compose([transform.RandomRotate(along_z=True),
+ transform.RandomScale(scale_low=0.8,
+ scale_high=1.2),
+ transform.RandomJitter(sigma=0.01,
+ clip=0.05),
+ transform.RandomDropColor(p=0.8, color_augment=0.0)])
+ point_data = S3DIS(split='train', data_root=data_root, num_point=num_point, test_area=test_area, block_size=block_size, sample_rate=sample_rate, transform=train_transform)
+ print('point data size:', point_data.__len__())
+
+ print('point data 0 shape:', point_data.__getitem__(0)[0].shape)
+ print('point label 0 shape:', point_data.__getitem__(0)[1].shape)
+ import torch, time, random
+ manual_seed = 123
+ random.seed(manual_seed)
+ np.random.seed(manual_seed)
+ torch.manual_seed(manual_seed)
+ torch.cuda.manual_seed_all(manual_seed)
+ def worker_init_fn(worker_id):
+ random.seed(manual_seed + worker_id)
+ train_loader = torch.utils.data.DataLoader(point_data, batch_size=16, shuffle=True, num_workers=0, pin_memory=True, worker_init_fn=worker_init_fn)
+ for idx in range(4):
+ end = time.time()
+ for i, (input, target) in enumerate(train_loader):
+ print('time: {}/{}--{}'.format(i+1, len(train_loader), time.time() - end))
+ end = time.time()
diff --git a/zoo/PAConv/scene_seg/util/transform.py b/zoo/PAConv/scene_seg/util/transform.py
new file mode 100755
index 0000000..10115a1
--- /dev/null
+++ b/zoo/PAConv/scene_seg/util/transform.py
@@ -0,0 +1,234 @@
+import numpy as np
+
+import torch
+
+
+class Compose(object):
+ def __init__(self, transforms):
+ self.transforms = transforms
+
+ def __call__(self, points, color):
+ for t in self.transforms:
+            points, color = t(points, color)
+ return points, color
+
+ def __repr__(self):
+ return 'Compose(\n' + '\n'.join(['\t' + t.__repr__() + ',' for t in self.transforms]) + '\n)'
+
+
+class ToTensor(object):
+ def __call__(self, data, label):
+ data = torch.from_numpy(data)
+ if not isinstance(data, torch.FloatTensor):
+ data = data.float()
+ label = torch.from_numpy(label)
+ if not isinstance(label, torch.LongTensor):
+ label = label.long()
+ return data, label
+
+
+class RandomRotate(object):
+ def __init__(self, rotate_angle=None, along_z=True, color_rotate=False):
+ self.rotate_angle = rotate_angle
+ self.along_z = along_z
+ self.color_rotate = color_rotate
+
+ def __call__(self, points, color):
+ if self.rotate_angle is None:
+ rotate_angle = np.random.uniform() * 2 * np.pi
+ else:
+ rotate_angle = self.rotate_angle
+ cosval, sinval = np.cos(rotate_angle), np.sin(rotate_angle)
+ if self.along_z:
+ rotation_matrix = np.array([[cosval, sinval, 0], [-sinval, cosval, 0], [0, 0, 1]])
+ else:
+ rotation_matrix = np.array([[cosval, 0, sinval], [0, 1, 0], [-sinval, 0, cosval]])
+ points[:, 0:3] = np.dot(points[:, 0:3], rotation_matrix)
+ if self.color_rotate:
+ color[:, 0:3] = np.dot(color[:, 0:3], rotation_matrix)
+ return points, color
+
+ def __repr__(self):
+ return 'RandomRotate(rotate_angle: {}, along_z: {})'.format(self.rotate_angle, self.along_z)
+
+
+class RandomRotatePerturbation(object):
+ def __init__(self, angle_sigma=0.06, angle_clip=0.18):
+ self.angle_sigma = angle_sigma
+ self.angle_clip = angle_clip
+
+ def __call__(self, data, label):
+ angles = np.clip(self.angle_sigma*np.random.randn(3), -self.angle_clip, self.angle_clip)
+ Rx = np.array([[1, 0, 0],
+ [0, np.cos(angles[0]), -np.sin(angles[0])],
+ [0, np.sin(angles[0]), np.cos(angles[0])]])
+ Ry = np.array([[np.cos(angles[1]), 0, np.sin(angles[1])],
+ [0, 1, 0],
+ [-np.sin(angles[1]), 0, np.cos(angles[1])]])
+ Rz = np.array([[np.cos(angles[2]), -np.sin(angles[2]), 0],
+ [np.sin(angles[2]), np.cos(angles[2]), 0],
+ [0, 0, 1]])
+ R = np.dot(Rz, np.dot(Ry, Rx))
+ data[:, 0:3] = np.dot(data[:, 0:3], R)
+ if data.shape[1] > 3: # use normal
+ data[:, 3:6] = np.dot(data[:, 3:6], R)
+ return data, label
+
+
+class RandomScale(object):
+ def __init__(self, scale_low=0.8, scale_high=1.2):
+ self.scale_low = scale_low
+ self.scale_high = scale_high
+
+ def __call__(self, points, color):
+ scale = np.random.uniform(self.scale_low, self.scale_high)
+ points[:, 0:3] *= scale
+ return points, color
+
+ def __repr__(self):
+ return 'RandomScale(scale_low: {}, scale_high: {})'.format(self.scale_low, self.scale_high)
+
+
+class RandomShift(object):
+ def __init__(self, shift_range=0.1):
+ self.shift_range = shift_range
+
+ def __call__(self, points, color):
+ shift = np.random.uniform(-self.shift_range, self.shift_range, 3)
+ points[:, 0:3] += shift
+ return points, color
+
+ def __repr__(self):
+ return 'RandomShift(shift_range: {})'.format(self.shift_range)
+
+
+class RandomJitter(object):
+ def __init__(self, sigma=0.01, clip=0.05):
+ self.sigma = sigma
+ self.clip = clip
+
+ def __call__(self, points, color):
+ assert (self.clip > 0)
+ jitter = np.clip(self.sigma * np.random.randn(points.shape[0], 3), -1 * self.clip, self.clip)
+ points[:, 0:3] += jitter
+ return points, color
+
+ def __repr__(self):
+ return 'RandomJitter(sigma: {}, clip: {})'.format(self.sigma, self.clip)
+
+
+class ChromaticAutoContrast(object):
+ def __init__(self, p=0.2, blend_factor=None):
+ self.p = p
+ self.blend_factor = blend_factor
+
+ def __call__(self, points, color):
+ if np.random.rand() < self.p:
+ lo = np.min(color, axis=0, keepdims=True)
+ hi = np.max(color, axis=0, keepdims=True)
+ scale = 255 / (hi - lo)
+ contrast_color = (color - lo) * scale
+ blend_factor = np.random.rand() if self.blend_factor is None else self.blend_factor
+ color = (1 - blend_factor) * color + blend_factor * contrast_color
+ return points, color
+
+
+class ChromaticTranslation(object):
+ def __init__(self, p=0.95, ratio=0.05):
+ self.p = p
+ self.ratio = ratio
+
+ def __call__(self, points, color):
+ if np.random.rand() < self.p:
+ tr = (np.random.rand(1, 3) - 0.5) * 255 * 2 * self.ratio
+ color = np.clip(tr + color, 0, 255)
+ return points, color
+
+
+class ChromaticJitter(object):
+ def __init__(self, p=0.95, std=0.005):
+ self.p = p
+ self.std = std
+
+ def __call__(self, points, color):
+ if np.random.rand() < self.p:
+ noise = np.random.randn(color.shape[0], 3)
+ noise *= self.std * 255
+ color[:, :3] = np.clip(noise + color[:, :3], 0, 255)
+ return points, color
+
+
+class HueSaturationTranslation(object):
+ @staticmethod
+ def rgb_to_hsv(rgb):
+ # Translated from source of colorsys.rgb_to_hsv
+ # r,g,b should be a numpy arrays with values between 0 and 255
+ # rgb_to_hsv returns an array of floats between 0.0 and 1.0.
+ rgb = rgb.astype('float')
+ hsv = np.zeros_like(rgb)
+ # in case an RGBA array was passed, just copy the A channel
+ hsv[..., 3:] = rgb[..., 3:]
+ r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
+ maxc = np.max(rgb[..., :3], axis=-1)
+ minc = np.min(rgb[..., :3], axis=-1)
+ hsv[..., 2] = maxc
+ mask = maxc != minc
+ hsv[mask, 1] = (maxc - minc)[mask] / maxc[mask]
+ rc = np.zeros_like(r)
+ gc = np.zeros_like(g)
+ bc = np.zeros_like(b)
+ rc[mask] = (maxc - r)[mask] / (maxc - minc)[mask]
+ gc[mask] = (maxc - g)[mask] / (maxc - minc)[mask]
+ bc[mask] = (maxc - b)[mask] / (maxc - minc)[mask]
+ hsv[..., 0] = np.select([r == maxc, g == maxc], [bc - gc, 2.0 + rc - bc], default=4.0 + gc - rc)
+ hsv[..., 0] = (hsv[..., 0] / 6.0) % 1.0
+ return hsv
+
+ @staticmethod
+ def hsv_to_rgb(hsv):
+ # Translated from source of colorsys.hsv_to_rgb
+ # h,s should be a numpy arrays with values between 0.0 and 1.0
+ # v should be a numpy array with values between 0.0 and 255.0
+ # hsv_to_rgb returns an array of uints between 0 and 255.
+ rgb = np.empty_like(hsv)
+ rgb[..., 3:] = hsv[..., 3:]
+ h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
+ i = (h * 6.0).astype('uint8')
+ f = (h * 6.0) - i
+ p = v * (1.0 - s)
+ q = v * (1.0 - s * f)
+ t = v * (1.0 - s * (1.0 - f))
+ i = i % 6
+ conditions = [s == 0.0, i == 1, i == 2, i == 3, i == 4, i == 5]
+ rgb[..., 0] = np.select(conditions, [v, q, p, p, t, v], default=v)
+ rgb[..., 1] = np.select(conditions, [v, v, v, q, p, p], default=t)
+ rgb[..., 2] = np.select(conditions, [v, p, t, v, v, q], default=p)
+ return rgb.astype('uint8')
+
+ def __init__(self, hue_max=0.5, saturation_max=0.2):
+ self.hue_max = hue_max
+ self.saturation_max = saturation_max
+
+ def __call__(self, points, color):
+ # Assume color[:, :3] is rgb
+ hsv = HueSaturationTranslation.rgb_to_hsv(color[:, :3])
+ hue_val = (np.random.rand() - 0.5) * 2 * self.hue_max
+ sat_ratio = 1 + (np.random.rand() - 0.5) * 2 * self.saturation_max
+ hsv[..., 0] = np.remainder(hue_val + hsv[..., 0] + 1, 1)
+ hsv[..., 1] = np.clip(sat_ratio * hsv[..., 1], 0, 1)
+ color[:, :3] = np.clip(HueSaturationTranslation.hsv_to_rgb(hsv), 0, 255)
+ return points, color
+
+
+class RandomDropColor(object):
+ def __init__(self, p=0.8, color_augment=0.0):
+ self.p = p
+ self.color_augment = color_augment
+
+ def __call__(self, points, color):
+ if color is not None and np.random.rand() > self.p:
+ color *= self.color_augment
+ return points, color
+
+ def __repr__(self):
+ return 'RandomDropColor(color_augment: {}, p: {})'.format(self.color_augment, self.p)
\ No newline at end of file
diff --git a/zoo/PAConv/scene_seg/util/util.py b/zoo/PAConv/scene_seg/util/util.py
new file mode 100755
index 0000000..17195d2
--- /dev/null
+++ b/zoo/PAConv/scene_seg/util/util.py
@@ -0,0 +1,183 @@
+import os
+import numpy as np
+from PIL import Image
+import logging
+import argparse
+
+import torch
+from torch import nn
+from torch.nn.modules.conv import _ConvNd
+from torch.nn.modules.batchnorm import _BatchNorm
+import torch.nn.init as initer
+
+from . import config
+
+
+class AverageMeter(object):
+ """Computes and stores the average and current value"""
+ def __init__(self):
+ self.reset()
+
+ def reset(self):
+ self.val = 0
+ self.avg = 0
+ self.sum = 0
+ self.count = 0
+
+ def update(self, val, n=1):
+ self.val = val
+ self.sum += val * n
+ self.count += n
+ self.avg = self.sum / self.count
+
+
+def step_learning_rate(optimizer, base_lr, epoch, step_epoch, multiplier=0.1, clip=1e-6):
+ """Sets the learning rate to the base LR decayed by 10 every step epochs"""
+ lr = max(base_lr * (multiplier ** (epoch // step_epoch)), clip)
+ for param_group in optimizer.param_groups:
+ param_group['lr'] = lr
+
+
+def poly_learning_rate(optimizer, base_lr, curr_iter, max_iter, power=0.9):
+ """poly learning rate policy"""
+ lr = base_lr * (1 - float(curr_iter) / max_iter) ** power
+ for param_group in optimizer.param_groups:
+ param_group['lr'] = lr
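+
+# Worked example: with base_lr=0.05, power=0.9 and curr_iter at half of max_iter,
+# lr = 0.05 * 0.5 ** 0.9 ≈ 0.0268, decaying smoothly toward 0 at max_iter.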
+
+
+def intersectionAndUnion(output, target, K, ignore_index=255):
+ # 'K' classes, output and target sizes are N or N * L or N * H * W, each value in range 0 to K - 1.
+ assert (output.ndim in [1, 2, 3])
+ assert output.shape == target.shape
+ output = output.reshape(output.size).copy()
+ target = target.reshape(target.size)
+ output[np.where(target == ignore_index)[0]] = 255
+ intersection = output[np.where(output == target)[0]]
+ area_intersection, _ = np.histogram(intersection, bins=np.arange(K+1))
+ area_output, _ = np.histogram(output, bins=np.arange(K+1))
+ area_target, _ = np.histogram(target, bins=np.arange(K+1))
+ area_union = area_output + area_target - area_intersection
+ return area_intersection, area_union, area_target
+
+
+def intersectionAndUnionGPU(output, target, K, ignore_index=255):
+ # 'K' classes, output and target sizes are N or N * L or N * H * W, each value in range 0 to K - 1.
+ assert (output.dim() in [1, 2, 3])
+ assert output.shape == target.shape
+ output = output.view(-1)
+ target = target.view(-1)
+ output[target == ignore_index] = ignore_index
+ intersection = output[output == target]
+ # https://github.com/pytorch/pytorch/issues/1382
+ area_intersection = torch.histc(intersection.float().cpu(), bins=K, min=0, max=K-1)
+ area_output = torch.histc(output.float().cpu(), bins=K, min=0, max=K-1)
+ area_target = torch.histc(target.float().cpu(), bins=K, min=0, max=K-1)
+ area_union = area_output + area_target - area_intersection
+ return area_intersection.cuda(), area_union.cuda(), area_target.cuda()
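+
+# Typical per-batch usage (as in the training loop above):
+#   pred = output.max(1)[1]
+#   inter, union, tgt = intersectionAndUnionGPU(pred, target, K=args.classes,
+#                                               ignore_index=args.ignore_label)
+#   iou_per_class = inter / (union + 1e-10)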
+
+
+def check_mkdir(dir_name):
+ if not os.path.exists(dir_name):
+ os.mkdir(dir_name)
+
+
+def check_makedirs(dir_name):
+ if not os.path.exists(dir_name):
+ os.makedirs(dir_name)
+
+
+def init_weights(model, conv='kaiming', batchnorm='normal', linear='kaiming', lstm='kaiming'):
+ """
+ :param model: Pytorch Model which is nn.Module
+ :param conv: 'kaiming' or 'xavier'
+ :param batchnorm: 'normal' or 'constant'
+ :param linear: 'kaiming' or 'xavier'
+ :param lstm: 'kaiming' or 'xavier'
+ """
+ for m in model.modules():
+ if isinstance(m, (_ConvNd)):
+ if conv == 'kaiming':
+ initer.kaiming_normal_(m.weight)
+ elif conv == 'xavier':
+ initer.xavier_normal_(m.weight)
+ else:
+ raise ValueError("init type of conv error.\n")
+ if m.bias is not None:
+ initer.constant_(m.bias, 0)
+
+ elif isinstance(m, _BatchNorm):
+ if batchnorm == 'normal':
+ initer.normal_(m.weight, 1.0, 0.02)
+ elif batchnorm == 'constant':
+ initer.constant_(m.weight, 1.0)
+ else:
+ raise ValueError("init type of batchnorm error.\n")
+ initer.constant_(m.bias, 0.0)
+
+ elif isinstance(m, nn.Linear):
+ if linear == 'kaiming':
+ initer.kaiming_normal_(m.weight)
+ elif linear == 'xavier':
+ initer.xavier_normal_(m.weight)
+ else:
+ raise ValueError("init type of linear error.\n")
+ if m.bias is not None:
+ initer.constant_(m.bias, 0)
+
+ elif isinstance(m, nn.LSTM):
+ for name, param in m.named_parameters():
+ if 'weight' in name:
+ if lstm == 'kaiming':
+ initer.kaiming_normal_(param)
+ elif lstm == 'xavier':
+ initer.xavier_normal_(param)
+ else:
+ raise ValueError("init type of lstm error.\n")
+ elif 'bias' in name:
+ initer.constant_(param, 0)
+
+
+def convert_to_syncbn(model):
+ def recursive_set(cur_module, name, module):
+ if len(name.split('.')) > 1:
+ recursive_set(getattr(cur_module, name[:name.find('.')]), name[name.find('.')+1:], module)
+ else:
+ setattr(cur_module, name, module)
+ from lib.sync_bn import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d
+ for name, m in model.named_modules():
+ if isinstance(m, nn.BatchNorm1d):
+ recursive_set(model, name, SynchronizedBatchNorm1d(m.num_features, m.eps, m.momentum, m.affine))
+ elif isinstance(m, nn.BatchNorm2d):
+ recursive_set(model, name, SynchronizedBatchNorm2d(m.num_features, m.eps, m.momentum, m.affine))
+ elif isinstance(m, nn.BatchNorm3d):
+ recursive_set(model, name, SynchronizedBatchNorm3d(m.num_features, m.eps, m.momentum, m.affine))
+
+
+def colorize(gray, palette):
+ # gray: numpy array of the label and 1*3N size list palette
+ color = Image.fromarray(gray.astype(np.uint8)).convert('P')
+ color.putpalette(palette)
+ return color
+
+
+def get_parser():
+ parser = argparse.ArgumentParser(description='PAConv: Point Cloud Semantic Segmentation')
+ parser.add_argument('--config', type=str, default='config/s3dis/s3dis_pointnet2_paconv.yaml', help='config file')
+ parser.add_argument('opts', help='see config/s3dis/s3dis_pointnet2_paconv.yaml for all options', default=None, nargs=argparse.REMAINDER)
+ args = parser.parse_args()
+ assert args.config is not None
+ cfg = config.load_cfg_from_cfg_file(args.config)
+ if args.opts is not None:
+ cfg = config.merge_cfg_from_list(cfg, args.opts)
+ return cfg
+
+
+def get_logger():
+ logger_name = "main-logger"
+ logger = logging.getLogger(logger_name)
+ logger.setLevel(logging.INFO)
+ handler = logging.StreamHandler()
+ fmt = "[%(asctime)s %(levelname)s %(filename)s line %(lineno)d %(process)d] %(message)s"
+ handler.setFormatter(logging.Formatter(fmt))
+ logger.addHandler(handler)
+ return logger
diff --git a/zoo/PCT/.gitignore b/zoo/PCT/.gitignore
new file mode 100644
index 0000000..e003ca0
--- /dev/null
+++ b/zoo/PCT/.gitignore
@@ -0,0 +1,3 @@
+.idea
+*/__pycache__/
+*.pyc
\ No newline at end of file
diff --git a/zoo/PCT/GDANet_cls.py b/zoo/PCT/GDANet_cls.py
new file mode 100644
index 0000000..047e829
--- /dev/null
+++ b/zoo/PCT/GDANet_cls.py
@@ -0,0 +1,118 @@
+import torch.nn as nn
+import torch
+import torch.nn.functional as F
+from util_lib.GDANet_util import local_operator, GDM, SGCAM
+
+
+class GDANET(nn.Module):
+ def __init__(self):
+ super(GDANET, self).__init__()
+
+ self.bn1 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn11 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn12 = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bn2 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn21 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn22 = nn.BatchNorm1d(64, momentum=0.1)
+
+ self.bn3 = nn.BatchNorm2d(128, momentum=0.1)
+ self.bn31 = nn.BatchNorm2d(128, momentum=0.1)
+ self.bn32 = nn.BatchNorm1d(128, momentum=0.1)
+
+ self.bn4 = nn.BatchNorm1d(512, momentum=0.1)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=True),
+ self.bn1)
+ self.conv11 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=True),
+ self.bn11)
+ self.conv12 = nn.Sequential(nn.Conv1d(64 * 2, 64, kernel_size=1, bias=True),
+ self.bn12)
+
+ self.conv2 = nn.Sequential(nn.Conv2d(67 * 2, 64, kernel_size=1, bias=True),
+ self.bn2)
+ self.conv21 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=True),
+ self.bn21)
+ self.conv22 = nn.Sequential(nn.Conv1d(64 * 2, 64, kernel_size=1, bias=True),
+ self.bn22)
+
+ self.conv3 = nn.Sequential(nn.Conv2d(131 * 2, 128, kernel_size=1, bias=True),
+ self.bn3)
+ self.conv31 = nn.Sequential(nn.Conv2d(128, 128, kernel_size=1, bias=True),
+ self.bn31)
+ self.conv32 = nn.Sequential(nn.Conv1d(128, 128, kernel_size=1, bias=True),
+ self.bn32)
+
+ self.conv4 = nn.Sequential(nn.Conv1d(256, 512, kernel_size=1, bias=True),
+ self.bn4)
+
+ self.SGCAM_1s = SGCAM(64)
+ self.SGCAM_1g = SGCAM(64)
+ self.SGCAM_2s = SGCAM(64)
+ self.SGCAM_2g = SGCAM(64)
+
+ self.linear1 = nn.Linear(1024, 512, bias=True)
+ self.bn6 = nn.BatchNorm1d(512)
+ self.dp1 = nn.Dropout(p=0.4)
+ self.linear2 = nn.Linear(512, 256, bias=True)
+ self.bn7 = nn.BatchNorm1d(256)
+ self.dp2 = nn.Dropout(p=0.4)
+ self.linear3 = nn.Linear(256, 40, bias=True)
+
+ def forward(self, x):
+ B, C, N = x.size()
+ ###############
+ """block 1"""
+ # Local operator:
+ x1 = local_operator(x, k=30)
+ x1 = F.relu(self.conv1(x1))
+ x1 = F.relu(self.conv11(x1))
+ x1 = x1.max(dim=-1, keepdim=False)[0]
+
+ # Geometry-Disentangle Module:
+ x1s, x1g = GDM(x1, M=256)
+
+ # Sharp-Gentle Complementary Attention Module:
+ y1s = self.SGCAM_1s(x1, x1s.transpose(2, 1))
+ y1g = self.SGCAM_1g(x1, x1g.transpose(2, 1))
+ z1 = torch.cat([y1s, y1g], 1)
+ z1 = F.relu(self.conv12(z1))
+ ###############
+ """block 2"""
+ x1t = torch.cat((x, z1), dim=1)
+ x2 = local_operator(x1t, k=30)
+ x2 = F.relu(self.conv2(x2))
+ x2 = F.relu(self.conv21(x2))
+ x2 = x2.max(dim=-1, keepdim=False)[0]
+
+ x2s, x2g = GDM(x2, M=256)
+
+ y2s = self.SGCAM_2s(x2, x2s.transpose(2, 1))
+ y2g = self.SGCAM_2g(x2, x2g.transpose(2, 1))
+ z2 = torch.cat([y2s, y2g], 1)
+ z2 = F.relu(self.conv22(z2))
+ ###############
+ x2t = torch.cat((x1t, z2), dim=1)
+ x3 = local_operator(x2t, k=30)
+ x3 = F.relu(self.conv3(x3))
+ x3 = F.relu(self.conv31(x3))
+ x3 = x3.max(dim=-1, keepdim=False)[0]
+ z3 = F.relu(self.conv32(x3))
+ ###############
+ x = torch.cat((z1, z2, z3), dim=1)
+ x = F.relu(self.conv4(x))
+ x11 = F.adaptive_max_pool1d(x, 1).view(B, -1)
+ x22 = F.adaptive_avg_pool1d(x, 1).view(B, -1)
+ x = torch.cat((x11, x22), 1)
+
+ x = F.relu(self.bn6(self.linear1(x)))
+ x = self.dp1(x)
+ x = F.relu(self.bn7(self.linear2(x)))
+ x = self.dp2(x)
+ x = self.linear3(x)
+ return x
+
+if __name__ == '__main__':
+ model = GDANET()
+ pytorch_total_params = sum(p.numel() for p in model.parameters())
+ print(pytorch_total_params/1e6)
\ No newline at end of file
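
A quick shape-level smoke test for the classifier above (a sketch; it assumes the repo's `util_lib.GDANet_util` module is on the import path):

```python
import torch
from GDANet_cls import GDANET

model = GDANET().eval()
points = torch.rand(2, 3, 1024)  # (batch, xyz, num_points); local_operator lifts 3 to 6 channels
with torch.no_grad():
    logits = model(points)
print(logits.shape)  # expected: torch.Size([2, 40]), one logit per ModelNet40 class
```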
diff --git a/zoo/PCT/LICENSE b/zoo/PCT/LICENSE
new file mode 100644
index 0000000..d838b49
--- /dev/null
+++ b/zoo/PCT/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2021 Strawberry-Eat-Mango
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/zoo/PCT/PointWOLF.py b/zoo/PCT/PointWOLF.py
new file mode 100644
index 0000000..cc1c415
--- /dev/null
+++ b/zoo/PCT/PointWOLF.py
@@ -0,0 +1,172 @@
+"""
+@origin : PointWOLF.py by {Sanghyeok Lee, Sihyeon Kim}
+@Contact: {cat0626, sh_bs15}@korea.ac.kr
+@Time: 2021.09.30
+"""
+
+import torch
+import torch.nn as nn
+import numpy as np
+
+class PointWOLF(object):
+ def __init__(self, args):
+ print("="*10 + "Using PointWolf" + "="*10)
+ self.num_anchor = args.w_num_anchor
+ self.sample_type = args.w_sample_type
+ self.sigma = args.w_sigma
+
+ self.R_range = (-abs(args.w_R_range), abs(args.w_R_range))
+ self.S_range = (1., args.w_S_range)
+ self.T_range = (-abs(args.w_T_range), abs(args.w_T_range))
+
+
+ def __call__(self, pos):
+ """
+ input :
+ pos([N,3])
+
+ output :
+ pos([N,3]) : original pointcloud
+            pos_new([N,3]) : Pointcloud augmented by PointWOLF
+ """
+ M=self.num_anchor #(Mx3)
+ N, _=pos.shape #(N)
+
+ if self.sample_type == 'random':
+ idx = np.random.choice(N,M)#(M)
+ elif self.sample_type == 'fps':
+ idx = self.fps(pos, M) #(M)
+
+ pos_anchor = pos[idx] #(M,3), anchor point
+
+ pos_repeat = np.expand_dims(pos,0).repeat(M, axis=0)#(M,N,3)
+ pos_normalize = np.zeros_like(pos_repeat, dtype=pos.dtype) #(M,N,3)
+
+ #Move to canonical space
+ pos_normalize = pos_repeat - pos_anchor.reshape(M,-1,3)
+
+ #Local transformation at anchor point
+        pos_transformed = self.local_transformation(pos_normalize) #(M,N,3)
+
+ #Move to origin space
+ pos_transformed = pos_transformed + pos_anchor.reshape(M,-1,3) #(M,N,3)
+
+ pos_new = self.kernel_regression(pos, pos_anchor, pos_transformed)
+ pos_new = self.normalize(pos_new)
+
+ return pos.astype('float32'), pos_new.astype('float32')
+
+
+ def kernel_regression(self, pos, pos_anchor, pos_transformed):
+ """
+ input :
+ pos([N,3])
+ pos_anchor([M,3])
+ pos_transformed([M,N,3])
+
+ output :
+ pos_new([N,3]) : Pointcloud after weighted local transformation
+ """
+ M, N, _ = pos_transformed.shape
+
+ #Distance between anchor points & entire points
+ sub = np.expand_dims(pos_anchor,1).repeat(N, axis=1) - np.expand_dims(pos,0).repeat(M, axis=0) #(M,N,3), d
+
+ project_axis = self.get_random_axis(1)
+
+ projection = np.expand_dims(project_axis, axis=1)*np.eye(3)#(1,3,3)
+
+ #Project distance
+ sub = sub @ projection # (M,N,3)
+ sub = np.sqrt(((sub) ** 2).sum(2)) #(M,N)
+
+ #Kernel regression
+ weight = np.exp(-0.5 * (sub ** 2) / (self.sigma ** 2)) #(M,N)
+ pos_new = (np.expand_dims(weight,2).repeat(3, axis=-1) * pos_transformed).sum(0) #(N,3)
+ pos_new = (pos_new / weight.sum(0, keepdims=True).T) # normalize by weight
+ return pos_new
+
+
+ def fps(self, pos, npoint):
+ """
+ input :
+ pos([N,3])
+ npoint(int)
+
+ output :
+            centroids([npoint]) : index list for fps
+ """
+ N, _ = pos.shape
+ centroids = np.zeros(npoint, dtype=np.int_) #(M)
+ distance = np.ones(N, dtype=np.float64) * 1e10 #(N)
+ farthest = np.random.randint(0, N, (1,), dtype=np.int_)
+ for i in range(npoint):
+ centroids[i] = farthest
+ centroid = pos[farthest, :]
+ dist = ((pos - centroid)**2).sum(-1)
+ mask = dist < distance
+ distance[mask] = dist[mask]
+ farthest = distance.argmax()
+ return centroids
+
+    def local_transformation(self, pos_normalize):
+ """
+ input :
+            pos_normalize([M,N,3])
+
+ output :
+ pos_normalize([M,N,3]) : Pointclouds after local transformation centered at M anchor points.
+ """
+ M,N,_ = pos_normalize.shape
+ transformation_dropout = np.random.binomial(1, 0.5, (M,3)) #(M,3)
+ transformation_axis =self.get_random_axis(M) #(M,3)
+
+ degree = np.pi * np.random.uniform(*self.R_range, size=(M,3)) / 180.0 * transformation_dropout[:,0:1] #(M,3), sampling from (-R_range, R_range)
+
+ scale = np.random.uniform(*self.S_range, size=(M,3)) * transformation_dropout[:,1:2] #(M,3), sampling from (1, S_range)
+ scale = scale*transformation_axis
+        scale = scale + 1*(scale==0)  # axes not selected for scaling fall back to the identity factor 1
+
+        trl = np.random.uniform(*self.T_range, size=(M,3)) * transformation_dropout[:,2:3] #(M,3), sampling from (-T_range, T_range)
+ trl = trl*transformation_axis
+
+ #Scaling Matrix
+        S = np.expand_dims(scale, axis=1)*np.eye(3)  # scaling factors to diagonal matrices, (M,3) -> (M,3,3)
+ #Rotation Matrix
+ sin = np.sin(degree)
+ cos = np.cos(degree)
+ sx, sy, sz = sin[:,0], sin[:,1], sin[:,2]
+ cx, cy, cz = cos[:,0], cos[:,1], cos[:,2]
+ R = np.stack([cz*cy, cz*sy*sx - sz*cx, cz*sy*cx + sz*sx,
+ sz*cy, sz*sy*sx + cz*cy, sz*sy*cx - cz*sx,
+ -sy, cy*sx, cy*cx], axis=1).reshape(M,3,3)
+
+ pos_normalize = pos_normalize@R@S + trl.reshape(M,1,3)
+ return pos_normalize
+
+ def get_random_axis(self, n_axis):
+ """
+ input :
+ n_axis(int)
+
+ output :
+ axis([n_axis,3]) : projection axis
+ """
+ axis = np.random.randint(1,8, (n_axis)) # 1(001):z, 2(010):y, 3(011):yz, 4(100):x, 5(101):xz, 6(110):xy, 7(111):xyz
+ m = 3
+ axis = (((axis[:,None] & (1 << np.arange(m)))) > 0).astype(int)
+ return axis
+
+ def normalize(self, pos):
+ """
+ input :
+ pos([N,3])
+
+ output :
+ pos([N,3]) : normalized Pointcloud
+ """
+ pos = pos - pos.mean(axis=-2, keepdims=True)
+ scale = (1 / np.sqrt((pos ** 2).sum(1)).max()) * 0.999999
+ pos = scale * pos
+ return pos
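
`PointWOLF` is constructed from an argparse namespace; a minimal standalone sketch using the same `w_*` defaults as `main.py`:

```python
from types import SimpleNamespace

import numpy as np

from PointWOLF import PointWOLF

args = SimpleNamespace(w_num_anchor=4, w_sample_type='fps', w_sigma=0.5,
                       w_R_range=10, w_S_range=3, w_T_range=0.25)
aug = PointWOLF(args)

pos = np.random.rand(1024, 3).astype('float32')
orig, pos_new = aug(pos)  # both (1024, 3); pos_new is the locally deformed copy
```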
diff --git a/zoo/PCT/README.md b/zoo/PCT/README.md
new file mode 100644
index 0000000..f299162
--- /dev/null
+++ b/zoo/PCT/README.md
@@ -0,0 +1,48 @@
+## PCT: Point Cloud Transformer
+This is a PyTorch implementation of PCT: Point Cloud Transformer.
+
+Paper link: https://arxiv.org/pdf/2012.09688.pdf
+
+### Requirements
+python >= 3.7
+
+pytorch >= 1.6
+
+h5py
+
+scikit-learn
+
+and
+
+```shell script
+pip install pointnet2_ops_lib/.
+```
+The code is adapted from https://github.com/erikwijmans/Pointnet2_PyTorch, https://github.com/WangYueFt/dgcnn, and https://github.com/MenghaoGuo/PCT.
+
+### Models
+We achieve an accuracy of 93.2% on the ModelNet40 (http://modelnet.cs.princeton.edu/) validation set.
+
+The pretrained model is stored at ./checkpoints/best/models/model.t7
+
+### Example training and testing
+```shell script
+# train
+python main.py --exp_name=train --num_points=1024 --use_sgd=True --batch_size 32 --epochs 250 --lr 0.0001
+
+# test
+python main.py --exp_name=test --num_points=1024 --use_sgd=True --eval=True --model_path=checkpoints/best/models/model.t7 --test_batch_size 8
+
+```
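+
+To evaluate robustness on ModelNet-C, `main.py` also exposes an `--eval_corrupt` flag (a hedged example; it assumes ModelNet-C has been prepared as described in the repository's data-preparation guide):
+```shell script
+# evaluate under corruptions
+python main.py --exp_name=test_corrupt --num_points=1024 --eval_corrupt=True --model_path=checkpoints/best/models/model.t7 --test_batch_size 8
+```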
+
+### Citation
+If it is helpful for your work, please cite this paper:
+```latex
+@misc{guo2020pct,
+ title={PCT: Point Cloud Transformer},
+ author={Meng-Hao Guo and Jun-Xiong Cai and Zheng-Ning Liu and Tai-Jiang Mu and Ralph R. Martin and Shi-Min Hu},
+ year={2020},
+ eprint={2012.09688},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+}
+```
diff --git a/zoo/PCT/checkpoints/best/models/model.t7 b/zoo/PCT/checkpoints/best/models/model.t7
new file mode 100644
index 0000000..2ecf47a
Binary files /dev/null and b/zoo/PCT/checkpoints/best/models/model.t7 differ
diff --git a/zoo/PCT/data.py b/zoo/PCT/data.py
new file mode 100644
index 0000000..c0cb2ef
--- /dev/null
+++ b/zoo/PCT/data.py
@@ -0,0 +1,89 @@
+import os
+import glob
+import h5py
+import numpy as np
+from torch.utils.data import Dataset
+from PointWOLF import PointWOLF
+
+def download():
+ BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+ DATA_DIR = os.path.join(BASE_DIR, 'data')
+ if not os.path.exists(DATA_DIR):
+ os.mkdir(DATA_DIR)
+ if not os.path.exists(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048')):
+ www = 'https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip'
+ zipfile = os.path.basename(www)
+ os.system('wget %s; unzip %s' % (www, zipfile))
+ os.system('mv %s %s' % (zipfile[:-4], DATA_DIR))
+ os.system('rm %s' % (zipfile))
+
+def load_data(partition):
+ download()
+ BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+ DATA_DIR = os.path.join(BASE_DIR, 'data')
+ all_data = []
+ all_label = []
+ for h5_name in glob.glob(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048', 'ply_data_%s*.h5'%partition)):
+        f = h5py.File(h5_name, 'r')  # read-only; recent h5py versions require an explicit mode
+ data = f['data'][:].astype('float32')
+ label = f['label'][:].astype('int64')
+ f.close()
+ all_data.append(data)
+ all_label.append(label)
+ all_data = np.concatenate(all_data, axis=0)
+ all_label = np.concatenate(all_label, axis=0)
+ return all_data, all_label
+
+def random_point_dropout(pc, max_dropout_ratio=0.875):
+    ''' pc: Nx3 point cloud; randomly drops points by collapsing them onto the first point '''
+ # for b in range(batch_pc.shape[0]):
+ dropout_ratio = np.random.random()*max_dropout_ratio # 0~0.875
+ drop_idx = np.where(np.random.random((pc.shape[0]))<=dropout_ratio)[0]
+ # print ('use random drop', len(drop_idx))
+
+ if len(drop_idx)>0:
+ pc[drop_idx,:] = pc[0,:] # set to the first point
+ return pc
+
+def translate_pointcloud(pointcloud):
+ xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])
+ xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])
+
+ translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')
+ return translated_pointcloud
+
+def jitter_pointcloud(pointcloud, sigma=0.01, clip=0.02):
+ N, C = pointcloud.shape
+ pointcloud += np.clip(sigma * np.random.randn(N, C), -1*clip, clip)
+ return pointcloud
+
+
+class ModelNet40(Dataset):
+ def __init__(self, num_points, partition='train', args=None):
+ self.data, self.label = load_data(partition)
+ self.num_points = num_points
+ self.partition = partition
+ self.PointWOLF = PointWOLF(args) if args is not None else None
+
+
+ def __getitem__(self, item):
+ pointcloud = self.data[item][:self.num_points]
+ label = self.label[item]
+ if self.partition == 'train':
+ np.random.shuffle(pointcloud)
+ if self.PointWOLF is not None:
+ _, pointcloud = self.PointWOLF(pointcloud)
+            # pointcloud = random_point_dropout(pointcloud)  # enabled for DGCNN, not used for this model
+ # pointcloud = translate_pointcloud(pointcloud)
+ return pointcloud, label
+
+ def __len__(self):
+ return self.data.shape[0]
+
+
+if __name__ == '__main__':
+ train = ModelNet40(1024)
+ test = ModelNet40(1024, 'test')
+ for data, label in train:
+ print(data.shape)
+ print(label.shape)
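
The two conventional augmentations above can be sanity-checked in isolation; a small sketch:

```python
import numpy as np

from data import jitter_pointcloud, translate_pointcloud

pc = np.random.rand(1024, 3).astype('float32')
pc_t = translate_pointcloud(pc.copy())  # per-axis scale in [2/3, 3/2] plus shift in [-0.2, 0.2]
pc_j = jitter_pointcloud(pc.copy())     # Gaussian noise (sigma=0.01) clipped to +/-0.02
assert pc_t.shape == pc_j.shape == (1024, 3)
```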
diff --git a/zoo/PCT/main.py b/zoo/PCT/main.py
new file mode 100644
index 0000000..057af79
--- /dev/null
+++ b/zoo/PCT/main.py
@@ -0,0 +1,314 @@
+from __future__ import print_function
+import os
+import argparse
+import torch
+import torch.nn as nn
+import torch.optim as optim
+from torch.optim.lr_scheduler import CosineAnnealingLR
+from data import ModelNet40
+from model import Pct, RPC
+import numpy as np
+from torch.utils.data import DataLoader
+from util import cal_loss, IOStream
+import sklearn.metrics as metrics
+import rsmix_provider
+import time
+from modelnetc_utils import eval_corrupt_wrapper, ModelNetC
+
+
+def _init_():
+ if not os.path.exists('checkpoints'):
+ os.makedirs('checkpoints')
+ if not os.path.exists('checkpoints/' + args.exp_name):
+ os.makedirs('checkpoints/' + args.exp_name)
+ if not os.path.exists('checkpoints/' + args.exp_name + '/' + 'models'):
+ os.makedirs('checkpoints/' + args.exp_name + '/' + 'models')
+ os.system('cp main.py checkpoints' + '/' + args.exp_name + '/' + 'main.py.backup')
+ os.system('cp model.py checkpoints' + '/' + args.exp_name + '/' + 'model.py.backup')
+ os.system('cp util.py checkpoints' + '/' + args.exp_name + '/' + 'util.py.backup')
+ os.system('cp data.py checkpoints' + '/' + args.exp_name + '/' + 'data.py.backup')
+
+
+def train(args, io):
+ train_loader = DataLoader(ModelNet40(partition='train', num_points=args.num_points, args=args if args.pw else None),
+ num_workers=8,
+ batch_size=args.batch_size, shuffle=True, drop_last=True)
+ test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points), num_workers=8,
+ batch_size=args.test_batch_size, shuffle=True, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ if args.model == 'RPC':
+ model = RPC(args).to(device)
+ else:
+ model = Pct(args).to(device)
+ print(str(model))
+ model = nn.DataParallel(model)
+
+ if args.use_sgd:
+ print("Use SGD")
+ opt = optim.SGD(model.parameters(), lr=args.lr * 100, momentum=args.momentum, weight_decay=5e-4)
+ else:
+ print("Use Adam")
+ opt = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-4)
+
+ scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=args.lr)
+
+ criterion = cal_loss
+ best_test_acc = 0
+
+ for epoch in range(args.epochs):
+ scheduler.step()
+ train_loss = 0.0
+ count = 0.0
+ model.train()
+ train_pred = []
+ train_true = []
+ idx = 0
+ total_time = 0.0
+ for data, label in train_loader:
+            '''
+            apply data augmentation (RSMix and optional conventional augmentations)
+            '''
+
+ rsmix = False
+ r = np.random.rand(1)
+ if args.beta > 0 and r < args.rsmix_prob:
+ rsmix = True
+ data, lam, label, label_b = rsmix_provider.rsmix(data, label, beta=args.beta, n_sample=args.nsample,
+ KNN=args.knn)
+            if args.rot or args.rdscale or args.shift or args.jitter or args.shuffle or args.rddrop or (
+                    args.beta != 0.0):
+ data = torch.FloatTensor(data)
+ if rsmix:
+ lam = torch.FloatTensor(lam)
+ lam, label_b = lam.to(device), label_b.to(device).squeeze()
+
+ data, label = data.to(device), label.to(device).squeeze()
+ if rsmix:
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ opt.zero_grad()
+
+ start_time = time.time()
+ logits = model(data)
+ loss = 0
+ for i in range(batch_size):
+ loss_tmp = criterion(logits[i].unsqueeze(0), label[i].unsqueeze(0).long()) * (1 - lam[i]) \
+ + criterion(logits[i].unsqueeze(0), label_b[i].unsqueeze(0).long()) * lam[i]
+ loss += loss_tmp
+ loss = loss / batch_size
+ else:
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ opt.zero_grad()
+ start_time = time.time()
+ logits = model(data)
+ loss = criterion(logits, label)
+
+ loss.backward()
+ opt.step()
+ end_time = time.time()
+ total_time += (end_time - start_time)
+
+ preds = logits.max(dim=1)[1]
+ count += batch_size
+ train_loss += loss.item() * batch_size
+ train_true.append(label.cpu().numpy())
+ train_pred.append(preds.detach().cpu().numpy())
+
+ print('train total time is', total_time)
+ train_true = np.concatenate(train_true)
+ train_pred = np.concatenate(train_pred)
+ outstr = 'Train %d, loss: %.6f, train acc: %.6f, train avg acc: %.6f' % (epoch,
+ train_loss * 1.0 / count,
+ metrics.accuracy_score(
+ train_true, train_pred),
+ metrics.balanced_accuracy_score(
+ train_true, train_pred))
+ io.cprint(outstr)
+
+ ####################
+ # Test
+ ####################
+ test_loss = 0.0
+ count = 0.0
+ model.eval()
+ test_pred = []
+ test_true = []
+ total_time = 0.0
+ for data, label in test_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ batch_size = data.size()[0]
+ start_time = time.time()
+ logits = model(data)
+ end_time = time.time()
+ total_time += (end_time - start_time)
+ loss = criterion(logits, label)
+ preds = logits.max(dim=1)[1]
+ count += batch_size
+ test_loss += loss.item() * batch_size
+ test_true.append(label.cpu().numpy())
+ test_pred.append(preds.detach().cpu().numpy())
+ print('test total time is', total_time)
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)
+ outstr = 'Test %d, loss: %.6f, test acc: %.6f, test avg acc: %.6f' % (epoch,
+ test_loss * 1.0 / count,
+ test_acc,
+ avg_per_class_acc)
+ io.cprint(outstr)
+ if test_acc >= best_test_acc:
+ best_test_acc = test_acc
+ torch.save(model.state_dict(), 'checkpoints/%s/models/model.t7' % args.exp_name)
+ torch.save(model.state_dict(), 'checkpoints/%s/models/model_final.t7' % args.exp_name)
+
+
+def test(args, io):
+ test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points),
+ batch_size=args.test_batch_size, shuffle=True, drop_last=False)
+
+ device = torch.device("cuda" if args.cuda else "cpu")
+
+ model = Pct(args).to(device)
+ model = nn.DataParallel(model)
+
+ model.load_state_dict(torch.load(args.model_path))
+ model = model.eval()
+ test_true = []
+ test_pred = []
+
+ for data, label in test_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ logits = model(data)
+ preds = logits.max(dim=1)[1]
+ if args.test_batch_size == 1:
+ test_true.append([label.cpu().numpy()])
+ test_pred.append([preds.detach().cpu().numpy()])
+ else:
+ test_true.append(label.cpu().numpy())
+ test_pred.append(preds.detach().cpu().numpy())
+
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)
+ outstr = 'Test :: test acc: %.6f, test avg acc: %.6f' % (test_acc, avg_per_class_acc)
+ io.cprint(outstr)
+
+
+if __name__ == "__main__":
+ # Training settings
+ parser = argparse.ArgumentParser(description='Point Cloud Recognition')
+ parser.add_argument('--exp_name', type=str, default='exp', metavar='N',
+ help='Name of the experiment')
+ parser.add_argument('--dataset', type=str, default='modelnet40', metavar='N',
+ choices=['modelnet40'])
+    parser.add_argument('--batch_size', type=int, default=32, metavar='batch_size',
+                        help='size of the training batch')
+    parser.add_argument('--test_batch_size', type=int, default=16, metavar='batch_size',
+                        help='size of the test batch')
+    parser.add_argument('--epochs', type=int, default=250, metavar='N',
+                        help='number of epochs to train')
+ parser.add_argument('--use_sgd', type=bool, default=True,
+ help='Use SGD')
+    parser.add_argument('--lr', type=float, default=0.0001, metavar='LR',
+                        help='learning rate (default: 0.0001; multiplied by 100 when using SGD)')
+ parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
+ help='SGD momentum (default: 0.9)')
+    parser.add_argument('--no_cuda', type=bool, default=False,
+                        help='disables CUDA training')
+ parser.add_argument('--seed', type=int, default=1, metavar='S',
+ help='random seed (default: 1)')
+ parser.add_argument('--eval', type=bool, default=False,
+ help='evaluate the model')
+ parser.add_argument('--eval_corrupt', type=bool, default=False,
+ help='evaluate the model under corruption')
+ parser.add_argument('--num_points', type=int, default=1024,
+ help='num of points to use')
+ parser.add_argument('--dropout', type=float, default=0.5,
+ help='dropout rate')
+ parser.add_argument('--model_path', type=str, default='', metavar='N',
+ help='Pretrained model path')
+ parser.add_argument('--model', type=str, default='PCT', choices=['RPC', 'PCT'], help='choose model')
+
+ # pointwolf
+ parser.add_argument('--pw', action='store_true', help='use PointWOLF')
+ parser.add_argument('--w_num_anchor', type=int, default=4, help='Num of anchor point')
+ parser.add_argument('--w_sample_type', type=str, default='fps',
+ help='Sampling method for anchor point, option : (fps, random)')
+ parser.add_argument('--w_sigma', type=float, default=0.5, help='Kernel bandwidth')
+
+ parser.add_argument('--w_R_range', type=float, default=10, help='Maximum rotation range of local transformation')
+    parser.add_argument('--w_S_range', type=float, default=3, help='Maximum scaling range of local transformation')
+ parser.add_argument('--w_T_range', type=float, default=0.25,
+ help='Maximum translation range of local transformation')
+
+ # rsmix
+ parser.add_argument('--rdscale', action='store_true', help='random scaling data augmentation')
+ parser.add_argument('--shift', action='store_true', help='random shift data augmentation')
+ parser.add_argument('--shuffle', action='store_true', help='random shuffle data augmentation')
+ parser.add_argument('--rot', action='store_true', help='random rotation augmentation')
+ parser.add_argument('--jitter', action='store_true', help='jitter augmentation')
+ parser.add_argument('--rddrop', action='store_true', help='random point drop data augmentation')
+ parser.add_argument('--rsmix_prob', type=float, default=0.5, help='rsmix probability')
+ parser.add_argument('--beta', type=float, default=0.0, help='scalar value for beta function')
+    parser.add_argument('--nsample', type=int, default=512,
+                        help='max number of points erased or added by rsmix')
+    parser.add_argument('--knn', action='store_true', help='use kNN instead of the ball-query function')
+
+ args = parser.parse_args()
+
+ _init_()
+
+ io = IOStream('checkpoints/' + args.exp_name + '/run.log')
+ io.cprint(str(args))
+
+ args.cuda = not args.no_cuda and torch.cuda.is_available()
+ torch.manual_seed(args.seed)
+ if args.cuda:
+ io.cprint(
+ 'Using GPU : ' + str(torch.cuda.current_device()) + ' from ' + str(torch.cuda.device_count()) + ' devices')
+ torch.cuda.manual_seed(args.seed)
+ else:
+ io.cprint('Using CPU')
+
+ if not args.eval and not args.eval_corrupt:
+ train(args, io)
+ elif args.eval:
+ test(args, io)
+ elif args.eval_corrupt:
+ device = torch.device("cuda" if args.cuda else "cpu")
+ if args.model == 'RPC':
+ model = RPC(args).to(device)
+ else:
+ model = Pct(args).to(device)
+ model = nn.DataParallel(model)
+ model.load_state_dict(torch.load(args.model_path))
+ model = model.eval()
+
+
+ def test_corrupt(args, split, model):
+ test_loader = DataLoader(ModelNetC(split=split),
+ batch_size=args.test_batch_size, shuffle=True, drop_last=False)
+ test_true = []
+ test_pred = []
+ for data, label in test_loader:
+ data, label = data.to(device), label.to(device).squeeze()
+ data = data.permute(0, 2, 1)
+ logits = model(data)
+ preds = logits.max(dim=1)[1]
+ test_true.append(label.cpu().numpy())
+ test_pred.append(preds.detach().cpu().numpy())
+ test_true = np.concatenate(test_true)
+ test_pred = np.concatenate(test_pred)
+ test_acc = metrics.accuracy_score(test_true, test_pred)
+ avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)
+ return {'acc': test_acc, 'avg_per_class_acc': avg_per_class_acc}
+
+
+ eval_corrupt_wrapper(model, test_corrupt, {'args': args})
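
For a WOLFMix-style run, the PointWOLF and RSMix options above can be combined in one invocation; a hypothetical example (the hyper-parameter values are illustrative, not prescribed by this script):

```shell script
python main.py --exp_name=wolfmix --model=RPC --pw --beta 1.0 --rsmix_prob 0.5 --knn --num_points=1024 --use_sgd=True
```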
diff --git a/zoo/PCT/model.py b/zoo/PCT/model.py
new file mode 100644
index 0000000..8173294
--- /dev/null
+++ b/zoo/PCT/model.py
@@ -0,0 +1,201 @@
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from util import sample_and_group
+from GDANet_cls import GDM, local_operator, SGCAM
+
+class Local_op(nn.Module):
+ def __init__(self, in_channels, out_channels):
+ super(Local_op, self).__init__()
+ self.conv1 = nn.Conv1d(in_channels, out_channels, kernel_size=1, bias=False)
+ self.conv2 = nn.Conv1d(out_channels, out_channels, kernel_size=1, bias=False)
+ self.bn1 = nn.BatchNorm1d(out_channels)
+ self.bn2 = nn.BatchNorm1d(out_channels)
+
+ def forward(self, x):
+ b, n, s, d = x.size() # torch.Size([32, 512, 32, 6])
+ x = x.permute(0, 1, 3, 2)
+ x = x.reshape(-1, d, s)
+ batch_size, _, N = x.size()
+ x = F.relu(self.bn1(self.conv1(x))) # B, D, N
+ x = F.relu(self.bn2(self.conv2(x))) # B, D, N
+ x = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
+ x = x.reshape(b, n, -1).permute(0, 2, 1)
+ return x
+
+
+class RPC(nn.Module):
+ def __init__(self, args, output_channels=40):
+ super(RPC, self).__init__()
+ self.args = args
+
+ self.bn1 = nn.BatchNorm2d(64, momentum=0.1)
+ self.bn11 = nn.BatchNorm2d(128, momentum=0.1)
+ self.bn12 = nn.BatchNorm1d(256, momentum=0.1)
+
+ self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=True),
+ self.bn1)
+ self.conv11 = nn.Sequential(nn.Conv2d(64, 128, kernel_size=1, bias=True),
+ self.bn11)
+ self.SGCAM_1s = SGCAM(128)
+ self.SGCAM_1g = SGCAM(128)
+
+ self.pt_last = Point_Transformer_Last(args)
+
+ self.conv_fuse = nn.Sequential(nn.Conv1d(1280, 1024, kernel_size=1, bias=False),
+ nn.BatchNorm1d(1024),
+ nn.LeakyReLU(negative_slope=0.2))
+
+ self.linear1 = nn.Linear(1024, 512, bias=False)
+ self.bn6 = nn.BatchNorm1d(512)
+ self.dp1 = nn.Dropout(p=args.dropout)
+ self.linear2 = nn.Linear(512, 256)
+ self.bn7 = nn.BatchNorm1d(256)
+ self.dp2 = nn.Dropout(p=args.dropout)
+ self.linear3 = nn.Linear(256, output_channels)
+
+ def forward(self, x):
+ batch_size, _, _ = x.size()
+
+ x1 = local_operator(x, k=30)
+ x1 = F.relu(self.conv1(x1))
+ x1 = F.relu(self.conv11(x1))
+ x1 = x1.max(dim=-1, keepdim=False)[0]
+
+ # Geometry-Disentangle Module:
+ x1s, x1g = GDM(x1, M=256)
+
+ # Sharp-Gentle Complementary Attention Module:
+ y1s = self.SGCAM_1s(x1, x1s.transpose(2, 1))
+ y1g = self.SGCAM_1g(x1, x1g.transpose(2, 1))
+ feature_1 = torch.cat([y1s, y1g], 1)
+
+ x = self.pt_last(feature_1)
+ x = torch.cat([x, feature_1], dim=1)
+ x = self.conv_fuse(x)
+ x = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
+ x = F.leaky_relu(self.bn6(self.linear1(x)), negative_slope=0.2)
+ x = self.dp1(x)
+ x = F.leaky_relu(self.bn7(self.linear2(x)), negative_slope=0.2)
+ x = self.dp2(x)
+ x = self.linear3(x)
+
+ return x
+
+class Pct(nn.Module):
+ def __init__(self, args, output_channels=40):
+ super(Pct, self).__init__()
+ self.args = args
+ self.conv1 = nn.Conv1d(3, 64, kernel_size=1, bias=False)
+ self.conv2 = nn.Conv1d(64, 64, kernel_size=1, bias=False)
+ self.bn1 = nn.BatchNorm1d(64)
+ self.bn2 = nn.BatchNorm1d(64)
+ self.gather_local_0 = Local_op(in_channels=128, out_channels=128)
+ self.gather_local_1 = Local_op(in_channels=256, out_channels=256)
+
+ self.pt_last = Point_Transformer_Last(args)
+
+ self.conv_fuse = nn.Sequential(nn.Conv1d(1280, 1024, kernel_size=1, bias=False),
+ nn.BatchNorm1d(1024),
+ nn.LeakyReLU(negative_slope=0.2))
+
+
+ self.linear1 = nn.Linear(1024, 512, bias=False)
+ self.bn6 = nn.BatchNorm1d(512)
+ self.dp1 = nn.Dropout(p=args.dropout)
+ self.linear2 = nn.Linear(512, 256)
+ self.bn7 = nn.BatchNorm1d(256)
+ self.dp2 = nn.Dropout(p=args.dropout)
+ self.linear3 = nn.Linear(256, output_channels)
+
+ def forward(self, x):
+ xyz = x.permute(0, 2, 1)
+ batch_size, _, _ = x.size()
+ # B, D, N
+ x = F.relu(self.bn1(self.conv1(x)))
+ # B, D, N
+ x = F.relu(self.bn2(self.conv2(x)))
+ x = x.permute(0, 2, 1)
+ new_xyz, new_feature = sample_and_group(npoint=512, radius=0.15, nsample=32, xyz=xyz, points=x)
+ feature_0 = self.gather_local_0(new_feature)
+ feature = feature_0.permute(0, 2, 1)
+ new_xyz, new_feature = sample_and_group(npoint=256, radius=0.2, nsample=32, xyz=new_xyz, points=feature)
+ feature_1 = self.gather_local_1(new_feature)
+
+ x = self.pt_last(feature_1)
+ x = torch.cat([x, feature_1], dim=1)
+ x = self.conv_fuse(x)
+ x = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
+ x = F.leaky_relu(self.bn6(self.linear1(x)), negative_slope=0.2)
+ x = self.dp1(x)
+ x = F.leaky_relu(self.bn7(self.linear2(x)), negative_slope=0.2)
+ x = self.dp2(x)
+ x = self.linear3(x)
+
+ return x
+
+class Point_Transformer_Last(nn.Module):
+ def __init__(self, args, channels=256):
+ super(Point_Transformer_Last, self).__init__()
+ self.args = args
+ self.conv1 = nn.Conv1d(channels, channels, kernel_size=1, bias=False)
+ self.conv2 = nn.Conv1d(channels, channels, kernel_size=1, bias=False)
+
+ self.bn1 = nn.BatchNorm1d(channels)
+ self.bn2 = nn.BatchNorm1d(channels)
+
+ self.sa1 = SA_Layer(channels)
+ self.sa2 = SA_Layer(channels)
+ self.sa3 = SA_Layer(channels)
+ self.sa4 = SA_Layer(channels)
+
+    def forward(self, x):
+        # x: (batch, channels, num_points)
+        batch_size, _, N = x.size()
+
+ # B, D, N
+ x = F.relu(self.bn1(self.conv1(x)))
+ x = F.relu(self.bn2(self.conv2(x)))
+ x1 = self.sa1(x)
+ x2 = self.sa2(x1)
+ x3 = self.sa3(x2)
+ x4 = self.sa4(x3)
+ x = torch.cat((x1, x2, x3, x4), dim=1)
+
+ return x
+
+class SA_Layer(nn.Module):
+ def __init__(self, channels):
+ super(SA_Layer, self).__init__()
+ self.q_conv = nn.Conv1d(channels, channels // 4, 1, bias=False)
+ self.k_conv = nn.Conv1d(channels, channels // 4, 1, bias=False)
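+        # the query and key projections deliberately share parameters, following the reference PCT implementation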
+ self.q_conv.weight = self.k_conv.weight
+ self.q_conv.bias = self.k_conv.bias
+
+ self.v_conv = nn.Conv1d(channels, channels, 1)
+ self.trans_conv = nn.Conv1d(channels, channels, 1)
+ self.after_norm = nn.BatchNorm1d(channels)
+ self.act = nn.ReLU()
+ self.softmax = nn.Softmax(dim=-1)
+
+ def forward(self, x):
+ # b, n, c
+ x_q = self.q_conv(x).permute(0, 2, 1)
+ # b, c, n
+ x_k = self.k_conv(x)
+ x_v = self.v_conv(x)
+ # b, n, n
+ energy = torch.bmm(x_q, x_k)
+
+ attention = self.softmax(energy)
+ attention = attention / (1e-9 + attention.sum(dim=1, keepdim=True))
+ # b, c, n
+ x_r = torch.bmm(x_v, attention)
+ x_r = self.act(self.after_norm(self.trans_conv(x - x_r)))
+ x = x + x_r
+ return x
+
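
A shape-level smoke test for the offset-attention layer above (a sketch; importing `model` assumes the repo's `util` and `util_lib` modules are available):

```python
import torch
from model import SA_Layer

layer = SA_Layer(channels=256).eval()
feats = torch.rand(2, 256, 128)  # (batch, channels, num_points)
with torch.no_grad():
    out = layer(feats)
assert out.shape == feats.shape  # the residual offset-attention keeps the feature shape
```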
diff --git a/zoo/PCT/pointnet2_ops_lib/MANIFEST.in b/zoo/PCT/pointnet2_ops_lib/MANIFEST.in
new file mode 100644
index 0000000..a4eb5de
--- /dev/null
+++ b/zoo/PCT/pointnet2_ops_lib/MANIFEST.in
@@ -0,0 +1 @@
+graft pointnet2_ops/_ext-src
diff --git a/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/__init__.py b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/__init__.py
new file mode 100644
index 0000000..5fd361f
--- /dev/null
+++ b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/__init__.py
@@ -0,0 +1,3 @@
+import pointnet2_ops.pointnet2_modules
+import pointnet2_ops.pointnet2_utils
+from pointnet2_ops._version import __version__
diff --git a/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/ball_query.h b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/ball_query.h
new file mode 100644
index 0000000..1bbc638
--- /dev/null
+++ b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/ball_query.h
@@ -0,0 +1,5 @@
+#pragma once
+#include <torch/extension.h>
+
+at::Tensor ball_query(at::Tensor new_xyz, at::Tensor xyz, const float radius,
+ const int nsample);
diff --git a/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/cuda_utils.h b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/cuda_utils.h
new file mode 100644
index 0000000..0fd5b6e
--- /dev/null
+++ b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/cuda_utils.h
@@ -0,0 +1,41 @@
+#ifndef _CUDA_UTILS_H
+#define _CUDA_UTILS_H
+
+#include <ATen/ATen.h>
+#include <ATen/cuda/CUDAContext.h>
+#include <cmath>
+
+#include <cuda.h>
+#include <cuda_runtime.h>
+
+#include <vector>
+
+#define TOTAL_THREADS 512
+
+inline int opt_n_threads(int work_size) {
+  const int pow_2 = std::log(static_cast<double>(work_size)) / std::log(2.0);
+
+ return max(min(1 << pow_2, TOTAL_THREADS), 1);
+}
+
+inline dim3 opt_block_config(int x, int y) {
+ const int x_threads = opt_n_threads(x);
+ const int y_threads =
+ max(min(opt_n_threads(y), TOTAL_THREADS / x_threads), 1);
+ dim3 block_config(x_threads, y_threads, 1);
+
+ return block_config;
+}
+
+#define CUDA_CHECK_ERRORS() \
+ do { \
+ cudaError_t err = cudaGetLastError(); \
+ if (cudaSuccess != err) { \
+ fprintf(stderr, "CUDA kernel failed : %s\n%s at L:%d in %s\n", \
+ cudaGetErrorString(err), __PRETTY_FUNCTION__, __LINE__, \
+ __FILE__); \
+ exit(-1); \
+ } \
+ } while (0)
+
+#endif
diff --git a/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/group_points.h b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/group_points.h
new file mode 100644
index 0000000..ad20cda
--- /dev/null
+++ b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/group_points.h
@@ -0,0 +1,5 @@
+#pragma once
+#include <torch/extension.h>
+
+at::Tensor group_points(at::Tensor points, at::Tensor idx);
+at::Tensor group_points_grad(at::Tensor grad_out, at::Tensor idx, const int n);
diff --git a/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/interpolate.h b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/interpolate.h
new file mode 100644
index 0000000..26b3464
--- /dev/null
+++ b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/interpolate.h
@@ -0,0 +1,10 @@
+#pragma once
+
+#include <torch/extension.h>
+#include <vector>
+
+std::vector<at::Tensor> three_nn(at::Tensor unknowns, at::Tensor knows);
+at::Tensor three_interpolate(at::Tensor points, at::Tensor idx,
+ at::Tensor weight);
+at::Tensor three_interpolate_grad(at::Tensor grad_out, at::Tensor idx,
+ at::Tensor weight, const int m);
diff --git a/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/sampling.h b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/sampling.h
new file mode 100644
index 0000000..d795271
--- /dev/null
+++ b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/sampling.h
@@ -0,0 +1,6 @@
+#pragma once
+#include <torch/extension.h>
+
+at::Tensor gather_points(at::Tensor points, at::Tensor idx);
+at::Tensor gather_points_grad(at::Tensor grad_out, at::Tensor idx, const int n);
+at::Tensor furthest_point_sampling(at::Tensor points, const int nsamples);
diff --git a/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/utils.h b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/utils.h
new file mode 100644
index 0000000..5f080ed
--- /dev/null
+++ b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/utils.h
@@ -0,0 +1,25 @@
+#pragma once
+#include <ATen/cuda/CUDAContext.h>
+#include <torch/extension.h>
+
+#define CHECK_CUDA(x) \
+ do { \
+ AT_ASSERT(x.is_cuda(), #x " must be a CUDA tensor"); \
+ } while (0)
+
+#define CHECK_CONTIGUOUS(x) \
+ do { \
+ AT_ASSERT(x.is_contiguous(), #x " must be a contiguous tensor"); \
+ } while (0)
+
+#define CHECK_IS_INT(x) \
+ do { \
+ AT_ASSERT(x.scalar_type() == at::ScalarType::Int, \
+ #x " must be an int tensor"); \
+ } while (0)
+
+#define CHECK_IS_FLOAT(x) \
+ do { \
+ AT_ASSERT(x.scalar_type() == at::ScalarType::Float, \
+ #x " must be a float tensor"); \
+ } while (0)
diff --git a/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query.cpp b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query.cpp
new file mode 100644
index 0000000..b1797c1
--- /dev/null
+++ b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query.cpp
@@ -0,0 +1,32 @@
+#include "ball_query.h"
+#include "utils.h"
+
+void query_ball_point_kernel_wrapper(int b, int n, int m, float radius,
+ int nsample, const float *new_xyz,
+ const float *xyz, int *idx);
+
+at::Tensor ball_query(at::Tensor new_xyz, at::Tensor xyz, const float radius,
+ const int nsample) {
+ CHECK_CONTIGUOUS(new_xyz);
+ CHECK_CONTIGUOUS(xyz);
+ CHECK_IS_FLOAT(new_xyz);
+ CHECK_IS_FLOAT(xyz);
+
+ if (new_xyz.is_cuda()) {
+ CHECK_CUDA(xyz);
+ }
+
+ at::Tensor idx =
+ torch::zeros({new_xyz.size(0), new_xyz.size(1), nsample},
+ at::device(new_xyz.device()).dtype(at::ScalarType::Int));
+
+ if (new_xyz.is_cuda()) {
+ query_ball_point_kernel_wrapper(xyz.size(0), xyz.size(1), new_xyz.size(1),
+                                    radius, nsample, new_xyz.data_ptr<float>(),
+                                    xyz.data_ptr<float>(), idx.data_ptr<int>());
+ } else {
+ AT_ASSERT(false, "CPU not supported");
+ }
+
+ return idx;
+}
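
Once the extension is built (`pip install pointnet2_ops_lib/.`), the op is callable from Python; a sketch assuming the wrappers bundled in `pointnet2_ops.pointnet2_utils`:

```python
import torch
from pointnet2_ops import pointnet2_utils

xyz = torch.rand(2, 1024, 3).cuda()      # source points, (B, N, 3), float32, contiguous
new_xyz = torch.rand(2, 512, 3).cuda()   # query centers, (B, npoint, 3)
idx = pointnet2_utils.ball_query(0.2, 32, xyz, new_xyz)  # (B, npoint, nsample) int indices
```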
diff --git a/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query_gpu.cu b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query_gpu.cu
new file mode 100644
index 0000000..559aef9
--- /dev/null
+++ b/zoo/PCT/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query_gpu.cu
@@ -0,0 +1,54 @@
+#include <math.h>