action_log.txt
1. Created a simple notebook to debug the data reading in MONAI
/mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/notebooks/data_reader_debug.ipynb
This extracts the filenames from the splitting lists used for the Retraining_with_expanded dataset and then tests the application of Resizing + Random Rotation and Random Flipping as data augmentation.
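For reference, a minimal sketch of that kind of transform chain, assuming the dictionary-based MONAI transforms with "image"/"label" keys (the exact transforms, keys and parameters used in the notebook may differ):

from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Resized, RandRotated, RandFlipd, ToTensord,
)

train_transforms = Compose([
    LoadImaged(keys=["image", "label"]),                           # read the NIfTI files
    EnsureChannelFirstd(keys=["image", "label"]),                  # ensure channel-first layout
    Resized(keys=["image", "label"], spatial_size=(96, 96, 96)),   # resizing (size is illustrative)
    RandRotated(keys=["image", "label"], range_x=0.26, prob=0.5),  # random rotation (~15 degrees)
    RandFlipd(keys=["image", "label"], spatial_axis=0, prob=0.5),  # random flipping
    ToTensord(keys=["image", "label"]),
])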
2. Prepared a basic unet_training.py script
/mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/python_code/unet_training.py
This is based on the 3D UNet segmentation examples available in MONAI. Changes had to be made to extract 2D patches from 3D images (e.g. creation of a Squeeze transform, modification of the sliding window inference). Originally the training loop was written in plain PyTorch, as it did not seem possible to perform whole-volume validation with Ignite. That turned out to be wrong: it is possible, so eventually it would be good to refactor the code with Ignite --> see segmentation_3D_ignite/unet_evaluation_dict.py
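As an illustration of the Squeeze idea (not the actual transform in unet_training.py; recent MONAI also ships SqueezeDimd for the same purpose), a dictionary transform that drops a singleton dimension so 2D slices extracted from 3D volumes can be fed to a 2D network could look roughly like this:

import numpy as np
from monai.transforms import MapTransform

class Squeezed(MapTransform):
    """Remove a singleton dimension from the arrays stored under the given keys."""

    def __init__(self, keys, dim=-1):
        super().__init__(keys)
        self.dim = dim

    def __call__(self, data):
        d = dict(data)
        for key in self.keys:
            # e.g. (C, H, W, 1) -> (C, H, W)
            d[key] = np.squeeze(d[key], axis=self.dim)
        return d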
3. Defined a training config file with basic inputs
/mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/config/basic_unet_config.yml
4. Ran training (using monai environment)
NOTE: I uninstalled monai (pip uninstall monai) because I want to link the source code myself, so I can keep it up to date instead of waiting for new pip releases.
This is achieved by adding the following line before the monai imports:
sys.path.append("/mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/MONAI")
To run the training:
python /mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/python_code/unet_training.py --config /mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/config/basic_unet_config.yml
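A minimal sketch of how the script could read the --config argument (the option names shown are hypothetical; the actual keys live in basic_unet_config.yml):

import argparse
import yaml

parser = argparse.ArgumentParser(description="Basic UNet training")
parser.add_argument("--config", required=True, help="path to the YAML config file")
args = parser.parse_args()

with open(args.config) as f:
    config = yaml.safe_load(f)

# hypothetical keys, e.g.:
# learning_rate = config["learning_rate"]
# max_epochs = config["max_epochs"]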
5. Prepared inference scripts:
5a. Generated the lists for inference using:
/mnt/data/mranzini/Desktop/GIFT-Surg/Retraining_with_expanded_dataset/bash/01b_list_inference_files.sh
5b. Prepared the inference code
python /mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/python_code/unet_testing.py
It largely follows the validation code, but I had to apply the resize within the inference loop instead of as an initial transform; otherwise it is not possible to map the images back to their original size.
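A minimal, self-contained sketch of that idea (the network, sizes and interpolation modes are illustrative, not taken from unet_testing.py):

import torch
import torch.nn.functional as F
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNet

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128), strides=(2, 2, 2)).to(device).eval()

image = torch.rand(1, 1, 120, 120, 77, device=device)   # stand-in for one test volume (B, C, H, W, D)
original_size = image.shape[2:]

# resize inside the loop (rather than as an initial transform) so that the
# original size is still available when mapping the prediction back
resized = F.interpolate(image, size=(96, 96, 96), mode="trilinear", align_corners=False)
with torch.no_grad():
    pred = sliding_window_inference(resized, roi_size=(96, 96, 96),
                                    sw_batch_size=4, predictor=model)
pred = F.interpolate(pred, size=original_size, mode="nearest")   # back to the original grid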
5c. Added the same post-processing as Guotai's network (i.e. binary closing and extraction of the largest connected component)
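A minimal sketch of that post-processing with scipy (the function name and dtype handling are my own, not Guotai's code):

import numpy as np
from scipy import ndimage

def postprocess(binary_mask):
    """Binary closing followed by keeping only the largest connected component."""
    closed = ndimage.binary_closing(binary_mask.astype(bool))
    labels, num_components = ndimage.label(closed)
    if num_components == 0:
        return closed.astype(np.uint8)
    sizes = ndimage.sum(closed, labels, index=range(1, num_components + 1))
    largest_label = int(np.argmax(sizes)) + 1
    return (labels == largest_label).astype(np.uint8)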
5d. Prepared the config file for inference
/mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/config/basic_unet_config_inference.yml
6. Ran inference
python /mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/python_code/unet_testing.py --config /mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/config/basic_unet_inference_config.yml