
Commit

UPD: Updated notebook
adamshephard committed Dec 3, 2024
2 parents 4f7f577 + ad1563b commit 5739ea1
Showing 23 changed files with 5,415 additions and 5,351 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/python-package.yml
@@ -30,7 +30,7 @@ jobs:
sudo apt update
sudo apt-get install -y libopenslide-dev openslide-tools libopenjp2-7 libopenjp2-tools
python -m pip install --upgrade pip
python -m pip install ruff==0.7.4 pytest pytest-cov pytest-runner
python -m pip install ruff==0.8.1 pytest pytest-cov pytest-runner
pip install -r requirements/requirements.txt
- name: Cache tiatoolbox static assets
uses: actions/cache@v3
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -60,7 +60,7 @@ repos:
- id: rst-inline-touching-normal # Detect mistake of inline code touching normal text in rst.
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.7.4
rev: v0.8.1
hooks:
- id: ruff
args: [--fix, --exit-non-zero-on-fix]
4 changes: 2 additions & 2 deletions benchmarks/annotation_store.ipynb
@@ -195,7 +195,7 @@
"from typing import TYPE_CHECKING, Any\n",
"\n",
"import numpy as np\n",
"from IPython.display import display\n",
"from IPython.display import display_svg\n",
"from matplotlib import patheffects\n",
"from matplotlib import pyplot as plt\n",
"from shapely import affinity\n",
@@ -444,7 +444,7 @@
],
"source": [
"for n in range(4):\n",
" display(cell_polygon(xy=(0, 0), n_points=20, repeat_first=False, seed=n))"
" display_svg(cell_polygon(xy=(0, 0), n_points=20, repeat_first=False, seed=n))"
]
},
{
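The `display` → `display_svg` switch above makes the notebook request the SVG representation of the generated polygons explicitly. A minimal sketch of the difference, assuming `cell_polygon` returns a Shapely geometry (Shapely objects expose an SVG representation that IPython can pick up):

from IPython.display import display, display_svg
from shapely.geometry import Polygon

# Stand-in for the notebook's cell_polygon() output (hypothetical shape).
poly = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])

display(poly)      # generic rich display; lets IPython choose the representation
display_svg(poly)  # explicitly requests the image/svg+xml representation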
141 changes: 67 additions & 74 deletions examples/05-patch-prediction.ipynb

Large diffs are not rendered by default.

50 changes: 33 additions & 17 deletions examples/06-semantic-segmentation.ipynb

Large diffs are not rendered by default.

132 changes: 88 additions & 44 deletions examples/07-advanced-modeling.ipynb

Large diffs are not rendered by default.

26 changes: 13 additions & 13 deletions examples/08-nucleus-instance-segmentation.ipynb
@@ -158,7 +158,7 @@
"source": [
"### GPU or CPU runtime\n",
"\n",
"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify if you are using GPU or CPU hardware acceleration. In Colab, you need to make sure that the runtime type is set to GPU in the *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using GPU, consider changing the `ON_GPU` flag to `Flase` value, otherwise, some errors will be raised when running the following cells.\n",
"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) e.g., \"cuda\" or \"cpu\" whether you are using GPU or CPU. In Colab, you need to make sure that the runtime type is set to GPU in the *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using GPU, consider changing the `device` flag to `cpu` value, otherwise, some errors will be raised when running the following cells.\n",
"\n"
]
},
@@ -173,8 +173,7 @@
},
"outputs": [],
"source": [
"# Should be changed to False if no cuda-enabled GPU is available.\n",
"ON_GPU = True # Default is True."
"device = \"cuda\" # Choose appropriate device"
]
},
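The cell above hard-codes "cuda". A hedged alternative, not part of this commit, is to pick the device at runtime with standard PyTorch so the same cell also works on CPU-only machines:

import torch

# Fall back to CPU automatically when no CUDA-capable GPU is visible.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")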
{
@@ -356,7 +355,7 @@
" [img_file_name],\n",
" save_dir=\"sample_tile_results/\",\n",
" mode=\"tile\",\n",
" on_gpu=ON_GPU,\n",
" device=device,\n",
" crash_on_exception=True,\n",
")"
]
@@ -386,7 +385,7 @@
"\n",
"- `mode`: the mode of inference which can be set to either `'tile'` or `'wsi'` for plain histology images or structured whole slides images, respectively.\n",
"\n",
"- `on_gpu`: can be `True` or `False` to dictate running the computations on GPU or CPU.\n",
"- `device`: specify appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) e.g., \"cuda\", \"cuda:0\", \"mps\", \"cpu\" etc.\n",
"\n",
"- `crash_on_exception`: If set to `True`, the running loop will crash if there is an error during processing a WSI. Otherwise, the loop will move on to the next image (wsi) for processing. We suggest that you first make sure that the prediction is working as expected by testing it on a couple of inputs and then set this flag to `False` to process large cohorts of inputs.\n",
"\n",
@@ -5615,13 +5614,13 @@
")\n",
"\n",
"# WSI prediction\n",
"# if ON_GPU=False, this part will take more than a couple of hours to process.\n",
"# if device=\"cpu\", this part will take more than a couple of hours to process.\n",
"wsi_output = inst_segmentor.predict(\n",
" [wsi_file_name],\n",
" masks=None,\n",
" save_dir=\"sample_wsi_results/\",\n",
" mode=\"wsi\",\n",
" on_gpu=ON_GPU,\n",
" device=device,\n",
" crash_on_exception=True,\n",
")"
]
@@ -5638,7 +5637,7 @@
"1. Setting `mode='wsi'` in the arguments to `predict` tells the program that the input are in WSI format.\n",
"1. `masks=None`: the `masks` argument to the `predict` function is handled in the same way as the imgs argument. It is a list of paths to the desired image masks. Patches from `imgs` are only processed if they are within a masked area of their corresponding `masks`. If not provided (`masks=None`), then a tissue mask is generated for whole-slide images or, for image tiles, the entire image is processed.\n",
"\n",
"The above code cell might take a while to process, especially if `ON_GPU=False`. The processing time mostly depends on the size of the input WSI.\n",
"The above code cell might take a while to process, especially if `device=\"cpu\"`. The processing time mostly depends on the size of the input WSI.\n",
"The output, `wsi_output`, of `predict` contains a list of paths to the input WSIs and the corresponding output results saved on disk. The results for nucleus instance segmentation in `'wsi'` mode are stored in a Python dictionary, in the same way as was done for `'tile'` mode.\n",
"We use `joblib` to load the outputs for this sample WSI and then inspect the results dictionary.\n",
"\n"
@@ -5788,11 +5787,12 @@
")\n",
"\n",
"color_dict = {\n",
" 0: (\"neoplastic epithelial\", (255, 0, 0)),\n",
" 1: (\"Inflammatory\", (255, 255, 0)),\n",
" 2: (\"Connective\", (0, 255, 0)),\n",
" 3: (\"Dead\", (0, 0, 0)),\n",
" 4: (\"non-neoplastic epithelial\", (0, 0, 255)),\n",
" 0: (\"background\", (255, 165, 0)),\n",
" 1: (\"neoplastic epithelial\", (255, 0, 0)),\n",
" 2: (\"Inflammatory\", (255, 255, 0)),\n",
" 3: (\"Connective\", (0, 255, 0)),\n",
" 4: (\"Dead\", (0, 0, 0)),\n",
" 5: (\"non-neoplastic epithelial\", (0, 0, 255)),\n",
"}\n",
"\n",
"# Create the overlay image\n",
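Before the overlay above can be drawn with the colours in `color_dict`, the saved instance predictions have to be loaded back from disk, as the notebook text describes. A hedged sketch of that loading step; the file name under the save directory and the per-instance keys are assumptions based on typical HoVer-Net output, not taken from this diff:

import joblib

# Load the dictionary of nucleus instances produced by the wsi-mode run.
wsi_pred = joblib.load("sample_wsi_results/0.dat")  # assumed output file name
print(f"Number of detected nuclei: {len(wsi_pred)}")

# Each entry is expected to map an instance ID to its contour, centroid,
# bounding box, predicted type and type probability.
first_id = next(iter(wsi_pred))
print(wsi_pred[first_id].keys())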
29 changes: 14 additions & 15 deletions examples/09-multi-task-segmentation.ipynb
@@ -105,7 +105,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {
"id": "UEIfjUTaJLPj",
"outputId": "e4f383f2-306d-4afd-cd82-fec14a184941",
@@ -169,13 +169,13 @@
"source": [
"### GPU or CPU runtime\n",
"\n",
"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify whether you are using GPU or CPU hardware acceleration. In Colab, make sure that the runtime type is set to GPU, using the menu *Runtime→Change runtime type→Hardware accelerator*. If you are *not* using GPU, change the `ON_GPU` flag to `False`.\n",
"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) e.g., \"cuda\" or \"cpu\" whether you are using GPU or CPU. In Colab, you need to make sure that the runtime type is set to GPU in the *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using GPU, consider changing the `device` flag to `cpu` value, otherwise, some errors will be raised when running the following cells.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"metadata": {
"id": "haTA_oQIY1Vy",
"tags": [
@@ -184,8 +184,7 @@
},
"outputs": [],
"source": [
"# Should be changed to False if no cuda-enabled GPU is available.\n",
"ON_GPU = True # Default is True."
"device = \"cuda\" # Choose appropriate device"
]
},
{
@@ -205,7 +204,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -260,7 +259,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -335,7 +334,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -390,7 +389,7 @@
" [img_file_name],\n",
" save_dir=global_save_dir / \"sample_tile_results\",\n",
" mode=\"tile\",\n",
" on_gpu=ON_GPU,\n",
" device=device,\n",
" crash_on_exception=True,\n",
")"
]
@@ -418,7 +417,7 @@
"\n",
"- `mode`: the mode of inference which can be set to either `'tile'` or `'wsi'`, for plain histology images or structured whole slides images, respectively.\n",
"\n",
"- `on_gpu`: can be either `True` or `False` to dictate running the computations on GPU or CPU.\n",
"- `device`: specify appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) e.g., \"cuda\", \"cuda:0\", \"mps\", \"cpu\" etc.\n",
"\n",
"- `crash_on_exception`: If set to `True`, the running loop will crash if there is an error during processing a WSI. Otherwise, the loop will move on to the next image (wsi) for processing. We suggest that you first make sure that prediction is working as expected by testing it on a couple of inputs and then set this flag to `False` to process large cohorts of inputs.\n",
"\n",
@@ -430,7 +429,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
@@ -546,7 +545,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -595,7 +594,7 @@
" masks=None,\n",
" save_dir=global_save_dir / \"sample_wsi_results/\",\n",
" mode=\"wsi\",\n",
" on_gpu=ON_GPU,\n",
" device=device,\n",
" crash_on_exception=True,\n",
")"
]
@@ -612,13 +611,13 @@
"1. Setting `mode='wsi'` in the `predict` function indicates that we are predicting region segmentations for inputs in the form of WSIs.\n",
"1. `masks=None` in the `predict` function: the `masks` argument is a list of paths to the desired image masks. Patches from `imgs` are only processed if they are within a masked area of their corresponding `masks`. If not provided (`masks=None`), then either a tissue mask is automatically generated for whole-slide images or the entire image is processed as a collection of image tiles.\n",
"\n",
"The above cell might take a while to process, especially if you have set `ON_GPU=False`. The processing time depends on the size of the input WSI and the selected resolution. Here, we have not specified any values and we use the assumed input resolution (20x) of HoVer-Net+.\n",
"The above cell might take a while to process, especially if you have set `device=\"cpu\"`. The processing time depends on the size of the input WSI and the selected resolution. Here, we have not specified any values and we use the assumed input resolution (20x) of HoVer-Net+.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
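For the multi-task notebook, a hedged sketch of the corresponding wsi-mode call with the new `device` argument; the pretrained-model name and paths are illustrative assumptions:

from tiatoolbox.models import MultiTaskSegmentor

# HoVer-Net+ style multi-task model (weights name is an assumption).
multi_segmentor = MultiTaskSegmentor(
    pretrained_model="hovernetplus-oed",
    num_loader_workers=2,
    batch_size=4,
)

wsi_output = multi_segmentor.predict(
    ["sample_wsi.svs"],              # illustrative WSI path
    masks=None,                      # a tissue mask is generated automatically for WSIs
    save_dir="sample_wsi_results/",
    mode="wsi",
    device="cuda",                   # or "cpu" when no GPU is available
    crash_on_exception=True,
)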