diff --git a/README.md b/README.md
index b1055db3..e95c24d0 100644
--- a/README.md
+++ b/README.md
@@ -7,6 +7,8 @@ https://github.com/CSAILVision/sceneparsing
 
 If you simply want to play with our demo, please try this link: http://scenesegmentation.csail.mit.edu You can upload your own photo and parse it!
 
+[You can also use this colab notebook playground here](https://colab.research.google.com/github/CSAILVision/semantic-segmentation-pytorch/blob/master/notebooks/DemoSegmenter.ipynb) to tinker with the code for segmenting an image.
+
 All pretrained models can be found at:
 http://sceneparsing.csail.mit.edu/model/pytorch
diff --git a/notebooks/DemoSegmenter.ipynb b/notebooks/DemoSegmenter.ipynb
index e6029d61..62b7aa49 100644
--- a/notebooks/DemoSegmenter.ipynb
+++ b/notebooks/DemoSegmenter.ipynb
@@ -9,9 +9,9 @@
    "This is a notebook for running the benchmark semantic segmentation network from the the [ADE20K MIT Scene Parsing Benchchmark](http://sceneparsing.csail.mit.edu/).\n",
    "\n",
    "The code for this notebook is available here\n",
-   "https://github.com/davidbau/semantic-segmentation-pytorch/tree/tutorial/notebooks\n",
+   "https://github.com/CSAILVision/semantic-segmentation-pytorch/tree/master/notebooks\n",
    "\n",
-   "It can be run on Colab at this URL https://colab.research.google.com/github/davidbau/semantic-segmentation-pytorch/blob/tutorial/notebooks/DemoSegmenter.ipynb"
+   "It can be run on Colab at this URL https://colab.research.google.com/github/CSAILVision/semantic-segmentation-pytorch/blob/master/notebooks/DemoSegmenter.ipynb"
   ]
  },
 {
@@ -34,8 +34,8 @@
    "!(stat -t /usr/local/lib/*/dist-packages/google/colab > /dev/null 2>&1) && exit \n",
    "pip install yacs 2>&1 >> install.log\n",
    "git init 2>&1 >> install.log\n",
-   "git remote add origin https://github.com/davidbau/semantic-segmentation-pytorch.git 2>> install.log\n",
-   "git pull origin tutorial 2>&1 >> install.log\n",
+   "git remote add origin https://github.com/CSAILVision/semantic-segmentation-pytorch.git 2>> install.log\n",
+   "git pull origin master 2>&1 >> install.log\n",
    "DOWNLOAD_ONLY=1 ./demo_test.sh 2>> install.log"
   ]
  },
@@ -157,7 +157,7 @@
    "\n",
    "The segmentation model is coded as a function that takes a dictionary as input, because it wants to know both the input batch image data as well as the desired output segmentation resolution. We ask for full resolution output.\n",
    "\n",
-   "Then we use the previously-defined visualize_result function to render the semgnatioon map."
+   "Then we use the previously-defined visualize_result function to render the segmentation map."
   ]
  },
 {
diff --git a/notebooks/README.md b/notebooks/README.md
new file mode 100644
index 00000000..805583b0
--- /dev/null
+++ b/notebooks/README.md
@@ -0,0 +1,12 @@
+Semantic Segmentation Demo
+==========================
+
+This directory contains a notebook for demonstrating the benchmark
+semantic segmentation network from the ADE20K MIT Scene Parsing
+Benchmark.
+
+It can be run on Colab at
+[this URL](https://colab.research.google.com/github/CSAILVision/semantic-segmentation-pytorch/blob/master/notebooks/DemoSegmenter.ipynb)
+or on a local Jupyter notebook.
+
+If running locally, run the script `setup_notebooks.sh` to start.