Commit

first commit
heilaw committed Apr 17, 2019
0 parents commit 0ecad35
Showing 58 changed files with 4,821 additions and 0 deletions.
18 changes: 18 additions & 0 deletions .gitignore
@@ -0,0 +1,18 @@
loss/
data/
cache/
tf_cache/
debug/
results/

misc/outputs

evaluation/evaluate_object
evaluation/analyze_object

nnet/__pycache__/

*.swp

*.pyc
*.o*
29 changes: 29 additions & 0 deletions LICENSE
@@ -0,0 +1,29 @@
BSD 3-Clause License

Copyright (c) 2019, Princeton University
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
151 changes: 151 additions & 0 deletions README.md
@@ -0,0 +1,151 @@
# CornerNet-Lite: Training, Evaluation and Testing Code
Code for reproducing results in the following paper:

**CornerNet-Lite: Efficient Keypoint Based Object Detection**
Hei Law, Yun Teng, Olga Russakovsky, Jia Deng
*arXiv*

## Getting Started
### Software Requirement
- Python 3.7
- PyTorch 1.0.0
- CUDA 10
- GCC 4.9.2 or above

### Installing Dependencies
Please first install [Anaconda](https://anaconda.org) and create an Anaconda environment using the provided package list `conda_packagelist.txt`.
```
conda create --name CornerNet_Lite --file conda_packagelist.txt --channel pytorch
```

After you create the environment, please activate it.
```
source activate CornerNet_Lite
```

### Compiling Corner Pooling Layers
Compile the C++ implementation of the corner pooling layers. (GCC 4.9.2 or above is required.)
```
cd <CornerNet-Lite dir>/core/models/py_utils/_cpools/
python setup.py install --user
```
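
To confirm the build succeeded, you can try importing the compiled extensions from Python. This is a minimal sketch under the assumption that the extensions install as modules named `top_pool`, `bottom_pool`, `left_pool`, and `right_pool`; check `setup.py` in the `_cpools` directory for the actual module names in your copy.
```python
# Minimal post-build check.  Assumption: the corner pooling extensions are
# installed under these module names; see _cpools/setup.py for the real names.
import top_pool, bottom_pool, left_pool, right_pool  # noqa: F401

print("corner pooling extensions imported successfully")
```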

### Compiling NMS
Compile the NMS code, which is originally from [Faster R-CNN](https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/nms/cpu_nms.pyx) and [Soft-NMS](https://github.com/bharatsingh430/soft-nms/blob/master/lib/nms/cpu_nms.pyx).
```
cd <CornerNet-Lite dir>/core/external
make
```

### Downloading Models
In this repo, we provide models for the following detectors:
- [CornerNet-Saccade](https://drive.google.com/file/d/1MQDyPRI0HgDHxHToudHqQ-2m8TVBciaa/view?usp=sharing)
- [CornerNet-Squeeze](https://drive.google.com/file/d/1qM8BBYCLUBcZx_UmLT0qMXNTh-Yshp4X/view?usp=sharing)
- [CornerNet](https://drive.google.com/file/d/1e8At_iZWyXQgLlMwHkB83kN-AN85Uff1/view?usp=sharing)

Put the CornerNet-Saccade model under `<CornerNet-Lite dir>/cache/nnet/CornerNet_Saccade/`, the CornerNet-Squeeze model under `<CornerNet-Lite dir>/cache/nnet/CornerNet_Squeeze/`, and the CornerNet model under `<CornerNet-Lite dir>/cache/nnet/CornerNet/`. (Note that the directory names for CornerNet-Saccade and CornerNet-Squeeze use underscores instead of dashes.)
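
If these directories do not exist yet, a short convenience sketch like the following, run from `<CornerNet-Lite dir>`, creates them (it is not a script shipped with this repo).
```python
# Create the cache directories the pretrained models are expected to live in.
import os

for name in ("CornerNet_Saccade", "CornerNet_Squeeze", "CornerNet"):
    os.makedirs(os.path.join("cache", "nnet", name), exist_ok=True)
```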

Note: The CornerNet model is the same as the one in the original [CornerNet repo](https://github.com/princeton-vl/CornerNet). We just ported it to this new repo.

After downloading the models, you should be able to use the detectors on your own images. We provide a demo script `demo.py` to test if the repo is installed correctly.
```
python demo.py
```
This script applies CornerNet-Saccade to `demo.jpg` and writes the results to `demo_out.jpg`.

The demo script uses CornerNet-Saccade by default. You can modify it to test a different detector; for example, to test CornerNet-Squeeze:
```python
#!/usr/bin/env python

import cv2
from core.detectors import CornerNet_Squeeze
from core.vis_utils import draw_bboxes

detector = CornerNet_Squeeze()
image = cv2.imread("demo.jpg")

bboxes = detector(image)
image = draw_bboxes(image, bboxes)
cv2.imwrite("demo_out.jpg", image)
```

### Using CornerNet-Lite in Your Project
It is also easy to use CornerNet-Lite in your own project. You will need to rename the repository directory from `CornerNet-Lite` to `CornerNet_Lite`; otherwise, you won't be able to import it as a Python package (dashes are not valid in module names).
```
Your project
│   README.md
│   ...
│   foo.py
└───CornerNet_Lite
    └───directory1
    └───...
```

In `foo.py`, you can easily import CornerNet-Saccade by adding:
```python
import cv2
from CornerNet_Lite import CornerNet_Saccade

def foo():
    cornernet = CornerNet_Saccade()
    # CornerNet_Saccade is ready to use

    image  = cv2.imread('/path/to/your/image')
    bboxes = cornernet(image)
```

If you want to train or evaluate the detectors on COCO, please move on to the following steps.

## Training and Evaluation

### Installing MS COCO APIs
```
mkdir -p <CornerNet-Lite dir>/data
cd <CornerNet-Lite dir>/data
git clone git@github.com:cocodataset/cocoapi.git coco
cd <CornerNet-Lite dir>/data/coco/PythonAPI
make install
```
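
A quick way to confirm the API is available in the active environment is to import it from Python (a minimal check; nothing repo-specific is assumed).
```python
# Sanity check: the COCO API should be importable after `make install`.
from pycocotools.coco import COCO

print(COCO)  # an ImportError here means the install did not succeed
```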

### Downloading MS COCO Data
- Download the training/validation split we use in our paper from [here](https://drive.google.com/file/d/1dop4188xo5lXDkGtOZUzy2SHOD_COXz4/view?usp=sharing) (originally from [Faster R-CNN](https://github.com/rbgirshick/py-faster-rcnn/tree/master/data))
- Unzip the file and place `annotations` under `<CornerNet-Lite dir>/data/coco`
- Download the images (2014 Train, 2014 Val, 2017 Test) from [here](http://cocodataset.org/#download)
- Create 3 directories, `trainval2014`, `minival2014` and `testdev2017`, under `<CornerNet-Lite dir>/data/coco/images/`
- Copy the training/validation/testing images to the corresponding directories according to the annotation files (a helper sketch follows this list)
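
The last step can be scripted. Below is a hedged helper sketch that reads a COCO-style annotation file and copies every image it references into the matching split directory; the paths and annotation file names in the usage example are assumptions, so adjust them to wherever you unzipped the downloads.
```python
# Copy the images referenced by a COCO-style annotation file into a split
# directory.  This is a convenience sketch, not a script shipped with the repo.
import json
import os
import shutil

def copy_split(annotation_file, source_dirs, dest_dir):
    os.makedirs(dest_dir, exist_ok=True)
    with open(annotation_file) as f:
        image_names = [img["file_name"] for img in json.load(f)["images"]]
    for name in image_names:
        for source_dir in source_dirs:
            path = os.path.join(source_dir, name)
            if os.path.exists(path):
                shutil.copy(path, dest_dir)
                break

# Example usage (hypothetical paths -- adjust to your layout and the actual
# annotation file names in the downloaded split):
# copy_split("data/coco/annotations/instances_minival2014.json",
#            ["downloads/train2014", "downloads/val2014"],
#            "data/coco/images/minival2014")
```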

To train and evaluate a network, you will need to create a configuration file, which defines the hyperparameters, and a model file, which defines the network architecture. The configuration file should be in JSON format and placed in `<CornerNet-Lite dir>/configs/`. Each configuration file should have a corresponding model file in `<CornerNet-Lite dir>/core/models/`; that is, if there is a `<model>.json` in `<CornerNet-Lite dir>/configs/`, there should be a `<model>.py` in `<CornerNet-Lite dir>/core/models/`. There is only one exception, which we will mention later.
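
For a quick check of this pairing, the sketch below (run from the repo root, and ignoring suffix configs, which are the exception mentioned above) lists which configuration files have a matching model file.
```python
# List which configs in configs/ have a matching model file in core/models/.
import glob
import os

for config_path in sorted(glob.glob("configs/*.json")):
    model_name = os.path.splitext(os.path.basename(config_path))[0]
    model_path = os.path.join("core", "models", model_name + ".py")
    status = "ok" if os.path.exists(model_path) else "no model file (suffix config?)"
    print(f"{model_name}: {status}")
```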

### Training and Evaluating a Model
To train a model:
```
python train.py <model>
```

We provide the configuration files and the model files for CornerNet-Saccade, CornerNet-Squeeze and CornerNet in this repo. Please check the configuration files in `<CornerNet-Lite dir>/configs/`.

To train CornerNet-Saccade:
```
python train.py CornerNet_Saccade
```
Please adjust the batch size in `CornerNet_Saccade.json` to accommodate the number of GPUs that are available to you.
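
In the configs provided with this commit, `chunk_sizes` appears to list how many images each GPU processes and sums to `batch_size` (for example, 4 + 5 × 9 = 49 for ten GPUs in `CornerNet.json`). A hedged sketch of editing the config for fewer GPUs, assuming `CornerNet_Saccade.json` follows the same layout:
```python
# Rewrite batch_size / chunk_sizes for a smaller GPU count.  Assumption: the
# config follows the same "system" layout as CornerNet.json in this commit.
import json

config_path = "configs/CornerNet_Saccade.json"
with open(config_path) as f:
    config = json.load(f)

num_gpus    = 4                    # example: four GPUs
chunk_sizes = [12] * num_gpus      # images per GPU; tune to your memory
config["system"]["chunk_sizes"] = chunk_sizes
config["system"]["batch_size"]  = sum(chunk_sizes)

with open(config_path, "w") as f:
    json.dump(config, f, indent=4)
```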

To evaluate the trained model:
```
python evaluate.py CornerNet_Saccade --testiter 500000 --split <split>
```

If you want to test different hyperparameters during evaluation and do not want to overwrite the original configuration file, you can do so by creating a configuration file with a suffix (`<model>-<suffix>.json`). There is no need to create `<model>-<suffix>.py` in `<CornerNet-Lite dir>/core/models/`.

To use the new configuration file:
```
python evaluate.py <model> --testiter <iter> --split <split> --suffix <suffix>
```
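
For illustration, the sketch below derives such a suffix configuration from `CornerNet.json`; the field values mirror `CornerNet-multi_scale.json` from this commit, and the suffix name `my_suffix` is only an example.
```python
# Create a "<model>-<suffix>.json" with evaluation-time overrides, leaving the
# original CornerNet.json untouched.  Values mirror CornerNet-multi_scale.json.
import json

with open("configs/CornerNet.json") as f:
    config = json.load(f)

config["db"]["test_scales"] = [0.5, 0.75, 1, 1.25, 1.5]  # multi-scale testing
config["db"]["merge_bbox"]  = True
config["db"]["weight_exp"]  = 10

with open("configs/CornerNet-my_suffix.json", "w") as f:
    json.dump(config, f, indent=4)
```
You could then evaluate with `python evaluate.py CornerNet --testiter <iter> --split <split> --suffix my_suffix`.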

We also include a configuration file for CornerNet under multi-scale setting, which is `CornerNet-multi_scale.json`, in this repo.

To use the multi-scale configuration file:
```
python evaluate.py CornerNet --testiter <iter> --split <split> --suffix multi_scale
```
2 changes: 2 additions & 0 deletions __init__.py
@@ -0,0 +1,2 @@
from .core.detectors import CornerNet, CornerNet_Squeeze, CornerNet_Saccade
from .core.vis_utils import draw_bboxes
81 changes: 81 additions & 0 deletions conda_packagelist.txt
@@ -0,0 +1,81 @@
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: linux-64
blas=1.0=mkl
bzip2=1.0.6=h14c3975_5
ca-certificates=2018.12.5=0
cairo=1.14.12=h8948797_3
certifi=2018.11.29=py37_0
cffi=1.11.5=py37he75722e_1
cuda100=1.0=0
cycler=0.10.0=py37_0
cython=0.28.5=py37hf484d3e_0
dbus=1.13.2=h714fa37_1
expat=2.2.6=he6710b0_0
ffmpeg=4.0=hcdf2ecd_0
fontconfig=2.13.0=h9420a91_0
freeglut=3.0.0=hf484d3e_5
freetype=2.9.1=h8a8886c_1
glib=2.56.2=hd408876_0
graphite2=1.3.12=h23475e2_2
gst-plugins-base=1.14.0=hbbd80ab_1
gstreamer=1.14.0=hb453b48_1
harfbuzz=1.8.8=hffaf4a1_0
hdf5=1.10.2=hba1933b_1
icu=58.2=h9c2bf20_1
intel-openmp=2019.0=118
jasper=2.0.14=h07fcdf6_1
jpeg=9b=h024ee3a_2
kiwisolver=1.0.1=py37hf484d3e_0
libedit=3.1.20170329=h6b74fdf_2
libffi=3.2.1=hd88cf55_4
libgcc-ng=8.2.0=hdf63c60_1
libgfortran-ng=7.3.0=hdf63c60_0
libglu=9.0.0=hf484d3e_1
libopencv=3.4.2=hb342d67_1
libopus=1.2.1=hb9ed12e_0
libpng=1.6.35=hbc83047_0
libstdcxx-ng=8.2.0=hdf63c60_1
libtiff=4.0.9=he85c1e1_2
libuuid=1.0.3=h1bed415_2
libvpx=1.7.0=h439df22_0
libxcb=1.13=h1bed415_1
libxml2=2.9.8=h26e45fe_1
matplotlib=3.0.2=py37h5429711_0
mkl=2018.0.3=1
mkl_fft=1.0.6=py37h7dd41cf_0
mkl_random=1.0.1=py37h4414c95_1
ncurses=6.1=hf484d3e_0
ninja=1.8.2=py37h6bb024c_1
numpy=1.15.4=py37h1d66e8a_0
numpy-base=1.15.4=py37h81de0dd_0
olefile=0.46=py37_0
opencv=3.4.2=py37h6fd60c2_1
openssl=1.1.1a=h7b6447c_0
pcre=8.42=h439df22_0
pillow=5.2.0=py37heded4f4_0
pip=10.0.1=py37_0
pixman=0.34.0=hceecf20_3
py-opencv=3.4.2=py37hb342d67_1
pycparser=2.18=py37_1
pyparsing=2.2.0=py37_1
pyqt=5.9.2=py37h05f1152_2
python=3.7.1=h0371630_3
python-dateutil=2.7.3=py37_0
pytorch=1.0.0=py3.7_cuda10.0.130_cudnn7.4.1_1
pytz=2018.5=py37_0
qt=5.9.7=h5867ecd_1
readline=7.0=h7b6447c_5
scikit-learn=0.19.1=py37hedc7406_0
scipy=1.1.0=py37hfa4b5c9_1
setuptools=40.2.0=py37_0
sip=4.19.8=py37hf484d3e_0
six=1.11.0=py37_1
sqlite=3.25.3=h7b6447c_0
tk=8.6.8=hbc83047_0
torchvision=0.2.1=py37_1
tornado=5.1=py37h14c3975_0
tqdm=4.25.0=py37h28b3542_0
wheel=0.31.1=py37_0
xz=5.2.4=h14c3975_4
zlib=1.2.11=ha838bed_2
54 changes: 54 additions & 0 deletions configs/CornerNet-multi_scale.json
@@ -0,0 +1,54 @@
{
"system": {
"dataset": "COCO",
"batch_size": 49,
"sampling_function": "cornernet",

"train_split": "trainval",
"val_split": "minival",

"learning_rate": 0.00025,
"decay_rate": 10,

"val_iter": 100,

"opt_algo": "adam",
"prefetch_size": 5,

"max_iter": 500000,
"stepsize": 450000,
"snapshot": 5000,

"chunk_sizes": [4, 5, 5, 5, 5, 5, 5, 5, 5, 5],

"data_dir": "./data"
},

"db": {
"rand_scale_min": 0.6,
"rand_scale_max": 1.4,
"rand_scale_step": 0.1,
"rand_scales": null,

"rand_crop": true,
"rand_color": true,

"border": 128,
"gaussian_bump": true,

"input_size": [511, 511],
"output_sizes": [[128, 128]],

"test_scales": [0.5, 0.75, 1, 1.25, 1.5],

"top_k": 100,
"categories": 80,
"ae_threshold": 0.5,
"nms_threshold": 0.5,

"merge_bbox": true,
"weight_exp": 10,

"max_per_image": 100
}
}
52 changes: 52 additions & 0 deletions configs/CornerNet.json
@@ -0,0 +1,52 @@
{
"system": {
"dataset": "COCO",
"batch_size": 49,
"sampling_function": "cornernet",

"train_split": "trainval",
"val_split": "minival",

"learning_rate": 0.00025,
"decay_rate": 10,

"val_iter": 100,

"opt_algo": "adam",
"prefetch_size": 5,

"max_iter": 500000,
"stepsize": 450000,
"snapshot": 5000,

"chunk_sizes": [4, 5, 5, 5, 5, 5, 5, 5, 5, 5],

"data_dir": "./data"
},

"db": {
"rand_scale_min": 0.6,
"rand_scale_max": 1.4,
"rand_scale_step": 0.1,
"rand_scales": null,

"rand_crop": true,
"rand_color": true,

"border": 128,
"gaussian_bump": true,
"gaussian_iou": 0.3,

"input_size": [511, 511],
"output_sizes": [[128, 128]],

"test_scales": [1],

"top_k": 100,
"categories": 80,
"ae_threshold": 0.5,
"nms_threshold": 0.5,

"max_per_image": 100
}
}