
Test.py Input Shape Error #12

Open
PeterVennerstrom opened this issue Jan 7, 2019 · 5 comments


PeterVennerstrom commented Jan 7, 2019

With either grayscale or color images, test.py gives an error. For example, feeding a 184 × 274 grayscale image gives: ValueError: Cannot feed value of shape (1, 184, 274, 1, 3) for Tensor 'input_gray:0', which has shape '(?, ?, ?, 1)'

Thanks for your help!

Edit:

In dataset.py, I changed line 147 from img = imread(path) to img = imread(path, mode='L'). Now the shape is (1, 184, 274, 1).

This results in a second error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,512,3,5] vs. shape[1] = [1,512,4,6]

Thanks again!

@TonyZhang1002

After changing the line you mentioned in dataset.py, my test could run.

But my test images are 256 × 256.

Maybe changing the image scale would work.

@PeterVennerstrom
Author

Tried a 256 × 256 image and it ran.

To train on our own data with a new image size, would we write a new class Custom_Model(BaseModel) in models.py which corresponds to the new data set?

Thanks!

@knazeri
Member

knazeri commented Jan 9, 2019

@PeterVennerstrom

To train on our own data with a new image size, would we write a new class Custom_Model(BaseModel) in models.py which corresponds to the new data set?

No, it's not that. The problem is the U-Net architecture, which uses strided convolutions for downsampling, combined with odd dimensions. Say your input dimension is (184, 274); using 7 layers of strided convolution, here's what you get in the encoder branch:
(184, 274) -> (92, 137) -> (46, 69) -> (23, 35) -> (12, 18) -> (6, 9) -> (3, 5) -> (2, 3)

In the decoder branch we upsample by a factor of 2 and concatenate each layer with its corresponding encoder layer:
(2, 3) -> (4, 6) -> (8, 12) -> (16, 24) -> (32, 48) -> (64, 96) -> (128, 192) -> (256, 384)

And as you can see, the decoder branch completely diverges. The reason is that at some point in the encoder branch, one of the output dimensions is an odd number!

To prevent that, you can either make sure your input dimensions are powers of 2 (128, 256, 512, ...) or, if your input size is fixed, adjust the convolution paddings in the encoder branch so that every output dimension is an even number!
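The arithmetic above can be reproduced with a short script. This is only a sketch of the shape bookkeeping, assuming (as described above) a 7-layer encoder whose stride-2 "same" convolutions halve each dimension with ceiling rounding, and a decoder that doubles each dimension per step:

```python
import math

def encoder_dims(h, w, layers=7):
    """Stride-2 'same' convolutions halve each dimension, rounding up."""
    dims = [(h, w)]
    for _ in range(layers):
        h, w = math.ceil(h / 2), math.ceil(w / 2)
        dims.append((h, w))
    return dims

def decoder_dims(h, w, layers=7):
    """Each upsampling step doubles both dimensions."""
    dims = [(h, w)]
    for _ in range(layers):
        h, w = h * 2, w * 2
        dims.append((h, w))
    return dims

enc = encoder_dims(184, 274)
dec = decoder_dims(*enc[-1])
# Skip connections concatenate encoder layer i with decoder layer (layers - i);
# the concat only works if the reversed encoder sizes equal the decoder sizes,
# which requires every intermediate encoder output to be even.
print(enc)  # [(184, 274), (92, 137), ..., (2, 3)]
print(dec)  # [(2, 3), (4, 6), ..., (256, 384)]
```

For a 256 × 256 input, every encoder output is even, so the reversed encoder sizes and the decoder sizes line up exactly, which is why that size ran without error above.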

@liwt31

liwt31 commented Jan 10, 2019

The same problem led me to this issue. Maybe this should be added to the documentation, or a resize preprocessing step added for the test data?
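Such a preprocessing step could compute a safe target size by rounding each dimension up to a multiple of 2**layers. This is a sketch under an assumption: requiring a multiple of 2**7 = 128 for the 7-layer encoder described above is a conservative sufficient condition (stricter than strictly necessary), and the actual resizing or padding to the returned size would be done with whatever image library the project uses:

```python
def valid_size(h, w, layers=7):
    """Round each dimension up to a multiple of 2**layers so that every
    stride-2 downsampling step in the encoder produces an even size,
    keeping the encoder/decoder skip connections aligned."""
    m = 2 ** layers
    # -(-x // m) is ceiling division for positive integers
    return (-(-h // m) * m, -(-w // m) * m)

print(valid_size(184, 274))  # (256, 384)
print(valid_size(256, 256))  # (256, 256) -- already valid, unchanged
```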

@PeterVennerstrom
Author

Thanks for the clarification. Trained a model on the Kaggle Humpback Whale Identification data using (512 x 256) images.

https://imgur.com/a/nSEaGxa

Great work!
