
Techniques For Increasing Image Quality Without Buying a Better GPU

ProGamerGov edited this page Jul 8, 2016 · 32 revisions

Single-Image Super-Resolution for Anime-Style Art using Deep Convolutional Neural Networks. It also supports photos.

Linux: https://github.com/nagadomi/waifu2x

Site (has an upload size limit): http://waifu2x.udp.jp/

Windows (Use Chrome's translate feature): http://inatsuka.com/extra/koroshell/

Waifu2x was designed to increase the quality of anime images and to resize them. Depending on the style of your image, the resizing and/or noise reduction will work to varying degrees.
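If you use the Torch version from the nagadomi/waifu2x repo linked above, a typical invocation looks something like the following. The flag names are an assumption based on that repo's README; check `th waifu2x.lua -h` in your checkout, since they may differ between versions:

```shell
# Run from the waifu2x checkout. -m selects the mode (scale, noise, or
# noise_scale) and -noise_level controls denoising strength. Flag names
# are assumptions from the repo's README -- verify against your version.
th waifu2x.lua -m noise_scale -noise_level 1 \
  -i input.png -o output_2x_denoised.png
```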


IrfanView

"IrfanView has the best sharpening algorithm I've seen. After you resize your image with waifu, open it in irfanview and hit SHIFT+S to sharpen". - Neural-Style User on Reddit.

Works for Windows: http://www.irfanview.com/


NIN Upres

  • content_image: VGG generated image

  • style_image: original style (used to generate VGG version)

  • keep the tv_weight as low as possible (I've used 0.00001)

  • This technique works best for traditional-art styles (like Van Gogh's Starry Night) where you can see some grunge/noise. For smooth styles (anime/sharp-edged), you're better off using waifu2x.

Source


Example of using NIN to increase the size and quality of a Places205-VGG image:

  1. First, create an image with neural-style using the Places205-VGG model.

  2. Depending on the GPU resources available to you, either convert the previously created image to a .jpg or leave it as a .png.

  3. Run the following command using the same style image that you originally used to create your content image. Make sure your content image is the one you created in step 1:

th neural_style.lua -style_image StyleImage.jpg -content_image ContentImage.jpg -output_image out.png -tv_weight 0.0001 -image_size 2500 -save_iter 50 -content_weight 10 -style_weight 1000 -num_iterations 1000 -model_file models/nin_imagenet_conv.caffemodel -proto_file models/train_val.prototxt -content_layers relu1,relu7,relu12 -style_layers relu1,relu3,relu5,relu7,relu9 -backend cudnn -cudnn_autotune -optimizer adam

Examples/Results:

Tubingen: https://imgur.com/a/ALzL7

Brad Pitt: https://imgur.com/a/Ws8x5

Escher Sphere: https://imgur.com/a/KS1mk

Stanford: https://imgur.com/a/M5jlz

Notes:

This has only been tested with Starry Night and the example images. I used an Amazon g2.2xlarge spot instance, which had 4 GB of GPU memory, but this should work with any combination of models and settings.

I used the following command to generate the original images:

th neural_style.lua -style_image StyleImage.jpg -content_image ContentImage.jpg -output_image out.png -tv_weight 0.0001 -save_iter 50 -num_iterations 1000 -model_file models/snapshot_iter_765280.caffemodel -proto_file models/deploy_10.prototxt -backend cudnn -cudnn_autotune -optimizer adam

Download the Places205-VGG model here: http://places.csail.mit.edu/model/places205vgg.tar.gz
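To fetch and unpack the model from the command line, something like the following should work. The commands above expect `snapshot_iter_765280.caffemodel` and `deploy_10.prototxt` under `models/`; the exact file names inside the archive are assumptions, so check them after extracting:

```shell
# Download the Places205-VGG archive and extract it into neural-style's
# models/ directory, then compare the extracted file names against the
# -model_file and -proto_file arguments passed to neural_style.lua.
wget http://places.csail.mit.edu/model/places205vgg.tar.gz
tar -xzf places205vgg.tar.gz -C models/
```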


Adobe InDesign Tiling

  1. Dream a neural-style result image

  2. In Adobe InDesign, set up an overlapping grid, then paste the result image into each box.

  3. Set the document size to match your grid box size and create a separate page for each box.

  4. Output each of the pages as its own JPEG image using the export function.

  5. Dream each of the 12 images using the original style image. It's best to set up a loop so you don't have to wait around to run each one.

  6. Using the grid from step 2, create a new document the exact size of the whole grid. Drag each of your new result images into its spot in the grid and fit the photo size to the box.

  7. Use the Gradient and Basic Feather effects to blend the tile edges together.

  8. Output the final result image. Note: since my InDesign document was set up at the size of the original result, the exported resolution doesn't reflect the detail now available, because each box contains a higher resolution than the original 72 dpi. To compensate, I just increase the resolution within the output settings, which gains detail in the final output.
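The loop mentioned in step 5 can be sketched in shell. This version only prints one neural-style command per tile so you can inspect the commands first; the tile file names and flags are placeholders, so adapt them to your own export names, then remove the `echo` (or pipe the output to `sh`) to actually run it:

```shell
# Print one neural_style.lua command per exported tile (tile_1.jpg .. tile_12.jpg).
# The echo makes this a dry run; remove it, or pipe the output to sh, to execute.
for i in $(seq 1 12); do
  echo th neural_style.lua \
    -style_image StyleImage.jpg \
    -content_image "tile_${i}.jpg" \
    -output_image "tile_${i}_out.png" \
    -backend cudnn -cudnn_autotune -optimizer adam
done
```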

Source
Screencap with notes

Hopefully a tiling method that does not require InDesign is found/created soon.

Naming Conventions:

Model Name + Tiled + Optional Different Model Used for Upres + Upres

Examples:

VGG19 Tiled Upres

VGG19 Tiled NIN Upres