CUGAN
Real-CUGAN is a super-resolution neural network for anime-style arts, based on waifu2x-cunet and trained by bilibili on millions of anime images.
Link:
The models support 2x/3x/4x upscaling as well as denoising; valid noise/scale combinations are sketched below the usage example.
To simplify usage, we provide a Python wrapper module, vsmlrt:
```python
import vapoursynth as vs
from vapoursynth import core

from vsmlrt import CUGAN, Backend

src = core.std.BlankClip(format=vs.RGBS)

# backend could be:
#  - CPU Backend.OV_CPU(): the recommended CPU backend; generally faster than ORT_CPU.
#  - CPU Backend.ORT_CPU(num_streams=1, verbosity=2): vs-ort CPU backend.
#  - GPU Backend.ORT_CUDA(device_id=0, cudnn_benchmark=True, num_streams=1, verbosity=2): vs-ort CUDA backend.
#    - use device_id to select the device
#    - set cudnn_benchmark=False to reduce script reload latency when debugging, at a slight cost in throughput.
#  - GPU Backend.TRT(fp16=True, device_id=0, num_streams=1): TensorRT runtime, the fastest runtime on NVIDIA GPUs.
flt = CUGAN(src, noise=-1, scale=2, backend=Backend.ORT_CUDA())
```
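To make the noise/scale mapping concrete, here is a minimal sketch of a few combinations. It assumes the parameter semantics documented for the vsmlrt wrapper (noise=-1 disables denoising, noise=0 selects the conservative model, noise=1/2/3 select increasing denoise strengths, and the 3x/4x models only ship no-denoise, conservative and denoise3x variants); double-check against the vsmlrt version you have installed.

```python
import vapoursynth as vs
from vapoursynth import core

from vsmlrt import CUGAN, Backend

src = core.std.BlankClip(format=vs.RGBS, width=1280, height=720)

# 2x upscale with the strongest denoise model.
up2x_dn3 = CUGAN(src, noise=3, scale=2, backend=Backend.OV_CPU())

# 3x upscale without denoising.
up3x = CUGAN(src, noise=-1, scale=3, backend=Backend.OV_CPU())

# 4x upscale with the conservative model.
up4x_cons = CUGAN(src, noise=0, scale=4, backend=Backend.OV_CPU())
```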
Due to issues in TensorRT, Turing and Ampere cards cannot use the vs-trt backend for CUGAN; the vs-ort CUDA backend works fine.
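For a complete script, the clip has to be converted to 32-bit float RGB before CUGAN and back to YUV afterwards. A minimal sketch, assuming an L-SMASH-Works source filter and a BT.709 input; the file name, source plugin and output format are placeholders, not part of this wiki:

```python
import vapoursynth as vs
from vapoursynth import core

from vsmlrt import CUGAN, Backend

# Hypothetical source; any YUV clip works.
src = core.lsmas.LWLibavSource("input.mkv")

# CUGAN expects vs.RGBS (32-bit float RGB).
rgb = core.resize.Bicubic(src, format=vs.RGBS, matrix_in_s="709")

# ORT_CUDA also works on Turing/Ampere cards affected by the TensorRT issue above.
flt = CUGAN(rgb, noise=-1, scale=2, backend=Backend.ORT_CUDA(device_id=0, num_streams=1))

# Convert back to YUV for encoding.
out = core.resize.Bicubic(flt, format=vs.YUV420P10, matrix_s="709")
out.set_output()
```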
Measurements: FPS / device memory (GB)
Device memory is reported as:
- CPU: private memory of the process, including VapourSynth
- GPU: device memory, including the context