
Adaptive depth-controlled bidirectional diffusion


Rowerliu/ADD


ADD

Generating Progressive Images from Pathological Transitions via Diffusion Model
Zeyu Liu, Tianyi Zhang, Yufang He, Yu Zhao, Yunlu Feng, Guanglei Zhang
arXiv, GitHub

🗃️ Overview

Pathological image analysis is a crucial field in deep learning applications. However, training effective models demands large-scale annotated data, which is hard to obtain due to sampling and annotation scarcity. Rapidly developing generative models have shown potential for producing additional training samples in recent studies, but they struggle with generalization diversity when only limited training data is available, which prevents them from generating effective samples. Inspired by the pathological transitions between different stages, we propose an adaptive depth-controlled diffusion (ADD) network for effective data augmentation. This novel approach is rooted in domain migration, where a hybrid attention strategy blends local and global attention priorities. Guided by feature measuring, the adaptive depth-controlled strategy steers the bidirectional diffusion, simulating pathological feature transitions while maintaining locational similarity. Based on a tiny training set (≤ 500 samples), ADD yields cross-domain progressive images with corresponding soft labels. Experiments on two datasets show significant improvements in generation diversity, and downstream classification tasks highlight the effectiveness of the generated progressive samples.

🗃️ Usage

Generating a sequence of intermediate images between a source domain and a target domain:

  1. Train a diffusion model on your data based on guided-diffusion.
  2. Set the paths of the trained models, then generate intermediate images (the full diffusion process uses 1000 steps, yielding 10 intermediate images):
python scripts/frequency_generating_m_samples.py --diffusion_steps=1000 --amount=10
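With the flags above, the 1000-step diffusion process is sampled at evenly spaced depths, and each intermediate image carries a soft label between the source and target domains. The sketch below shows one plausible depth-to-soft-label mapping (linear in depth); the function name and the linear mixing rule are illustrative assumptions, not the repository's exact implementation.

```python
# Hypothetical sketch: soft labels for intermediate images from bidirectional
# diffusion. Assumes a linear mix between domains by diffusion depth.

def intermediate_soft_labels(diffusion_steps: int, amount: int):
    """Return (depth, target_weight, source_weight) for `amount` evenly
    spaced depths: the deeper the diffusion runs toward the target domain,
    the larger the target-domain weight in the soft label."""
    stride = diffusion_steps // amount
    labels = []
    for k in range(1, amount + 1):
        depth = k * stride
        w_target = depth / diffusion_steps
        labels.append((depth, w_target, 1.0 - w_target))
    return labels

# --diffusion_steps=1000 --amount=10: the first intermediate image sits at
# depth 100 with soft label (0.1 target, 0.9 source).
print(intermediate_soft_labels(1000, 10)[0])  # (100, 0.1, 0.9)
```

Such soft labels let downstream classifiers treat the progressive samples as partial members of both classes rather than forcing a hard assignment.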

🗃️ Acknowledgements

This implementation is based on / inspired by:
openai/guided-diffusion
openai/improved-diffusion
suxuann/ddib

🗃️ Environments

Same as IDDPM / ADM / DDIB

🗃️ Materials

The comparison methods are listed here:

| Model  | Base method | Paper                                                                      | Code   |
|--------|-------------|----------------------------------------------------------------------------|--------|
| ProGAN | GAN         | Progressive Growing of GANs for Improved Quality, Stability, and Variation | GitHub |
| IDDPM  | Diffusion   | Improved Denoising Diffusion Probabilistic Models                          | GitHub |
| LoFGAN | GAN         | LoFGAN: Fusing Local Representations for Few-shot Image Generation         | GitHub |
| MixDL  | GAN         | Few-shot Image Generation with Mixup-based Distance Learning               | GitHub |
