So there is no attention block in BrushNet, right? No self-attn and cross-attn #65

Open
yunniw2001 opened this issue Aug 7, 2024 · 0 comments

Comments

@yunniw2001

The paper says: "To process the masked image features, BrushNet utilizes a clone of the pre-trained diffusion model while excluding its cross-attention layers. The pretrained weights of the diffusion model serve as a strong prior for extracting the masked image features, while the removal of the cross-attention layers ensures that only pure image information is considered within this additional branch." So I assumed that BrushNet keeps only the self-attention blocks.

But when I checked the BrushNet config file and code and printed the BrushNet modules, I only saw ResNet blocks, linear layers, etc. The config file also specifies pure conv blocks for the 2D blocks (DownBlock2D, plus the corresponding mid and up blocks), as sketched below.
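
For reference, this is roughly how I checked the block types; a minimal sketch, assuming the checkpoint ships a diffusers-style `config.json` with these key names (the file path is just a placeholder):

```python
# Minimal sketch: read the 2D block types straight from the BrushNet config.
# Both the path "brushnet/config.json" and the exact key names are assumptions
# based on the usual diffusers config layout; adjust them for the real checkpoint.
import json

with open("brushnet/config.json") as f:
    cfg = json.load(f)

print(cfg.get("down_block_types"))  # only conv blocks such as "DownBlock2D", no "CrossAttn*"
print(cfg.get("mid_block_type"))
print(cfg.get("up_block_types"))
```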

So what is removed is not only the cross-attention layers but also the self-attention layers, i.e. the whole attention blocks inside CrossAttnDownBlock2D, right?
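
For context, in diffusers a `CrossAttnDownBlock2D` wraps its ResNet layers with Transformer blocks that contain both `attn1` (self-attention) and `attn2` (cross-attention), while `DownBlock2D` contains only `ResnetBlock2D` layers. Here is a minimal sketch that makes the difference visible (channel sizes are arbitrary, and the import path depends on the diffusers version):

```python
# Hedged sketch (not the authors' code): count attention modules in a
# cross-attention down block vs. a plain conv down block from diffusers.
try:  # the import path moved in newer diffusers releases
    from diffusers.models.unets.unet_2d_blocks import CrossAttnDownBlock2D, DownBlock2D
except ImportError:
    from diffusers.models.unet_2d_blocks import CrossAttnDownBlock2D, DownBlock2D
from diffusers.models.attention_processor import Attention

# Arbitrary small sizes, chosen only for inspection.
cross_block = CrossAttnDownBlock2D(in_channels=64, out_channels=64, temb_channels=128)
conv_block = DownBlock2D(in_channels=64, out_channels=64, temb_channels=128)

def n_attn(module):
    """Count Attention submodules (covers both attn1/self and attn2/cross)."""
    return sum(isinstance(m, Attention) for m in module.modules())

print(n_attn(cross_block))  # non-zero: Transformer blocks with attn1 (self) and attn2 (cross)
print(n_attn(conv_block))   # 0: only ResnetBlock2D layers and a downsampler, no attention
```

With recent diffusers defaults the first count is 2 (one `attn1` and one `attn2` per transformer layer) and the second is 0, which is what makes me think the whole attention block is dropped rather than only the cross-attention layers.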
