On training textual inversion #10104

Answered by linoytsaban
ZhihuaLiuEd asked this question in Q&A

Hey @ZhihuaLiuEd! With our scripts you can train more than one concept in a single training run when enabling pivotal tuning / textual inversion, provided your image dataset comes with a caption column in which each caption contains the unique identifiers assigned to the concept(s) present in that image. You can then pass --token_abstraction a comma-separated string containing the identifier for each concept, e.g. "TOK1,TOK2". These identifiers are converted into new tokens whose embeddings are optimized during training.
Generally, multiple concept training tends to be quite tricky to nail; personally I haven't explored much multiple concept training wi…
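To illustrate the mechanism described above, here is a minimal, hypothetical sketch of the bookkeeping involved: each identifier in the --token_abstraction string is mapped to freshly minted placeholder tokens, and captions are rewritten to use those placeholders before training. The function names, the number of tokens per concept, and the `<s0>`-style token format are illustrative assumptions, not the actual script internals.

```python
# Hypothetical sketch of multi-concept token abstraction, assuming:
# - each concept identifier gets a fixed number of new placeholder tokens
# - placeholder tokens are named "<s0>", "<s1>", ... (format is an assumption)

def build_token_map(token_abstraction: str, tokens_per_concept: int = 2) -> dict:
    """Map each comma-separated identifier (e.g. "TOK1,TOK2") to a run of
    newly minted placeholder tokens whose embeddings would be trained."""
    token_map = {}
    counter = 0
    for concept in token_abstraction.split(","):
        new_tokens = [f"<s{counter + i}>" for i in range(tokens_per_concept)]
        token_map[concept.strip()] = "".join(new_tokens)
        counter += tokens_per_concept
    return token_map


def expand_caption(caption: str, token_map: dict) -> str:
    """Replace each concept identifier in a caption with its placeholders,
    so the text encoder sees only the trainable new tokens."""
    for concept, placeholders in token_map.items():
        caption = caption.replace(concept, placeholders)
    return caption


token_map = build_token_map("TOK1,TOK2")
print(token_map)
# {'TOK1': '<s0><s1>', 'TOK2': '<s2><s3>'}
print(expand_caption("a photo of TOK1 next to TOK2", token_map))
# a photo of <s0><s1> next to <s2><s3>
```

In the real script the placeholder tokens would additionally be added to the tokenizer and the text-encoder embedding matrix resized, with gradients restricted to the new rows; this sketch only shows the caption-side mapping.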

Replies: 2 comments 2 replies

Answer selected by ZhihuaLiuEd
Category: Q&A
3 participants