Hi, thank you for the great project.
I downloaded the ICDAR2013 pre-trained weights for weakly supervised learning and ran inference with them.
They performed better than the MLT pre-trained weights on my Korean dataset, even though Korean data is not included in ICDAR2013.
Is it correct that this weight file was trained from scratch on the ICDAR 2013 data?
Thank you.
Hello @yeonsikch. The ICDAR13 model is first trained on the SynthText dataset using strong supervision and later fine-tuned using weak supervision on the ICDAR 2013 dataset. Because the SynthText dataset has a lot of diversity in text backgrounds and fonts, it is plausible that the model generalizes to Korean data as well.
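As a minimal sketch of that two-stage regime, the snippet below only illustrates resuming stage-2 (weakly supervised) fine-tuning from stage-1 (strongly supervised) weights; the tiny stand-in network, checkpoint layout, and learning rate are assumptions for illustration, not the actual model or training script of this repository.

```python
import torch
import torch.nn as nn

# Stand-in detector; the real model (e.g. a CRAFT-style network) is much larger.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 2, 1))

# Stage 1: strong supervision on SynthText (character-level boxes are available).
# Stage 2: weak supervision on ICDAR 2013 (only word-level boxes; character
#          pseudo-labels come from the stage-1 model).
stage1_ckpt = {"model": model.state_dict()}      # as if saved after SynthText pre-training
model.load_state_dict(stage1_ckpt["model"])      # initialise stage-2 fine-tuning from it
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # smaller LR for fine-tuning (assumption)
```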