training with disconnected nodes to head #400
Comments
This step is not clear to me: could you elaborate, or give an example?
I have
Edge between nodes: the edges between nodes would give ALL the instances, not the relevant subset. Am I misunderstanding? Relevant to #322
Edges will give all the connected instances, but we need to think about it and make it well defined in every situation. I am talking about using the examples that are not necessarily connected to a head object but can still serve as examples for the single classifiers. For example, in CoNLL, relations are not a good starting point for deriving the CoNLLToken examples: we miss many of those if we just use the edges from relations to tokens.
👍 Agree with the need for simplification. We can still have
For the moment, when training a set of classifiers, we use the most global object, i.e. the head, and the other objects (of the destination nodes) are derived from the head. This makes us lose all the examples that are not connected to a head object and brings down the performance of the single classifiers in the jointLearning setting. One way to fix this is to use the edge from the head node to the specific destination node that we need, and then use all the instances of the destination node. This is in contrast to the current implementation, in which we get the instances of the head node first and then go from each head instance to the connected instances of the destination node.
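To make the difference concrete, here is a minimal sketch (not the project's actual API; `Graph`, `instances_via_head`, and `instances_direct` are hypothetical names) of the two instance-collection strategies: traversing edges from head instances versus taking all instances of the destination node directly.

```python
# Hypothetical toy model of the two strategies described above.
# Names (Graph, add_head, add_token, ...) are illustrative only.

class Graph:
    def __init__(self):
        self.heads = []    # "head" (most global) instances, e.g. relations
        self.tokens = []   # all destination-node instances, e.g. tokens
        self.edges = {}    # head instance -> connected token instances

    def add_head(self, head, connected_tokens):
        self.heads.append(head)
        self.edges[head] = list(connected_tokens)

    def add_token(self, token):
        self.tokens.append(token)

def instances_via_head(g):
    """Current behavior: start from head instances and follow their edges.
    Any token not reachable from a head is silently dropped."""
    seen = []
    for head in g.heads:
        seen.extend(g.edges.get(head, []))
    return seen

def instances_direct(g):
    """Proposed fix: use every instance of the destination node,
    whether or not it is connected to a head."""
    return list(g.tokens)

g = Graph()
for t in ["t1", "t2", "t3", "t4"]:
    g.add_token(t)
g.add_head("rel1", ["t1", "t2"])   # t3 and t4 have no head connection

print(instances_via_head(g))        # ['t1', 't2']  (t3, t4 are lost)
print(sorted(instances_direct(g)))  # ['t1', 't2', 't3', 't4']
```

The disconnected tokens t3 and t4 are exactly the training examples the current head-first traversal loses, which is why the single classifiers degrade in the jointLearning setting.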