Hello, I have a pre-trained model for text sentiment polarity classification, with a structure roughly composed of RoBERTa+TextCNN. Can I use the Introspective Rationale Explainer to interpret its output? I aim to obtain the importance/contribution of each word towards the final predicted polarity.
@nochimake I would suggest trying an Explainable AI (XAI) approach. XAI aims to make the decision-making processes of machine learning models transparent and interpretable.
Refer to this: https://github.com/explainX/explainx
Through its LIME and SHAP libraries, it is possible to interpret a model's decisions through visualizations.
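For example, a minimal LIME sketch for getting per-word contributions from a black-box sentiment classifier might look like the following. Here `load_my_model`, the class names, and the tokenization details are placeholders for your own RoBERTa+TextCNN pipeline; LIME only needs a function that maps a batch of raw strings to class probabilities:

```python
# Sketch: per-word importance for a black-box sentiment classifier via LIME.
import numpy as np
from lime.lime_text import LimeTextExplainer

model = load_my_model()  # hypothetical: returns your trained RoBERTa+TextCNN

def predict_proba(texts):
    """Return an (n_samples, n_classes) array of class probabilities."""
    # Replace with your own tokenization + forward pass (e.g. softmax over logits).
    return np.array([model.predict_proba(t) for t in texts])

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(
    "The movie was surprisingly good.",  # text to explain
    predict_proba,                       # black-box prediction function
    num_features=10,                     # top-k words by contribution
)
print(exp.as_list())  # [(word, weight), ...] toward the predicted polarity
```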
You can also use the Introspective Rationale Explainer for this, but compare the accuracy of the resulting explanations. In my opinion, the XAI approach above works best.
Please let me know if this helps.
Thanks