Hello,
I'm currently working on evaluating the performance of OpenAI's models on the Blocksworld dataset using the Tree-of-Thoughts (ToT) approach. However, I encountered an issue when trying to use the get_loglikelihood function within the fast_reward function of the codebase.
Specifically, the line of code causing the problem is:

intuition = self.base_model.get_loglikelihood(inputs + "\n", [inputs + "\n" + action])[0]
It seems that the OpenAI models do not support the get_loglikelihood function, so this call raises a NotImplementedError. I understand that different backends have different capabilities, but is there an alternative approach or workaround that achieves the same functionality with OpenAI's models?
Code in reasoners/lm/openai_model.py:

def get_loglikelihood(
    self, prompt: Union[str, list[str]], **kwargs
) -> list[np.ndarray]:
    raise NotImplementedError("GPTCompletionModel does not support get_log_prob")
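One workaround I have been considering, in case it is useful to others: approximate the intuition score with a self-evaluation call, asking the model whether the candidate action looks like a good next step and reading the log-probability of its "yes" answer from the Chat Completions logprobs. This is only a rough sketch of the idea; the model name, prompt wording, and the approximate_intuition helper below are placeholders of mine, not code from this repository.

import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def approximate_intuition(state: str, action: str, model: str = "gpt-4o-mini") -> float:
    """Approximate log-probability that `action` is a good next step from `state`."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You judge Blocksworld actions. Answer only 'yes' or 'no'."},
            {"role": "user",
             "content": f"{state}\nProposed next action: {action}\nIs this a good next step?"},
        ],
        max_tokens=1,
        temperature=0,
        logprobs=True,
        top_logprobs=5,
    )
    # Collect the probability mass assigned to the first generated token being some form of "yes".
    top = resp.choices[0].logprobs.content[0].top_logprobs
    p_yes = sum(math.exp(t.logprob) for t in top if t.token.strip().lower().startswith("yes"))
    return math.log(p_yes) if p_yes > 0 else -1e4  # large negative score when "yes" never appears

This is not the same quantity as the true continuation log-likelihood of the action, but it might serve as a stand-in for the intuition term in fast_reward when a completion-style logprob API is not available.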
Thank you very much for your time and assistance!
Thanks for the question! We do plan to update some of our examples to make them more general. However, we are shorthanded, and I'm not sure when this can be done. If you are interested, you are welcome to join our team and work with us!