As part of building reasoning-gym we are evaluating the performance of frontier AI systems on the datasets. This issue covers running evaluations for the DeepSeek R1 model. The datasets to evaluate can be found in the following google sheet.
For consistency we have used the OpenRouter provider for all our evaluations, and this will be required for this evaluation as well. Information on latency and quantisation can be found here. Note that you must use the Nebius provider for these evaluations. We have a separate evaluation repository where we store the results of the model evals for each dataset. Please upload results there.
```yaml
model: deepseek/deepseek-r1
provider: Nebius
category: algebra
datasets:
  - example_dataset
eval_dir: eval/r1
dataset_size: 50
dataset_seed: 45
developer_role: system
```
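For reference, here is a minimal sketch of reading this config with PyYAML. The field names match the YAML above, but the filename `config.yaml` is an assumption and the actual evaluation script may consume the config differently:

```python
# Sketch: load the evaluation config above (assumes it is saved as
# config.yaml); the real eval script in the repo may parse it differently.
import yaml  # pip install pyyaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["model"])     # deepseek/deepseek-r1
print(cfg["provider"])  # Nebius
for name in cfg["datasets"]:
    print(f"would evaluate dataset: {name} "
          f"(size={cfg['dataset_size']}, seed={cfg['dataset_seed']})")
```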
OpenRouter routes requests to providers who host the model; for more information, see here. Note that you must use the Nebius provider for every evaluation. You can specify this in the configuration when you run the script, as shown in the sketch below.
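To illustrate what the provider pinning does under the hood, here is a minimal sketch of a raw request against OpenRouter's chat completions endpoint. The `provider.order` and `allow_fallbacks` fields follow OpenRouter's documented request format, but verify against the current OpenRouter docs; the evaluation script handles this for you, so this is only for reference:

```python
# Sketch: force OpenRouter to route deepseek-r1 requests to Nebius only.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepseek/deepseek-r1",
        "provider": {
            "order": ["Nebius"],       # try Nebius first
            "allow_fallbacks": False,  # fail rather than fall back to another provider
        },
        "messages": [{"role": "user", "content": "2 + 2 = ?"}],
    },
    timeout=600,  # R1 responses can be very slow
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```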
Compared to all other models accessed via OpenRouter, DeepSeek R1 is extremely slow (this held across all providers in our preliminary tests). For this reason it might make sense to evaluate only a subset of tasks on R1. If you would like to take a subset of these evaluations (e.g. the algorithmic datasets), feel free to create a sub-issue; a sketch of selecting such a subset follows below. You should be able to use the following evaluation script to run your evaluations.
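If you do split off a subset, one way is to filter the sheet's dataset list down to a single category before writing the `datasets:` section of the config. The dataset-to-category mapping below is hypothetical; take the real assignments from the google sheet:

```python
# Sketch: pick out only the algorithmic datasets for a sub-issue.
# The mapping below is hypothetical; use the real assignments from the sheet.
DATASETS = {
    "example_dataset": "algebra",
    "another_dataset": "algorithmic",
}

subset = [name for name, category in DATASETS.items() if category == "algorithmic"]
print(subset)  # names to list under `datasets:` in the config
```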