Preference datasets
- trl-lib/hh-rlhf-helpful-base (updated Jan 8, 2025)
- trl-lib/lm-human-preferences-descriptiveness (updated Jan 8, 2025)
- trl-lib/lm-human-preferences-sentiment (updated Jan 8, 2025)
- trl-lib/rlaif-v (updated Jan 8, 2025)
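Datasets in this collection follow TRL's preference format: each record pairs a chosen with a rejected completion, with the prompt either kept in its own field or folded into both completions. A minimal sketch of the two layouts, with invented example text:

```python
# Explicit-prompt preference record: prompt plus chosen/rejected completions.
# The text here is invented purely for illustration.
preference_example = {
    "prompt": "What color is the sky?",
    "chosen": "The sky appears blue because of Rayleigh scattering.",
    "rejected": "The sky is green.",
}

# Implicit-prompt variant: the prompt is repeated inside both completions.
implicit_preference_example = {
    "chosen": "What color is the sky? Blue.",
    "rejected": "What color is the sky? Green.",
}
```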
Prompt-completion datasets
- trl-lib/tldr (updated Jan 8, 2025)
- trl-lib/OpenMathReasoning (updated Apr 26, 2025)
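Prompt-completion records keep the input and its target completion in separate fields, which lets a trainer mask the prompt tokens out of the loss. A sketch with invented text, including the conversational variant that uses role/content messages:

```python
# Standard (plain-text) prompt-completion record.
prompt_completion_example = {
    "prompt": "The sky is",
    "completion": " blue.",
}

# Conversational variant: both fields are lists of chat messages.
conversational_example = {
    "prompt": [{"role": "user", "content": "What color is the sky?"}],
    "completion": [{"role": "assistant", "content": "It is blue."}],
}

# For plain language modeling the two fields simply concatenate.
full_text = prompt_completion_example["prompt"] + prompt_completion_example["completion"]
```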
Unpaired preference datasets
- trl-lib/ultrafeedback-gpt-3.5-turbo-helpfulness (updated Jan 8, 2025)
- trl-lib/kto-mix-14k (updated Mar 25, 2024)
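Unpaired preference data (the format KTO trains on) attaches a boolean desirability label to a single completion instead of comparing two. A sketch with invented text, plus a hypothetical `unpair` helper showing how one paired record maps onto two unpaired rows:

```python
# One unpaired preference record: a single completion with a desirability label.
unpaired_example = {
    "prompt": "Name a prime number.",
    "completion": "7",
    "label": True,  # completion judged desirable
}

def unpair(example):
    """Hypothetical helper: split one chosen/rejected pair into two labeled rows."""
    return [
        {"prompt": example["prompt"], "completion": example["chosen"], "label": True},
        {"prompt": example["prompt"], "completion": example["rejected"], "label": False},
    ]
```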
Online-DPO
- trl-lib/pythia-1b-deduped-tldr-online-dpo (1B, updated Aug 2, 2024)
- trl-lib/pythia-1b-deduped-tldr-sft (1B, updated Aug 2, 2024)
- trl-lib/pythia-6.9b-deduped-tldr-online-dpo (7B, updated Aug 2, 2024)
- trl-lib/pythia-2.8b-deduped-tldr-sft (updated Aug 2, 2024)
Stepwise supervision datasets
- trl-lib/math_shepherd (updated Jan 8, 2025)
- trl-lib/prm800k (updated Jan 8, 2025)
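Stepwise (process) supervision datasets such as math_shepherd label each intermediate reasoning step rather than only the final answer, which is what process reward models train on. A sketch with invented content, plus a hypothetical helper that locates the first wrong step:

```python
# One stepwise supervision record: a list of steps and one label per step.
stepwise_example = {
    "prompt": "Compute 2 + 3 * 4.",
    "completions": [
        "First evaluate 3 * 4 = 12.",
        "Then 2 + 12 = 14.",
        "So the answer is 15.",  # incorrect final step
    ],
    "labels": [True, True, False],  # per-step correctness labels
}

def first_error_step(example):
    """Hypothetical helper: index of the first incorrect step, or None if all correct."""
    for i, ok in enumerate(example["labels"]):
        if not ok:
            return i
    return None
```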
Prompt-only datasets
- trl-lib/ultrafeedback-prompt (updated Jan 8, 2025)
- trl-lib/DeepMath-103K (updated Nov 14, 2025)
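Prompt-only records carry just the input; completions are generated online during training (as in online DPO or other online methods). A sketch of the standard and conversational layouts, with invented text:

```python
# Standard prompt-only record: nothing but the prompt string.
prompt_only_example = {"prompt": "The capital of France is"}

# Conversational variant: the prompt is a list of chat messages.
conversational_prompt_only = {
    "prompt": [{"role": "user", "content": "What is the capital of France?"}]
}
```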
Comparing DPO with IPO and KTO
A collection of chat models to explore the differences between three alignment techniques: DPO, IPO, and KTO.
- teknium/OpenHermes-2.5-Mistral-7B (text generation, updated Feb 19, 2024)
- Intel/orca_dpo_pairs (updated Nov 29, 2023)
- trl-lib/OpenHermes-2-Mistral-7B-ipo-beta-0.1-steps-200 (updated Dec 20, 2023)
- trl-lib/OpenHermes-2-Mistral-7B-ipo-beta-0.2-steps-200 (updated Dec 20, 2023)
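For context on how these runs differ, the per-pair objectives can be sketched from the DPO and IPO papers. This is a scalar sketch over per-sequence log-probabilities, not TRL's batched implementation; KTO is omitted because it operates on unpaired data with a reference-point term:

```python
import math

def dpo_loss(beta, pi_chosen, pi_rejected, ref_chosen, ref_rejected):
    """DPO: -log sigmoid of the beta-scaled implicit reward margin.

    Arguments are log-probabilities of the chosen/rejected completions
    under the policy (pi_*) and the frozen reference model (ref_*).
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def ipo_loss(beta, pi_chosen, pi_rejected, ref_chosen, ref_rejected):
    """IPO: squared distance of the log-ratio margin from 1 / (2 * beta)."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return (margin - 1.0 / (2.0 * beta)) ** 2

# At initialization (policy == reference) the DPO loss is log 2 per pair.
init_loss = dpo_loss(0.1, -1.0, -2.0, -1.0, -2.0)
```

The beta-0.1 / beta-0.2 suffixes in the checkpoint names above refer to this beta hyperparameter; larger beta penalizes drifting from the reference model more strongly.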