Add ParseBench evaluation results

#72

This PR ensures your model shows up at https://huggingface.co/datasets/llamaindex/ParseBench.

This is based on the new evaluation results feature: https://huggingface.co/docs/hub/eval-results.

Note: this includes per-dimension performance across all 5 ParseBench dimensions (text_content, text_formatting, layout, chart, table) along with the overall mean score.
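For reference, eval results like these are expressed in the model card's `model-index` metadata. A minimal sketch of what this PR's metadata might look like — the model name, metric types, and score values below are placeholders, not the actual values in this PR:

```yaml
model-index:
- name: your-model-name          # placeholder
  results:
  - task:
      type: document-parsing     # assumed task type
    dataset:
      type: llamaindex/ParseBench
      name: ParseBench
    metrics:
    - type: accuracy
      name: text_content
      value: 0.0                 # placeholder score
    - type: accuracy
      name: mean
      value: 0.0                 # placeholder overall mean
```

The remaining dimensions (text_formatting, layout, chart, table) would be listed as additional entries under `metrics` in the same way.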

Can you provide the prompt used when performing a ParseBench task?

@RENKEYE Please find the details in gemma4.py.

@boyang-runllama thank you

What thinking level did you use for these results?
