Release the 124B parent weights... We know you have it.
#31 opened about 10 hours ago by Dureka
tfhe_ntt::prime32::Plan::try_new
#30 opened 2 days ago by milezdeep13
Add ParseBench evaluation results
#29 opened 4 days ago by boyang-runllama
Thinking Mode doesn't work properly on gemma-4-26B-A4B-it.
#27 opened 5 days ago by michaelkopf1981
Fix missing thinking channel in Gemma 4 chat template when using continue_final_message
#26 opened 6 days ago by CalinR
Your 260k dictionary is breaking Gemma 4's back.
#25 opened 7 days ago by phil111
fix: embed chat_template in tokenizer_config.json
#24 opened 8 days ago by NERDDISCO
fix: function calling formatting in chat template
#20 opened 12 days ago by RyanMullins
Vertex AI & vLLM Deployment Guide for Gemma 4 26B-A4B-it (MoE) + Known Limitations
#19 opened 12 days ago by Manzela-D
Thank you Google!
#18 opened 13 days ago by KngRnZ
[Appreciation] Incredible performance of Gemma 4-26b on consumer hardware — 90 t/s even on an older DDR3 system!
#17 opened 13 days ago by MightyLoraLord
Excellent release, Google. Gemma 4 is good.
#16 opened 14 days ago by DorkMckork1
Fantastic release!
#15 opened 15 days ago by Dampfinchen
Significant Otter! ❤
#14 opened 17 days ago by MrDevolver
THANK YOU! Google
#13 opened 17 days ago by E7Reine
Are you guys going to add other MoE stuff?
#11 opened 17 days ago by Nesy1
Verified Commit?
#9 opened 18 days ago by stephenrawls
please fp8
#8 opened 18 days ago by huang123chuan
First community NVFP4 quantization of Gemma 4 26B-A4B-it (49GB → 16.5GB)
#7 opened 18 days ago by marioiseli
Add AIME 2026 evaluation result
#5 opened 19 days ago by SaylorTwift
add eval results
#3 opened 19 days ago by merve
error when batch size >1
#1 opened about 1 month ago by loulou2