Instructions to use answerdotai/ModernBERT-large with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use answerdotai/ModernBERT-large with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="answerdotai/ModernBERT-large")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-large")
model = AutoModelForMaskedLM.from_pretrained("answerdotai/ModernBERT-large")
```
- Notebooks
- Google Colab
- Kaggle
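Under the hood, a fill-mask pipeline scores every vocabulary token at the `[MASK]` position and returns the most probable candidates. A minimal sketch of that selection step, using a stable softmax over toy logits (the `top_k_fill` helper, the vocabulary, and the logit values here are made up for illustration, not part of the Transformers API):

```python
import math

def top_k_fill(logits, vocab, k=2):
    """Return the k most probable (token, probability) pairs
    for one masked position, given raw per-token logits."""
    # Numerically stable softmax: subtract the max logit first.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Rank tokens by probability, highest first.
    ranked = sorted(zip(vocab, probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Hypothetical logits for "Paris is the [MASK] of France."
vocab = ["capital", "city", "heart", "pride"]
logits = [4.0, 2.5, 1.0, 0.5]
print(top_k_fill(logits, vocab))
```

The real pipeline does the same thing over the model's full vocabulary (about 50k tokens for ModernBERT) and returns each candidate with its score and the filled-in sequence.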
Community discussions:
- #14: 'save_total_limit' not respected (opened 11 months ago by enricoburi)
- #13: update-onnx-model (opened 11 months ago by kozistr)
- #11: MTEB results? (opened over 1 year ago by antonkulaga)
- #5: Why add_prefix_space=false? (opened over 1 year ago by hankcs)
- #4: Fine-tuning ModernBERT on a Large Dataset with Masked Language Modelling (opened over 1 year ago by ssmits)
- #3: Error in subprocess: concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. (opened over 1 year ago by BwandoWando)
- #2: plan on multilingual variant? (opened over 1 year ago by ahxxm)