LLaMAX
How to use LLaMAX/LLaMAX2-7B-XNLI with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="LLaMAX/LLaMAX2-7B-XNLI")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("LLaMAX/LLaMAX2-7B-XNLI")
model = AutoModelForCausalLM.from_pretrained("LLaMAX/LLaMAX2-7B-XNLI")

How to use LLaMAX/LLaMAX2-7B-XNLI with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "LLaMAX/LLaMAX2-7B-XNLI"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "LLaMAX/LLaMAX2-7B-XNLI",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
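
The server exposes an OpenAI-compatible API, so it can also be called from Python. Below is a minimal sketch using the openai client; the base URL, model name, and sampling parameters mirror the serve and curl commands above, and the placeholder API key is an assumption for a local server without authentication:

from openai import OpenAI

# Point the OpenAI client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="LLaMAX/LLaMAX2-7B-XNLI",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)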
How to use LLaMAX/LLaMAX2-7B-XNLI with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "LLaMAX/LLaMAX2-7B-XNLI" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "LLaMAX/LLaMAX2-7B-XNLI",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Alternatively, launch the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "LLaMAX/LLaMAX2-7B-XNLI" \
--host 0.0.0.0 \
--port 30000
# Call the server using the same curl command as above.

How to use LLaMAX/LLaMAX2-7B-XNLI with Docker Model Runner:
docker model run hf.co/LLaMAX/LLaMAX2-7B-XNLI
🔥 LLaMAX-7B-X-NLI is an NLI model with multilingual capability, fully fine-tuned from the powerful multilingual model LLaMAX-7B on the MultiNLI dataset.
🔥 Compared with fine-tuning Llama-2 in the same setting, LLaMAX-7B-X-NLI improves the average accuracy by 5.6 points on the XNLI dataset.
| XNLI | Avg. | Sw | Ur | Hi | Th | Ar | Tr | El | Vi | Zh | Ru | Bg | De | Fr | Es | En |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama2-7B-X-XNLI | 70.6 | 44.6 | 55.1 | 62.2 | 58.4 | 64.7 | 64.9 | 65.6 | 75.4 | 75.9 | 78.9 | 78.6 | 80.7 | 81.7 | 83.1 | 89.5 |
| LLaMAX-7B-X-XNLI | 76.2 | 66.7 | 65.3 | 69.1 | 66.2 | 73.6 | 71.8 | 74.3 | 77.4 | 78.3 | 80.3 | 81.6 | 82.2 | 83.0 | 84.1 | 89.7 |
Code Example:
from transformers import AutoTokenizer, LlamaForCausalLM

# Load the fine-tuned weights and tokenizer
# (replace the placeholders with local paths or the Hub id "LLaMAX/LLaMAX2-7B-XNLI").
model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)

# NLI query in the "Premise: ... Hypothesis: ... Label:" format; the model generates the label.
query = "Premise: She doesn’t really understand. Hypothesis: Actually, she doesn’t get it. Label:"
inputs = tokenizer(query, return_tensors="pt")
generate_ids = model.generate(inputs.input_ids, max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# => Entailment
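
Because this is a causal LM, the decoded string contains the prompt followed by the predicted label. The helper below is a minimal sketch, not part of the model card: the function name and the prompt-stripping step are illustrative assumptions, and it reuses the model and tokenizer loaded above.

def predict_label(premise, hypothesis, max_new_tokens=5):
    # Build the same "Premise: ... Hypothesis: ... Label:" query used above.
    query = f"Premise: {premise} Hypothesis: {hypothesis} Label:"
    inputs = tokenizer(query, return_tensors="pt").to(model.device)
    generate_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0]
    # Strip the prompt prefix; tokenization round-trips may alter spacing slightly.
    return output[len(query):].strip()

print(predict_label("She doesn’t really understand.", "Actually, she doesn’t get it."))
# expected output close to: Entailment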
If our model helps your work, please cite this paper:
@inproceedings{lu-etal-2024-llamax,
title = "{LL}a{MAX}: Scaling Linguistic Horizons of {LLM} by Enhancing Translation Capabilities Beyond 100 Languages",
author = "Lu, Yinquan and
Zhu, Wenhao and
Li, Lei and
Qiao, Yu and
Yuan, Fei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-emnlp.631",
doi = "10.18653/v1/2024.findings-emnlp.631",
pages = "10748--10772",
abstract = "Large Language Models (LLMs) demonstrate remarkable translation capabilities in high-resource language tasks, yet their performance in low-resource languages is hindered by insufficient multilingual data during pre-training. To address this, we conduct extensive multilingual continual pre-training on the LLaMA series models, enabling translation support across more than 100 languages. Through a comprehensive analysis of training strategies, such as vocabulary expansion and data augmentation, we develop LLaMAX. Remarkably, without sacrificing its generalization ability, LLaMAX achieves significantly higher translation performance compared to existing open-source LLMs (by more than 10 spBLEU points) and performs on-par with specialized translation model (M2M-100-12B) on the Flores-101 benchmark. Extensive experiments indicate that LLaMAX can serve as a robust multilingual foundation model. The code and the models are publicly available.",
}