Instructions for using RWKV/RWKV7-Goose-World3-1.5B-HF with libraries and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use RWKV/RWKV7-Goose-World3-1.5B-HF with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="RWKV/RWKV7-Goose-World3-1.5B-HF", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("RWKV/RWKV7-Goose-World3-1.5B-HF", trust_remote_code=True, dtype="auto")
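Generation parameters can be passed directly through the pipeline call. A minimal sketch, reusing the pipe object created above (the sampling values are illustrative, not tuned recommendations):

pipe(messages, max_new_tokens=256, do_sample=True, top_p=0.3)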
- Local Apps
- vLLM
How to use RWKV/RWKV7-Goose-World3-1.5B-HF with vLLM:
Install from pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "RWKV/RWKV7-Goose-World3-1.5B-HF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RWKV/RWKV7-Goose-World3-1.5B-HF",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
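Since the server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch using the official openai client (pip install openai; the api_key value is a placeholder, as vLLM does not require one by default):

from openai import OpenAI

# Point the client at the local vLLM server started above
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="RWKV/RWKV7-Goose-World3-1.5B-HF",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)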
- SGLang
How to use RWKV/RWKV7-Goose-World3-1.5B-HF with SGLang:
Install from pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "RWKV/RWKV7-Goose-World3-1.5B-HF" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RWKV/RWKV7-Goose-World3-1.5B-HF",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
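The same endpoint can be called from Python without any extra client library. A minimal sketch using requests (pip install requests; assumes the server started above is listening on port 30000):

import requests

# POST a chat completion request to the OpenAI-compatible SGLang endpoint
resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "RWKV/RWKV7-Goose-World3-1.5B-HF",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])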
Use Docker images

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "RWKV/RWKV7-Goose-World3-1.5B-HF" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RWKV/RWKV7-Goose-World3-1.5B-HF",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
- Docker Model Runner
How to use RWKV/RWKV7-Goose-World3-1.5B-HF with Docker Model Runner:
docker model run hf.co/RWKV/RWKV7-Goose-World3-1.5B-HF
rwkv7-1.5B-world
This is an RWKV-7 model in the flash-linear-attention format.
Model Details
Model Description
- Developed by: Bo Peng, Yu Zhang, Songlin Yang, Ruichong Zhang
- Funded by: RWKV Project (under the LF AI & Data Foundation)
- Model type: RWKV7
- Language(s) (NLP): English, Chinese, Japanese, Korean, French, Arabic, Spanish, Portuguese
- License: Apache-2.0
- Parameter count: 1.52B
- Tokenizer: RWKV World tokenizer
- Vocabulary size: 65,536
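To sanity-check the tokenizer details listed above, you can load the tokenizer and inspect its vocabulary size. A minimal sketch, assuming transformers is already installed:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("RWKV/RWKV7-Goose-World3-1.5B-HF", trust_remote_code=True)
print(len(tok))  # expected: 65536 (RWKV World tokenizer vocabulary)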
Model Sources
- Repository: https://github.com/fla-org/flash-linear-attention ; https://github.com/BlinkDL/RWKV-LM
- Paper: https://huggingface.co/papers/2503.14456
Uses
Install flash-linear-attention and the latest version of transformers before using this model:
pip install flash-linear-attention==0.3.0
pip install 'transformers>=4.48.0'
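To confirm that the installed versions satisfy these requirements, a quick check using only the Python standard library:

python -c "from importlib.metadata import version; print(version('flash-linear-attention'), version('transformers'))"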
Direct Use
You can use this model just like any other Hugging Face model:
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer (trust_remote_code is required for the custom RWKV-7 code)
model = AutoModelForCausalLM.from_pretrained('fla-hub/rwkv7-1.5B-world', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('fla-hub/rwkv7-1.5B-world', trust_remote_code=True)
model = model.cuda()  # move to GPU; NVIDIA/AMD/Intel are supported, e.g. model.xpu() for Intel

# Build a chat-formatted prompt
prompt = "What is a large language model?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096,
    do_sample=True,
    temperature=1.0,
    top_p=0.3,
    repetition_penalty=1.2
)
# Strip the prompt tokens so only the newly generated continuation remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)[0]
print(response)
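For interactive use you may prefer to stream tokens as they are generated rather than wait for the full completion. A minimal sketch with transformers' built-in TextStreamer, reusing model, tokenizer, and model_inputs from the example above (the sampling values are illustrative):

from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)  # echo new tokens to stdout, hide the prompt
_ = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,
    top_p=0.3,
    repetition_penalty=1.2,
    streamer=streamer,
)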
Training Details
Training Data
This model was trained on the World v3 dataset, for a total of 3.119 trillion tokens.
Training Hyperparameters
- Training regime: bfloat16; learning rate decayed from 4e-4 to 1e-5 on a "delayed" cosine schedule (see the sketch below); weight decay 0.1; batch size increased during the middle of training
- Final Loss: 1.9965
- Token Count: 3.119 trillion
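The card does not spell out what "delayed" means here; one plausible reading is that the peak learning rate is held for an initial portion of training before the cosine decay begins. A minimal sketch under that assumption (hold_frac and the exact shape are hypothetical, not the published configuration):

import math

def delayed_cosine_lr(step, total_steps, lr_max=4e-4, lr_min=1e-5, hold_frac=0.1):
    """Hold lr_max for the first hold_frac of training, then cosine-decay to lr_min."""
    hold_steps = int(total_steps * hold_frac)
    if step < hold_steps:
        return lr_max
    progress = (step - hold_steps) / max(1, total_steps - hold_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * progress))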
Evaluation
Metrics
lambada_openai:
- before conversion: ppl 4.13, acc 69.4%
- after conversion: ppl 4.26, acc 68.8% (without applying the chat template)
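The card does not state which evaluation harness produced these numbers; a plausible way to re-measure the post-conversion result with EleutherAI's lm-evaluation-harness (pip install lm-eval) is sketched below (the task name is standard; the batch size is an arbitrary choice):

lm_eval --model hf \
  --model_args pretrained=fla-hub/rwkv7-1.5B-world,trust_remote_code=True \
  --tasks lambada_openai \
  --batch_size 8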
FAQ
Q: The safetensors metadata is None.
A: Upgrade transformers to >=4.48.0: pip install 'transformers>=4.48.0'