LanteRn: Latent Visual Structured Reasoning
Paper: arXiv 2603.25629
How to use AGViveiros/LanteRn-3B-RL with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("image-text-to-text", model="AGViveiros/LanteRn-3B-RL")
messages = [
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
{"type": "text", "text": "What animal is on the candy?"}
]
},
]
result = pipe(text=messages)
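# Inspect the reply; the indexing below assumes the chat-style output format
# of recent transformers versions (a sketch, the exact structure may vary):
print(result[0]["generated_text"][-1]["content"])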
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText
processor = AutoProcessor.from_pretrained("AGViveiros/LanteRn-3B-RL")
model = AutoModelForImageTextToText.from_pretrained("AGViveiros/LanteRn-3B-RL")
messages = [
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
{"type": "text", "text": "What animal is on the candy?"}
]
},
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
How to use AGViveiros/LanteRn-3B-RL with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "AGViveiros/LanteRn-3B-RL"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "AGViveiros/LanteRn-3B-RL",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this image in one sentence."
},
{
"type": "image_url",
"image_url": {
"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
}
}
]
}
]
}'
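Since the server exposes an OpenAI-compatible API, the official openai Python client can be used instead of curl. A minimal sketch (assumes pip install openai; any placeholder API key works unless the server was started with --api-key):
from openai import OpenAI

# Point the client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="AGViveiros/LanteRn-3B-RL",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url",
             "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
The same client also works against the SGLang server below; only base_url changes (port 30000).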
How to use AGViveiros/LanteRn-3B-RL with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "AGViveiros/LanteRn-3B-RL" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "AGViveiros/LanteRn-3B-RL",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this image in one sentence."
},
{
"type": "image_url",
"image_url": {
"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
}
}
]
}
]
}'
# Or, run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "AGViveiros/LanteRn-3B-RL" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "AGViveiros/LanteRn-3B-RL",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this image in one sentence."
},
{
"type": "image_url",
"image_url": {
"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
}
}
]
}
]
}'
How to use AGViveiros/LanteRn-3B-RL with Docker Model Runner:
docker model run hf.co/AGViveiros/LanteRn-3B-RL
This model is a GRPO (Group Relative Policy Optimization) reinforcement-learning fine-tune on top of the LanteRn SFT checkpoint (latent_size=8).
LanteRn extends Qwen2.5-VL-3B-Instruct with Latent Visual Reasoning (LVR) tokens. Instead of always verbalizing what it sees, the model can emit compressed visual embeddings (<|lvr_start|>…<|lvr_end|>) during its chain-of-thought, enabling non-verbalized visual reasoning interleaved with text.
Special tokens added:
| Token | Role |
|---|---|
| `<\|lvr_start\|>` | Begins a latent visual reasoning block |
| `<\|lvr_sep\|>` | Placeholder replaced by compressed visual embeddings (8 tokens) |
| `<\|lvr_end\|>` | Ends a latent visual reasoning block |
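A quick way to verify that these tokens are registered in the checkpoint's vocabulary (a minimal sketch; the piped spellings above are assumed):
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("AGViveiros/LanteRn-3B-RL")
for tok in ("<|lvr_start|>", "<|lvr_sep|>", "<|lvr_end|>"):
    # A valid integer id (rather than None or the unk id) confirms registration
    print(tok, processor.tokenizer.convert_tokens_to_ids(tok))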
How to use AGViveiros/LanteRn-3B-RL with latent visual reasoning:
import torch
from PIL import Image
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
# ── 1. Patch the forward to support mixed text/latent modality ───────────────
# (requires the LanteRn source repository on your PYTHONPATH)
from src.models.qwen2_5VL.forward import qwen2_5_mixed_modality_forward_lantern
import transformers
transformers.models.qwen2_5_vl.modeling_qwen2_5_vl.Qwen2_5_VLForConditionalGeneration.forward = qwen2_5_mixed_modality_forward_lantern
# ── 2. Load model + processor ────────────────────────────────────────────────
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"AGViveiros/LanteRn-3B-RL", dtype=torch.bfloat16, use_cache=True,
)
processor = AutoProcessor.from_pretrained("AGViveiros/LanteRn-3B-RL")
model.eval().cuda()
# ── 3. Build inputs ──────────────────────────────────────────────────────────
image = Image.open("path/to/image.jpg").convert("RGB")
messages = [{
"role": "user",
"content": [
{"type": "image", "image": image},
{"type": "text", "text": "Your question here"},
],
}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, _ = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, return_tensors="pt").to("cuda")
prompt_len = inputs["input_ids"].shape[1]
# ── 4. Generate with latent visual reasoning ─────────────────────────────────
from src.lantern_generate.generate import generate as lantern_generate
output = model.generate(
**inputs,
max_new_tokens=512,
do_sample=False,
custom_generate=lantern_generate,
use_cache=True,
return_dict_in_generate=True,
)
generated = output.sequences[0][prompt_len:]
print(processor.decode(generated, skip_special_tokens=False))
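Since skip_special_tokens=False keeps the LVR markers in the decoded text, the latent blocks can be stripped to recover a clean textual answer. A minimal post-processing sketch (reuses processor and generated from above; token spellings as listed in the table):
import re

raw = processor.decode(generated, skip_special_tokens=False)
# Latent blocks carry no human-readable content; drop everything between the markers
answer = re.sub(r"<\|lvr_start\|>.*?<\|lvr_end\|>", "", raw, flags=re.DOTALL).strip()
print(answer)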
Citation:
@article{Viveiros2026LanteRn,
title = {LanteRn: Latent Visual Structured Reasoning},
author = {Viveiros, Andr\'e G. and Gon\c{c}alves, Nuno and Lindemann, Matthias and Martins, Andr\'e},
journal = {arXiv preprint arXiv:2603.25629},
year = {2026},
url = {https://arxiv.org/abs/2603.25629}
}
Base model: Qwen/Qwen2.5-VL-3B-Instruct