Instructions for using mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX with libraries, inference providers, notebooks, and local apps.
- Libraries
- MLX
How to use mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX with MLX:
```python
# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX")
config = load_config("mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Pi
How to use mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX with Pi:
Start the MLX server
```bash
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX"
```
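Before configuring Pi, it can be worth a quick sanity check that the server answers. A minimal sketch using the OpenAI Python client (assumes mlx_lm.server's default port 8080 and that the `openai` package is installed, e.g. `pip install openai`):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local mlx_lm.server endpoint.
# No real API key is needed for a local server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```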
Configure the model in Pi
```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to ~/.pi/agent/models.json:
```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX" }
      ]
    }
  }
}
```
Run Pi
```bash
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX with Hermes Agent:
Start the MLX server
```bash
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX"
```
Configure Hermes
```bash
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default mlx-community/Apriel-1.5-15b-Thinker-2bit-MLX
```
Run Hermes
```bash
hermes
```
Apriel-1.5-15B-Thinker — MLX 2-bit (Apple Silicon)
Format: MLX (Mac, Apple Silicon)
Quantization: 2-bit (ultra-compact)
Base: ServiceNow-AI/Apriel-1.5-15B-Thinker
Architecture: Pixtral-style LLaVA (vision encoder → 2-layer projector → decoder)
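For intuition about that wiring, the projector sitting between the vision encoder and the decoder is a small two-layer MLP that maps vision features into the decoder's embedding space. The sketch below is purely illustrative (class names, activation, and dimensions are assumptions, not Apriel's actual implementation):

```python
# Illustrative sketch only (not the actual Apriel code). The point is the
# Pixtral-style LLaVA wiring: image patches -> vision encoder ->
# 2-layer MLP projector -> tokens consumed by the decoder.
import mlx.core as mx
import mlx.nn as nn

class Projector(nn.Module):
    """Two-layer MLP mapping vision features into the text embedding space."""
    def __init__(self, vision_dim: int, text_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(vision_dim, text_dim)
        self.fc2 = nn.Linear(text_dim, text_dim)

    def __call__(self, x: mx.array) -> mx.array:
        return self.fc2(nn.gelu(self.fc1(x)))

# Hypothetical dimensions, for illustration only
vision_features = mx.random.normal((1, 576, 1024))  # (batch, patches, vision_dim)
proj = Projector(vision_dim=1024, text_dim=4096)
image_tokens = proj(vision_features)  # ready to interleave with text embeddings
print(image_tokens.shape)             # (1, 576, 4096)
```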
This repository provides a 2-bit MLX build of Apriel-1.5-15B-Thinker for tight-memory Apple-Silicon devices. It prioritizes small footprint and fast load over absolute accuracy. If quality is your primary concern, prefer the 6-bit MLX variant.
🔎 What is Apriel-1.5-15B-Thinker?
Apriel-1.5-15B-Thinker is an open multimodal reasoning model that scales a Pixtral-style VLM with depth upscaling, two-stage multimodal continual pretraining (CPT), and high-quality SFT with explicit reasoning traces (math, coding, science, tool-use). The training recipe focuses on mid-training (no RLHF/RM), delivering strong image-grounded reasoning at modest compute.
This card documents the 2-bit MLX conversion. Expect heavier compression and a noticeable quality drop versus FP16 or higher-bit quantized builds (e.g., the 6-bit variant), especially on fine-grained text in images, dense charts, or long-chain reasoning.
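For a rough sense of why the 2-bit build targets tight-memory machines: 15B parameters at 2 bits each is about 3.75 GB of raw weights, plus per-group quantization metadata. A back-of-the-envelope sketch (group size and per-group overhead are assumed values for illustration, not measured from this repo):

```python
# Back-of-the-envelope weight footprint for a 2-bit quantized 15B model.
# Group size and per-group overhead are assumptions, not measured values.
params = 15e9
bits_per_weight = 2
group_size = 64      # assumed quantization group size
overhead_bits = 32   # assumed scale/bias bits stored per group

weights_gb = params * bits_per_weight / 8 / 1e9
overhead_gb = (params / group_size) * overhead_bits / 8 / 1e9
print(f"weights ~{weights_gb:.2f} GB + overhead ~{overhead_gb:.2f} GB")
# -> weights ~3.75 GB + overhead ~0.94 GB (order of magnitude only)
```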
📦 What’s in this repo
- config.json (MLX config mapped for the Pixtral-style VLM)
- mlx_model*.safetensors (2-bit quantized shards)
- tokenizer.json, tokenizer_config.json
- processor_config.json / image_processor.json
- model_index.json and metadata
✅ Intended uses
- On-device image understanding where memory is constrained (light captioning, object/layout descriptions)
- Quick triage of screenshots, UI mocks, simple charts, forms with broad structure
- Educational demos of VLMs on Macs with a minimal RAM budget
⚠️ Limitations
- 2-bit is very lossy. Expect degradation on:
- OCR-heavy tasks, small fonts, dense tables
- Multi-step math/coding with visual grounding
- Long context or many images
- May hallucinate or miss small details. Human review is required for critical use.
🖥️ Apple-Silicon guidance
- Works: M1/M2 (8–16 GB) for short prompts plus a single image; recommended: M3/M4 for smoother throughput.
- Use GPU: `--device mps`
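When running close to the memory ceiling, MLX's memory counters help show whether a prompt plus image actually fits. A small sketch (these helpers live under `mx.metal` in most MLX releases; newer versions also expose top-level equivalents, so check your installed version's API):

```python
import mlx.core as mx

# Inspect MLX's Metal memory usage; handy on 8-16 GB machines to see
# how close a prompt + image pushes you to the limit.
print(f"active: {mx.metal.get_active_memory() / 1e9:.2f} GB")
print(f"peak:   {mx.metal.get_peak_memory() / 1e9:.2f} GB")

# Release cached buffers back to the OS between runs.
mx.metal.clear_cache()
```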