Failed to resolve model lmstudio-community/qwen3-coder-next-gguf,lmstudio-community/qwen3-coder-next-mlx-4bit,lmstudio-community/qwen3-coder-next-mlx-6bit,lmstudio-community/qwen3-coder-next-mlx-8bit: Api error with status 429. URL: https://search.lmstudio.ai/v1/hf-proxy/api/models/lmstudio-community/Qwen3-Coder-Next-GGUF/tree/main?recursive=true&expand=false
If you hit the Hugging Face servers too frequently within a short period, you are likely to run into a 429 error. To avoid this in LM Studio, it is best to specify the exact target model repository (e.g. lmstudio-community/Qwen3-Coder-Next-GGUF) rather than a vague model family name.
This is usually resolved by treating it as a download-source resolution problem, not a model-runtime problem. In your case, LM Studio is trying to resolve the qwen/qwen3-coder-next catalog entry, and that catalog entry expands into four underlying repos: one GGUF repo and three MLX repos. Hugging Face’s docs say 429 means rate limiting, and that rate limiting applies to Resolvers, which is the bucket used for file-download and related lookup traffic. (LM Studio)
What the error means
A beginner-safe translation is:
- LM Studio is not failing to run the model yet.
- LM Studio is failing earlier, while asking “what files are in this repo and what are the download options?”
- The server answering that question replied 429 Too Many Requests. Hugging Face documents 429 as the standard rate-limit response, and notes that their tooling can even read the reset time from the RateLimit header before retrying. (Hugging Face)
That is why the message names several repos at once. The qwen3-coder-next entry is a wrapper over these sources:
- lmstudio-community/Qwen3-Coder-Next-GGUF
- lmstudio-community/Qwen3-Coder-Next-MLX-4bit
- lmstudio-community/Qwen3-Coder-Next-MLX-6bit
- lmstudio-community/Qwen3-Coder-Next-MLX-8bit (LM Studio)
The most likely cause
The most likely chain is:
- You selected the bundled qwen/qwen3-coder-next entry.
- LM Studio started resolving the underlying repos.
- One of those resolver lookups got throttled.
- LM Studio aborted the whole resolution step and surfaced the bundle as failed. (LM Studio)
That interpretation also matches other LM Studio bug reports where the failing component is ArtifactResolutionProvider, and the failing endpoint is the same kind of .../tree/main?recursive=true&expand=false repo-tree lookup. (GitHub)
What to do, in order
1. Update LM Studio first
LM Studio’s current changelog shows 0.4.10 Build 1 on April 9, 2026. If you are on an older build, update first, then fully quit and reopen the app. Resolver and downloader behavior has changed a lot across releases, so troubleshooting on an old build wastes time. (LM Studio)
2. Stop retrying the bundled model for a few minutes
Do not keep pressing retry over and over. Hugging Face rate limits apply in fixed windows, and their docs explicitly describe 429 retry handling around reset timing. Repeated retries inside the same limit window often just extend the pain. (Hugging Face)
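The "wait, then retry" advice can be sketched as a small helper that backs off exponentially and, when the server supplies one, honors a Retry-After style header instead. This is a generic sketch, not LM Studio's internal retry logic; the header name and the base/cap defaults are assumptions.

```python
def retry_delay(attempt, headers, base=2.0, cap=300.0):
    """Seconds to sleep before retry number `attempt` (1-based).

    If the 429 response carried a Retry-After header (in seconds), trust
    the server; otherwise fall back to capped exponential backoff. The
    header name is illustrative -- check what the server actually sends.
    """
    retry_after = headers.get("Retry-After") or headers.get("retry-after")
    if retry_after is not None:
        try:
            return min(float(retry_after), cap)
        except ValueError:
            pass  # e.g. an HTTP-date instead of seconds; ignored in this sketch
    return min(base * (2 ** (attempt - 1)), cap)
```

For example, `retry_delay(3, {})` backs off to 8 seconds, while `retry_delay(1, {"Retry-After": "60"})` waits the full 60 seconds the server asked for. Either way, the point is the same as the prose above: spacing out retries beats hammering the same rate-limit window.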
3. Bypass the bundle and target one exact repo
This is the highest-leverage fix.
LM Studio’s official docs say you can search using an exact user/model string or even paste a full Hugging Face URL directly into the search bar. That lets you bypass the catalog wrapper and go straight to the one source you want. (LM Studio)
Use one of these exact targets:
If you are on Windows or Linux, or you just want the safest general path, use GGUF
lmstudio-community/Qwen3-Coder-Next-GGUF
If you are on an Apple Silicon Mac and specifically want MLX
lmstudio-community/Qwen3-Coder-Next-MLX-4bit
lmstudio-community/Qwen3-Coder-Next-MLX-6bit
lmstudio-community/Qwen3-Coder-Next-MLX-8bit
LM Studio supports GGUF on Mac, Windows, and Linux through llama.cpp, and supports MLX on Apple Silicon Macs. (LM Studio)
4. Start with the smallest practical variant
For the GGUF repo, the current Hugging Face page lists:
- Q4_K_M at 48.5 GB
- Q6_K at 65.5 GB
- Q8_0 at 84.8 GB (Hugging Face)
For a first successful download, start with the smallest variant you can tolerate; in practice, that usually means the 4-bit GGUF. That is advice rather than documentation, but it follows from the file sizes above and LM Studio's own beginner-facing note that 4-bit or higher is the usual starting range when your machine can handle it. (Hugging Face)
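The "smallest practical variant" rule can be sketched as a quick sanity check against the file sizes listed above. The headroom factor here is my assumption (to leave room for KV cache and the OS), not an LM Studio rule:

```python
# Sizes from the GGUF repo's file listing above, in GB.
QUANT_SIZES_GB = {"Q4_K_M": 48.5, "Q6_K": 65.5, "Q8_0": 84.8}

def pick_quant(available_ram_gb, sizes=QUANT_SIZES_GB, headroom=1.2):
    """Return the largest quant whose size * headroom fits in RAM.

    `headroom` (assumed, not official) reserves memory for the KV cache
    and the rest of the system. Returns None if nothing fits.
    """
    fitting = [(size, name) for name, size in sizes.items()
               if size * headroom <= available_ram_gb]
    return max(fitting)[1] if fitting else None
```

On a 64 GB machine this picks Q4_K_M; on a 128 GB machine, Q8_0; with 32 GB it returns None, which matches the repo's point that these files are simply very large.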
5. Toggle LM Studio’s Hugging Face proxy setting once
LM Studio added an option to use LM Studio’s Hugging Face proxy. Their 0.3.9 release notes say it exists specifically to help users who have trouble reaching Hugging Face directly. There is also a recent issue where missing Download Options were reportedly resolved by turning that proxy on. (LM Studio)
For your case, the practical rule is simple:
- if the proxy is on, try it off
- if it is off, try it on
That recommendation is an inference from the documented proxy feature plus the fact that your error path already includes LM Studio’s proxy route. The point is to test a different network path, not to assume one setting is always better. (LM Studio)
6. If you are on VPN, company Wi-Fi, campus Wi-Fi, or a managed proxy, test another network
LM Studio has an open issue showing that users behind a corporate HTTP proxy can fail to download models or even fetch updates. Shared or managed network paths can make resolver failures much more likely. (GitHub)
A very clean test is:
- turn off VPN
- or switch from work/school network to a phone hotspot or home network
- then retry the exact repo, not the bundled catalog entry (GitHub)
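To see whether the network path itself is the problem, you can query Hugging Face's public repo-tree API directly, outside LM Studio. This is a diagnostic sketch: the endpoint shape mirrors the one in your error message, but aimed at huggingface.co rather than LM Studio's proxy, so a clean result here while LM Studio fails points at the proxy or resolver path.

```python
import json
import urllib.error
import urllib.request

def tree_url(repo, revision="main"):
    """Build the repo-tree API URL (same shape as the endpoint in the error)."""
    return (f"https://huggingface.co/api/models/{repo}"
            f"/tree/{revision}?recursive=true&expand=false")

def probe(repo):
    """Fetch the repo tree and report what happened, so you can tell
    a 429 (rate limit) from a network-level block (DNS/proxy/firewall)."""
    try:
        with urllib.request.urlopen(tree_url(repo), timeout=15) as resp:
            files = json.load(resp)
            return f"OK: {len(files)} entries listed"
    except urllib.error.HTTPError as e:
        return f"HTTP {e.code}: " + ("rate limited, wait and retry"
                                     if e.code == 429 else "not a rate limit")
    except OSError as e:
        return f"network-level failure: {e}"

if __name__ == "__main__":
    print(probe("lmstudio-community/Qwen3-Coder-Next-GGUF"))
```

Run it once from the suspect network and once from a hotspot; if only one succeeds, you have your answer.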
7. Use the CLI as a fallback
LM Studio’s CLI supports lms get, and LM Studio’s docs say full Hugging Face URLs are supported. That gives you a cleaner path than clicking through the bundled Discover entry. (LM Studio)
Examples:
GGUF
lms get https://huggingface.co/lmstudio-community/Qwen3-Coder-Next-GGUF
MLX 4-bit
lms get https://huggingface.co/lmstudio-community/Qwen3-Coder-Next-MLX-4bit
Those commands are supported because LM Studio documents lms get <hugging face url> explicitly. (LM Studio)
8. If LM Studio’s downloader still fails, sideload the model and import it
LM Studio documents lms import for bringing a local model file into LM Studio’s model directory, and their import docs show the expected local model structure. (LM Studio)
Example:
lms import /path/to/model-file.gguf
This is the “I already have the file, stop using the downloader” path. It is especially useful if the issue is only in the resolver/downloader layer. (LM Studio)
Where Hugging Face authentication fits
Hugging Face documents User Access Tokens as the preferred way to authenticate applications to the Hub, and documents HF_TOKEN as the environment variable that overrides the local stored token. (Hugging Face)
That matters, but with one important caveat: LM Studio’s public app docs do not clearly document a dedicated “sign in to Hugging Face here to fix download resolution” flow for this exact app path. So I would treat token/authentication as secondary for your specific error, not the first fix. It is more useful if you are using Hugging Face tools directly or downloading outside LM Studio. (Hugging Face)
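If you do end up downloading outside LM Studio, the token is normally sent as a bearer header. A minimal sketch of how HF_TOKEN feeds into a request follows; the precedence shown (explicit argument over environment variable) mirrors how Hugging Face documents HF_TOKEN as an override, but the helper itself is hypothetical, not part of any Hugging Face library:

```python
import os

def auth_header(explicit_token=None):
    """Return an Authorization header dict, or {} for anonymous access.

    An explicitly passed token wins; otherwise fall back to the HF_TOKEN
    environment variable. Illustrative helper only.
    """
    token = explicit_token or os.environ.get("HF_TOKEN")
    return {"Authorization": f"Bearer {token}"} if token else {}
```

For public repos like the ones above, anonymous access works, which is part of why a token is secondary for this specific error.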
What I would do on a real machine
I would do exactly this:
- Update LM Studio to the newest version. (LM Studio)
- Wait a few minutes before retrying. (Hugging Face)
- In Discover, search for lmstudio-community/Qwen3-Coder-Next-GGUF directly instead of qwen/qwen3-coder-next. (LM Studio)
- Try the smallest GGUF option first. (Hugging Face)
- If that still fails, flip the Hugging Face proxy setting and retry the exact repo. (LM Studio)
- If you are on VPN or a company/school network, retry from another network. (GitHub)
- If the GUI still fails, run lms get with the full Hugging Face URL. (LM Studio)
- If that still fails, download the GGUF file another way and lms import it. (LM Studio)
How to tell which cause you have
If the exact repo works but the bundled qwen/qwen3-coder-next entry fails, the problem was probably the multi-source resolver step. That conclusion follows from the model page showing the four-source bundle and LM Studio’s support for direct repo search/URL input. (LM Studio)
If changing the proxy setting fixes it, the problem was likely the network path between LM Studio and Hugging Face. LM Studio’s proxy feature and recent proxy-related issue reports support that interpretation. (LM Studio)
If all model searches and downloads fail, not just this one, suspect broader connectivity or proxy issues first. LM Studio’s issue tracker has reports of that exact pattern. (GitHub)
If download succeeds but loading fails later, that is a different problem. LM Studio’s model page lists 42 GB minimum system memory for qwen3-coder-next, and the GGUF files themselves are very large. (LM Studio)
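The decision tree above can be condensed into a small triage helper. The symptom flags and the returned phrases are my own shorthand for the four cases described in this section, not anything LM Studio emits:

```python
def triage(exact_repo_works, proxy_toggle_helped, all_downloads_fail,
           load_fails_after_download):
    """Map the observed failure pattern to the likely cause, per the cases above."""
    if load_fails_after_download:
        return "runtime/memory problem, not a download problem"
    if all_downloads_fail:
        return "broad connectivity or proxy issue"
    if proxy_toggle_helped:
        return "network path between LM Studio and Hugging Face"
    if exact_repo_works:
        return "multi-source resolver step on the bundled entry"
    return "undetermined; wait out the rate-limit window and retry"
```

The ordering matters: a load failure after a successful download rules out the download path entirely, so it is checked first.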
The main takeaway
Your error is most likely not “Qwen3-Coder-Next is broken.” It is much more likely “LM Studio hit a rate-limited file-resolution step while expanding a multi-source model entry.” The simplest fix is to stop hitting the wrapper entry, wait for the rate limit window to clear, and then target the one concrete repo you actually want. (Hugging Face)