Depending on the model, it seems these errors can be avoided with the following methods.
John6666