No, actually it seems the HF router does not find valid inference providers where it should be routing to featherless-ai; HF was having issues with nscale as an inference provider.
Hi, it seems nscale is failing for Qwen/Qwen3-14B, and if I disable nscale so it routes to featherless-ai instead, it returns a 400 saying the provider is not ready to support API calls.
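For context, roughly how I'm hitting the providers: a minimal sketch using huggingface_hub's InferenceClient with the provider pinned explicitly instead of letting the router choose (the HF_TOKEN env var and the short prompt are just for illustration; provider ids assumed to match the HF docs):

```python
import os
from huggingface_hub import InferenceClient

# Pin the call to a specific provider rather than relying on router selection.
client = InferenceClient(
    provider="featherless-ai",  # swap to "nscale" to reproduce the other failure
    token=os.environ["HF_TOKEN"],
)

# Same chat-completion call that the router would otherwise dispatch.
response = client.chat_completion(
    messages=[{"role": "user", "content": "Hello"}],
    model="Qwen/Qwen3-14B",
    max_tokens=32,
)
print(response.choices[0].message.content)
```

With nscale pinned the request fails outright, and with featherless-ai pinned it comes back with the 400 described above.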