Elevated 5xx Errors & Timeouts for Instant Learning (LLM Nano)

Incident Report for Nanonets

Resolved

This incident has been resolved.
Posted Apr 28, 2026 - 06:02 UTC

Monitoring

We’ve temporarily routed LLM Nano traffic to LLM Mini, a more stable, more accurate, and higher-capacity variant, to mitigate the errors.

File processing should now be faster and more reliable while we continue working on resolving the underlying issue.
Posted Apr 28, 2026 - 05:37 UTC

Investigating

We are currently investigating an issue affecting our GPU service provider infrastructure, which is causing elevated 5xx errors and timeouts for Instant Learning models using LLM Nano.

Our team is actively working with the provider to identify and resolve the underlying network issue. We will share further updates as soon as more information is available.
Posted Apr 28, 2026 - 05:16 UTC
This incident affected: API.