Rate Limiting

Thallus enforces rate limits at multiple levels to protect the platform and ensure fair usage. When you exceed a limit, the API returns a 429 Too Many Requests response with a Retry-After header indicating how long to wait.


HTTP rate limits

API requests are rate-limited by IP address. Authentication endpoints have stricter limits than general API endpoints. Internal health check and monitoring paths are exempt from rate limiting.

Request → Rate Check → Under Limit? → Process

If the rate check fails, the request is immediately rejected with a 429 response instead of being queued.

429 response format

429 RESPONSE
Retry-After: 60
{"detail": "Too many requests. Please try again later."}

The Retry-After header is in seconds. For general rate limits, this is always 60. For login lockouts, it reflects the actual remaining lockout time.
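A minimal helper for reading the header could look like the sketch below. The header name comes from the response above; falling back to 60 seconds when the header is missing or unparseable is an assumption based on the general-limit default.

```python
def retry_after_seconds(headers, default=60):
    """Return the Retry-After value from a 429 response, in seconds.

    `headers` is any dict-like mapping of response headers. Falls back
    to `default` (the documented general-limit value) when the header
    is missing or not an integer.
    """
    try:
        return int(headers.get("Retry-After", default))
    except (TypeError, ValueError):
        return default
```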


Login lockout

Failed login attempts trigger progressive account lockout to protect against brute force attacks. The lockout counter resets after a successful login. The Retry-After header in the 429 response reflects the actual seconds remaining until the lockout expires, so your client can display an accurate countdown.
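Since Retry-After carries the actual remaining lockout time, a client can turn it directly into a countdown display. A trivial formatting sketch (the M:SS format is illustrative, not required by the API):

```python
def lockout_countdown(retry_after):
    """Format remaining lockout seconds (from Retry-After) as M:SS for display."""
    minutes, seconds = divmod(int(retry_after), 60)
    return f"{minutes}:{seconds:02d}"
```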

Admins can unlock accounts manually via User Management.


Data query limits

Database queries have per-user, per-organization, and per-connection rate limits to prevent abuse of external database resources. These limits apply to queries run by data agents as well as direct query execution via the API. The concurrency limit ensures no single connection monopolizes database resources.


Workflow execution limits

Workflow execution rates are tied to your billing plan. Each organization has a maximum number of workflow executions per billing period.

Check your current usage via the API:

GET /api/v1/workflows/usage

This returns your current execution count, limit, and reset date. When the limit is reached, new workflow executions are rejected until the next billing period. See Billing & Plans for plan-specific limits.
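A client-side sketch for fetching and interpreting the usage payload. The field names (`execution_count`, `limit`) and the Bearer auth scheme are assumptions — check the actual JSON returned by the endpoint against your API credentials.

```python
import json
from urllib.request import Request, urlopen


def fetch_usage(base_url, token):
    """GET /api/v1/workflows/usage. Bearer auth is an assumption here."""
    req = Request(
        f"{base_url}/api/v1/workflows/usage",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urlopen(req) as resp:
        return json.load(resp)


def executions_remaining(usage):
    """Remaining workflow executions this billing period.

    Field names `execution_count` and `limit` are illustrative; verify
    them against the real response schema.
    """
    return max(usage["limit"] - usage["execution_count"], 0)
```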


SSE connection limits

Server-Sent Events (SSE) connections for progress streaming are limited to 5 concurrent connections per user. If you open a 6th connection, it is rejected with a 429 response.

This limit prevents resource exhaustion from abandoned browser tabs. Close unused connections or use the Streaming & SSE recovery endpoints to resume from a single connection.
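One way to avoid hitting the server-side cap is to guard connections client-side. This is a hypothetical helper, not part of any Thallus SDK — it simply refuses to open a connection once the documented limit of 5 is reached locally:

```python
import threading


class SSEConnectionPool:
    """Client-side guard capping concurrent SSE connections.

    The server allows 5 per user; tracking them locally lets you fail
    fast instead of receiving a 429 on the sixth connection.
    """

    def __init__(self, max_connections=5):
        self._sem = threading.BoundedSemaphore(max_connections)

    def acquire(self):
        """Return True if a connection slot is available, False otherwise."""
        return self._sem.acquire(blocking=False)

    def release(self):
        """Return a slot when the SSE connection is closed."""
        self._sem.release()
```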


Client guidance

Respect the Retry-After header

Always read the Retry-After header from 429 responses and wait the specified number of seconds before retrying. Do not retry immediately — rapid retries extend the rate limit window.
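A retry loop that honors the header might look like this sketch. `do_request` is any zero-argument callable returning an object with `.status_code` and `.headers` (such as a `requests` call wrapped in a lambda); the attempt cap and 60-second fallback are assumptions.

```python
import time


def request_with_retry(do_request, max_attempts=5, default_wait=60):
    """Call `do_request()`, waiting Retry-After seconds after each 429.

    Returns the first non-429 response, or the last 429 response if
    all attempts are exhausted. Illustrative sketch, not an official client.
    """
    resp = None
    for _ in range(max_attempts):
        resp = do_request()
        if resp.status_code != 429:
            return resp
        try:
            wait = int(resp.headers.get("Retry-After", default_wait))
        except (TypeError, ValueError):
            wait = default_wait
        time.sleep(wait)
    return resp
```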

Use exponential backoff

For transient errors (429 and 5xx responses), use exponential backoff with jitter:

delay = min(base_delay * 2^attempt, max_delay) + random_jitter

A reasonable starting point is a 1-second base delay with a 30-second maximum.
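The formula above can be written directly in code. Uniform jitter in `[0, base_delay)` is one common choice; the exact jitter distribution is an assumption, not mandated by the API.

```python
import random


def backoff_delay(attempt, base_delay=1.0, max_delay=30.0):
    """delay = min(base_delay * 2^attempt, max_delay) + random_jitter.

    `attempt` is zero-based. Jitter is drawn uniformly from
    [0, base_delay) -- an assumption; any small random spread works.
    """
    return min(base_delay * 2 ** attempt, max_delay) + random.uniform(0, base_delay)
```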

Avoid polling loops

Instead of polling for results, use SSE streaming to receive progress updates in real time. See Streaming & SSE for details. This reduces your request count significantly.

Batch operations

When managing multiple resources, prefer bulk endpoints where available rather than issuing many individual requests.