# Rate limits

Celigo protects shared infrastructure with a token-bucket rate limit applied per account. The limits are generous for normal automation, but unthrottled loops can and will exhaust them.

## The limit

| Resource     | Value                                         |
| ------------ | --------------------------------------------- |
| Bucket size  | **1,000 requests**                            |
| Refill rate  | **300 requests per second**                   |
| Steady-state | ≈ **1,080,000 requests per hour per account** |

Each request consumes one token. Bursts of up to 1,000 requests are allowed, provided the bucket has had time to refill to absorb them.
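The table above maps directly onto a token-bucket meter. Here is a minimal sketch using the documented parameters — illustrative only, not Celigo's actual implementation (`TokenBucket` and its methods are made-up names):

```python
import time

class TokenBucket:
    """Sketch of the documented limit: a 1,000-token bucket refilled at 300 tokens/s."""

    def __init__(self, capacity=1000, refill_per_sec=300, now=None):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)          # bucket starts full
        self.last = time.monotonic() if now is None else now

    def try_acquire(self, now=None):
        """Spend one token if available; return False when the bucket is empty (a 429)."""
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Draining the full bucket in one burst succeeds; the 1,001st request is rejected, and one second later exactly 300 more tokens are available.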

## What you get back

When the bucket is empty, the API returns:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 2
Content-Type: application/json

{
  "errors": [
    {
      "code": "rate_limited",
      "message": "Too many requests. Retry after 2 seconds."
    }
  ]
}
```

* `Retry-After` is a whole number of seconds — wait at least that long before your next request.
* Never parse the `message` string; act on the status code and the header.

## Correct back-off pattern

{% tabs %}
{% tab title="Node.js" %}

```javascript
async function celigoFetch(path, init = {}, attempt = 0) {
  const res = await fetch(`${BASE}${path}`, {
    ...init,
    headers: { Authorization: `Bearer ${TOKEN}`, ...init.headers }
  });
  if (res.status === 429 && attempt < 5) {
    const wait = Number(res.headers.get("retry-after") ?? 1) * 1000;
    await new Promise(r => setTimeout(r, wait + Math.random() * 250));
    return celigoFetch(path, init, attempt + 1);
  }
  return res;
}
```

{% endtab %}

{% tab title="Python" %}

```python
import random, time, requests

def celigo_request(method, path, *, attempt=0, **kwargs):
    kwargs.setdefault("headers", {})["Authorization"] = f"Bearer {TOKEN}"
    res = requests.request(method, f"{BASE}{path}", timeout=30, **kwargs)
    if res.status_code == 429 and attempt < 5:
        wait = float(res.headers.get("Retry-After", 1)) + random.random() * 0.25
        time.sleep(wait)
        return celigo_request(method, path, attempt=attempt + 1, **kwargs)
    return res
```

{% endtab %}

{% tab title="cURL" %}

```bash
# curl 7.66+ honors a Retry-After response header while retrying;
# --retry-delay sets a fixed fallback delay when the header is absent.
curl --retry 5 --retry-delay 2 --retry-all-errors \
  -H "Authorization: Bearer $CELIGO_API_TOKEN" \
  "$BASE/v1/integrations"
```

{% endtab %}

{% tab title="Celigo CLI" %}

```bash
# The CLI honors Retry-After automatically with jitter, up to 5 retries.
celigo integrations list
```

{% endtab %}
{% endtabs %}

Key points:

1. **Honor `Retry-After`.** Don't guess; the server tells you what to use.
2. **Add a small jitter** (≤ 250 ms) so parallel workers don't re-collide on the second.
3. **Cap retries.** Five is a reasonable upper bound. If you're still being throttled after five retries, you're overloading the account — throttle the source, don't hammer harder.
4. **Never exponentially retry past `Retry-After`.** Exponential back-off is for 5xx responses. On 429, the server already told you exactly how long to wait.
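The four points above can be condensed into a single decision helper: honor `Retry-After` (plus jitter) on 429, back off exponentially only on 5xx, and give up otherwise. A hypothetical sketch — `backoff_seconds` is not part of any Celigo SDK:

```python
import random

def backoff_seconds(status, retry_after=None, attempt=0, jitter=None):
    """Return how long to sleep before retrying, or None for 'do not retry'.

    429 -> honor Retry-After exactly, plus small jitter (<= 250 ms).
    5xx -> exponential back-off (1 s, 2 s, 4 s, ...), capped at 30 s, plus jitter.
    """
    jitter = random.uniform(0, 0.25) if jitter is None else jitter
    if status == 429:
        return float(retry_after or 1) + jitter
    if 500 <= status < 600:
        return min(2 ** attempt, 30) + jitter
    return None
```

The caller still caps attempts (five, as above) and sleeps for the returned value; any non-429, non-5xx status falls through to normal error handling.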

## How to stay under the limit

* **Batch where the API supports it.** Creating 100 imports in a single `POST` is one request; looping 100 `POST`s is 100 requests.
* **Request less, fetch more per page.** Use `fields=` to shrink each record and larger `limit=` values to fetch more records per round-trip — fewer requests for the same data.
* **Cache reads.** Configuration (integrations, flows, connections) rarely changes minute-to-minute. Refresh on TTL, not on every use.
* **Throttle at the source, not at the edge.** If you're syncing an external system into Celigo, rate-limit the source crawl rather than letting it slam the API.
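The "cache reads, refresh on TTL" advice can be sketched with a small refresh-on-expiry cache. This is an illustration under assumed names (`TTLCache`, `get_or_fetch` are not Celigo APIs):

```python
import time

class TTLCache:
    """Cache rarely-changing config reads; refetch only after the TTL expires."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_fetch(self, key, fetch, now=None):
        now = time.monotonic() if now is None else now
        hit = self._store.get(key)
        if hit and hit[0] > now:
            return hit[1]          # still fresh: no API request spent
        value = fetch()            # expired or missing: one real request
        self._store[key] = (now + self.ttl, value)
        return value
```

With a 5-minute TTL, a worker that consults integration config on every job spends one request per 5 minutes instead of one per job.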

## Separate limits

* **Flow execution** has its own internal rate limits separate from API calls. They're described in [flow concurrency](https://docs.celigo.com/hc/en-us/articles/360043926372). Running thousands of flows concurrently doesn't count against the API bucket — it counts against flow concurrency, which is sized differently.
* **Outbound HTTP connections** (flows hitting external systems) respect rate limits set per-connection (`rateLimit` on the HTTP connection object). Those are your limits to configure.

## Monitoring

* **Audit logs** record every API write. Watch for unexpected spikes by source.
* **Account usage** is viewable in the UI under **Account → API usage**.
* If you need visibility inside a workload, log every 429 and track the latency distribution — a sudden jump in p99 often means you're headed toward being throttled.
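Inside a workload, the logging advice amounts to two counters: 429s per source and a latency distribution. A minimal sketch (`ThrottleMonitor` is a hypothetical helper, not a Celigo tool):

```python
from collections import Counter

class ThrottleMonitor:
    """Count 429s per source and track a rough p99 latency."""

    def __init__(self):
        self.throttled = Counter()
        self.latencies = []

    def record(self, source, status, latency_ms):
        self.latencies.append(latency_ms)
        if status == 429:
            self.throttled[source] += 1

    def p99(self):
        """Nearest-rank 99th-percentile latency over everything recorded so far."""
        if not self.latencies:
            return None
        ordered = sorted(self.latencies)
        return ordered[min(len(ordered) - 1, int(len(ordered) * 0.99))]
```

In production you would use a streaming histogram rather than storing every sample, but the signal is the same: a jump in `p99()` alongside rising `throttled` counts means you are being rate limited.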


