Who this is for
Decision makers and engineering teams building AI-driven reporting and enrichment platforms.
This tutorial shows a production-friendly pattern for integrating IRBIS into an AI-driven reporting platform:
- Your platform creates an enrichment job
- IRBIS returns an async request id
- You poll until results are ready
- You normalize output into your internal schema
- You cache results to control cost/credits
- You store an audit trail (job id ↔ IRBIS request id)
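The steps above can be sketched as a single orchestration function. This is a minimal sketch, not IRBIS client code: `submit`, `poll`, and `normalize` are hypothetical placeholders for your own platform pieces, and the job record is shown as a plain dict.

```python
def run_enrichment_job(job, submit, poll, normalize, cache):
    # job: your enrichment_job record (a dict here); submit/poll/normalize
    # are placeholders for the pieces described in the steps above.
    cached = cache.get(job["cache_key"])
    if cached is not None:                      # cache hit: spend no credits
        return cached
    irbis_request_id = submit(job)              # IRBIS returns an async request id
    job["irbis_request_id"] = irbis_request_id  # audit trail: job id <-> IRBIS request id
    raw = poll(irbis_request_id)                # poll until results are ready
    result = normalize(raw)                     # normalize into your internal schema
    cache[job["cache_key"]] = result            # cache to control cost/credits
    return result
```

A second submission with the same cache key returns the cached result without calling IRBIS again.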
Architecture (recommended)
- User submits an input (phone/email/name)
- Platform creates an enrichment_job record and returns job_id immediately
- Worker submits the IRBIS lookup and receives id + status: progress
- Worker polls api-usage/{id} until ready
- Job is marked completed (or failed)

Data model (minimal)
Create a table/document like:
- input_type: phone | email | name
- status: queued | running | completed | failed

Get the right lookupId (cache it)
IRBIS requires a lookupId that matches what your subscription enables.
Call once per tenant (or daily) and cache it:
Store a mapping like:
- combined_phone → lookupId
- combined_email → lookupId
- combined_name → lookupId
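A minimal per-tenant cache sketch under stated assumptions: `fetch_lookup_ids` is a hypothetical wrapper around your IRBIS lookupId-list call, and the daily refresh interval follows the guidance above.

```python
import time

_LOOKUP_CACHE = {}       # tenant_id -> (expires_at, {"combined_phone": lookupId, ...})
TTL_SECONDS = 24 * 3600  # refresh daily, per the guidance above

def get_lookup_id(tenant_id, lookup_type, fetch_lookup_ids, now=time.time):
    # fetch_lookup_ids(tenant_id) is a hypothetical wrapper that returns the
    # mapping {"combined_phone": <id>, "combined_email": <id>, ...} from IRBIS.
    expires_at, mapping = _LOOKUP_CACHE.get(tenant_id, (0.0, None))
    if mapping is None or now() >= expires_at:
        mapping = fetch_lookup_ids(tenant_id)
        _LOOKUP_CACHE[tenant_id] = (now() + TTL_SECONDS, mapping)
    return mapping[lookup_type]
```

Within the TTL, repeated calls for the same tenant never hit IRBIS again.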
Submit a lookup request (creates async IRBIS request)
Request body:
- key: your API key
- value: phone number
- lookupId: cached lookupId for combined_phone
```
curl -X 'POST' \
  'https://irbis.espysys.com/api/developer/combined_phone' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "key": "<API_Key>",
    "value": "+79017007397",
    "lookupId": <LOOKUPID_VALUE>
  }'
```
You receive a numeric id and status: "progress". Save that numeric id as irbis_request_id.
- Email: POST /api/developer/combined_email
- Name: POST /api/developer/combined_name
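The submit step, sketched in Python using only the request shape shown in the curl example above (error handling and retries omitted; this is a sketch, not an official client):

```python
import json
from urllib import request as urlrequest

IRBIS_BASE = "https://irbis.espysys.com/api/developer"

def build_lookup_payload(api_key, value, lookup_id):
    # Body mirrors the curl example: key, value, lookupId.
    return {"key": api_key, "value": value, "lookupId": lookup_id}

def submit_lookup(lookup_type, api_key, value, lookup_id):
    # lookup_type: combined_phone | combined_email | combined_name
    body = json.dumps(build_lookup_payload(api_key, value, lookup_id)).encode()
    req = urlrequest.Request(
        f"{IRBIS_BASE}/{lookup_type}",
        data=body,
        headers={"Content-Type": "application/json", "accept": "application/json"},
        method="POST",
    )
    with urlrequest.urlopen(req) as resp:
        data = json.loads(resp.read())
    # Save data["id"] as irbis_request_id; status starts as "progress".
    return data["id"], data["status"]
```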
Poll results until ready (api-usage/{id})
```
curl -X 'GET' \
  'https://irbis.espysys.com/api/request-monitor/api-usage/<RESPONSE_ID>?key=<API_Key>' \
  -H 'accept: application/json'
```
Use a backoff so you don't hammer the API:
| Attempt | Wait before retry |
|---|---|
| attempt 1 | 2s |
| attempt 2 | 5s |
| attempt 3+ | 10s (cap) |
| max attempts | 12–18 (2–3 minutes total) |
Stop when:
- response indicates data is ready (your integration can treat "not progress anymore" as ready)
- or you hit max attempts → mark job failed with "timeout retrieving results"
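The schedule and stop conditions above can be sketched as a polling helper. `fetch_status` is a placeholder for your api-usage/{id} call; per the note above, any status other than "progress" is treated as ready.

```python
import time

def backoff_delay(attempt, cap=10):
    # attempt 1 -> 2s, attempt 2 -> 5s, attempt 3+ -> 10s (cap)
    return {1: 2, 2: 5}.get(attempt, cap)

def poll_until_ready(fetch_status, max_attempts=15, sleep=time.sleep):
    # fetch_status() is a placeholder returning the IRBIS status string;
    # anything other than "progress" is treated as ready.
    for attempt in range(1, max_attempts + 1):
        status = fetch_status()
        if status != "progress":
            return status
        sleep(backoff_delay(attempt))
    raise TimeoutError("timeout retrieving results")
```

The injectable `sleep` makes the backoff testable without real waiting.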
Normalize IRBIS output into your internal schema
Your platform should not depend on provider-specific JSON forever. Normalize it into a stable schema like:
```json
{
  "provider": "irbis",
  "input": { "type": "phone", "value": "+79017007397" },
  "status": "completed",
  "signals": [
    { "type": "identity", "name": "..." },
    { "type": "exposure", "label": "..." },
    { "type": "footprint", "label": "..." }
  ],
  "raw_ref": { "irbis_request_id": 1486 }
}
```
- Keep raw JSON stored (optional) for debugging/audit
- Convert provider output into your stable signals[] list
- Record provenance: provider name, request id, timestamps, lookup type
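A sketch of that mapping step. The raw field names (`name`, `exposures`) are assumptions for illustration only; real IRBIS payloads differ per lookup type and must be mapped accordingly.

```python
def normalize_irbis_result(raw, input_type, input_value, irbis_request_id):
    # Map provider-specific JSON into the stable internal schema above.
    # raw["name"] / raw["exposures"] are hypothetical field names.
    signals = []
    if raw.get("name"):
        signals.append({"type": "identity", "name": raw["name"]})
    for label in raw.get("exposures", []):
        signals.append({"type": "exposure", "label": label})
    return {
        "provider": "irbis",
        "input": {"type": input_type, "value": input_value},
        "status": "completed",
        "signals": signals,
        "raw_ref": {"irbis_request_id": irbis_request_id},
    }
```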
Cache results to control credits
Caching is how enrichment platforms win on margin.
- Cache key: tenant_id + input_type + normalized(input_value)
- Name lookups: higher ambiguity; use a shorter TTL to stay accurate
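A cache-key sketch along those lines. The normalization rules here (digit-only phones, lowercased values, truncated hash) are assumptions; pick rules that match your own input validation.

```python
import hashlib

def cache_key(tenant_id, input_type, input_value):
    # Normalize before hashing so "+7 901 700-73-97" and "+79017007397"
    # hit the same cache entry; these rules are illustrative assumptions.
    v = input_value.strip().lower()
    if input_type == "phone":
        v = "+" + "".join(ch for ch in v if ch.isdigit())
    digest = hashlib.sha256(v.encode()).hexdigest()[:16]
    return f"{tenant_id}:{input_type}:{digest}"
```

Hashing also keeps raw PII out of cache keys, which matters if keys end up in logs or metrics.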
Credits & guardrails
To show "credits remaining" in your admin UI (or to block heavy workflows), call:
Important limit: "Insufficient enrichment timeout"
IRBIS enforces a 30-second timeout between searches. If you call too fast, you may get an "Insufficient enrichment timeout" error.
In production this means:
- don't fire repeated lookups for the same tenant in tight loops
- use queueing + caching
- add a simple "cooldown" per tenant/workflow if needed
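A per-tenant cooldown sketch for the 30-second limit described above (the injectable clock is there so the logic is testable; in production you would back this with shared storage rather than process memory):

```python
import time

class Cooldown:
    # Enforce a minimum gap between IRBIS searches per tenant.
    def __init__(self, seconds=30, now=time.monotonic):
        self.seconds = seconds
        self.now = now
        self._last = {}  # tenant_id -> time of last submitted search

    def ready(self, tenant_id):
        last = self._last.get(tenant_id)
        return last is None or self.now() - last >= self.seconds

    def mark(self, tenant_id):
        self._last[tenant_id] = self.now()
```

A worker checks `ready()` before submitting and calls `mark()` after; jobs that are not ready stay queued.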
Example workflow: "Generate AI report"
Goal: user submits phone/email/name → your platform outputs a structured report.
The flow is the same as above: create a job, return job_id immediately, and assemble the report once IRBIS results arrive.

Common mistakes (and fixes)
| Mistake | Fix |
|---|---|
| Hardcoding lookupId | Call lookupid-list and cache the mapping |
| Blocking the user request while polling | Return job_id immediately; use a background worker |