Every registered domain name, in one snapshot.
The complete list of all 400M+ registered domain names across every active TLD. The same dataset that drives Deepinfo's own discovery, exposed as a daily bulk feed. Use it to seed brand-protection workflows, build internet-wide research tooling, or run cross-checks against your asset inventory.
One row per registered domain, across every active TLD.
The All Domain Names Feed is the foundation dataset for anyone working at internet scale: every domain name currently in registered status, delivered as a single bulk file rebuilt every 24 hours.
Each record is the registered domain itself. The feed is sourced from registry zone files where available, ICANN CZDS for gTLDs, ccTLD operators where partnerships exist, and Deepinfo's own continuous discovery pipeline, which fills the gaps. Coverage spans more than 1,500 TLDs, including legacy gTLDs, new gTLDs, and the major ccTLDs.
The file is a flat list. For richer per-domain context like Whois, DNS, SSL, or ownership history, use it as the seed for follow-up calls into the lookup and history APIs. The current snapshot reports 400M+ rows; the exact number is in the line_count field of the API response.
Bulk download, refreshed daily.
A single API call returns the metadata for the latest snapshot, including a signed download URL. Authenticate with an API token scoped to the feed.
Delivery
Bulk download. The API call returns metadata plus a signed download_url pointing to the latest snapshot file. Download via HTTPS; existing customers can switch to S3 or SFTP delivery on request.
Format
JSON or CSV. Pass file_format=json or file_format=csv on the request. The response payload reports the format, file size in bytes, line count, and last update timestamp.
Refresh cadence
Daily snapshot. The bulk file is rebuilt every 24 hours, incorporating new registrations, ownership changes, and TLD expansions. Each snapshot carries its build time in the file_update_time field.
Authentication
API token in the request header. Tokens are scoped per feed; rotate from the dashboard. Full schema and integration examples at docs.deepinfo.com.
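As a rough illustration, the metadata call can be sketched in a few lines of Python. The endpoint path and the header name used here are assumptions for illustration only; the authoritative schema and header spec live at docs.deepinfo.com.

```python
# Minimal sketch of the metadata call. The endpoint path and the
# "apikey" header name are assumptions; consult docs.deepinfo.com
# for the real schema.
import json
import urllib.request

FEED_URL = "https://api.deepinfo.com/v1/feeds/all-domains?file_format=json"  # assumed path

def build_request(url: str, token: str) -> urllib.request.Request:
    # The scoped API token travels in a request header.
    return urllib.request.Request(url, headers={"apikey": token})

def fetch_snapshot_metadata(url: str, token: str) -> dict:
    # Returns the metadata payload: download_url, file_format,
    # file_size, file_update_time, line_count.
    with urllib.request.urlopen(build_request(url, token)) as resp:
        return json.load(resp)
```

Separating request construction from execution keeps the token-handling logic testable without touching the network.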
What you actually get.
The API response with the metadata for the current snapshot:
{
"download_url": "https://feeds.deepinfo.com/all-domains/2026-05-02/all-domains.json.gz?...",
"file_format": "json",
"file_size": 8421337842,
"file_update_time": "2026-05-02T03:14:27Z",
"line_count": 412847391
}
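Assuming download_url is a plain signed HTTPS link as shown, pulling the snapshot to disk is a matter of streaming the response body. The function name below is illustrative:

```python
# Stream the snapshot to disk without buffering ~8 GB in memory.
# download_snapshot and its arguments are illustrative names.
import shutil
import urllib.request

def download_snapshot(download_url: str, dest_path: str) -> None:
    with urllib.request.urlopen(download_url) as resp, open(dest_path, "wb") as out:
        shutil.copyfileobj(resp, out)  # copies in chunks, constant memory
```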
A few representative lines from the JSON-formatted file at download_url:
{"domain":"example.com"}
{"domain":"example.net"}
{"domain":"deepinfo.com"}
{"domain":"əəə.tr"}
{"domain":"a-b-c.io"}
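A file this size should be processed as a stream rather than loaded whole. A minimal sketch, assuming the snapshot is gzipped JSON Lines exactly as shown above:

```python
import gzip
import json

def iter_domains(path: str):
    """Yield one domain string per line from a gzipped JSONL snapshot."""
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:  # tolerate trailing blank lines
                yield json.loads(line)["domain"]
```

Iterating lazily keeps memory flat regardless of the 400M+ row count; for the CSV variant, swap json.loads for the csv module.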
Workflows this feed powers directly.
Attack Surface Management
Cross-reference your domains against the global registered set to find brand variants and TLD permutations that resolve.
Domain Intelligence and Research
Pivot from any seed across the full domain corpus by registrar, name server, MX record, or registrant signal.
Threat Hunting
Run hypotheses against the same indexed dataset that drives Deepinfo's own surface scanning.
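To make the attack-surface cross-check concrete, here is a deliberately naive sketch: generate single-character-omission variants of a brand label across a few TLDs and intersect them with the registered set. The typo model and the tiny in-memory set are illustrative stand-ins; in practice the right-hand side of the intersection is the full feed.

```python
def brand_variants(label: str, tlds: list[str]) -> set[str]:
    """Naive typo model: the label itself plus single-character omissions."""
    variants = {label}
    variants |= {label[:i] + label[i + 1:] for i in range(len(label))}
    return {f"{v}.{tld}" for v in variants for tld in tlds if v}

# Tiny stand-in for the feed; in practice this is the full 400M+ set.
registered = {"example.com", "exmple.net", "deepinfo.com"}
hits = brand_variants("example", ["com", "net", "io"]) & registered
```

Real brand-protection pipelines use richer variant generators (homoglyphs, insertions, bitsquats), but the shape is the same: candidate set intersected with the registered corpus.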
“We pipe the all-domains corpus into our internal threat-intelligence stack daily. Having one canonical bulk file beats stitching together CZDS feeds and partner sources, and the 400M+ row count actually matches what we observe on the wire.”
Pull the dataset, or have us walk you through it.
Most teams start with a sample slice to validate schema and fit. We'll set up token access and walk through integration patterns for your stack.