Most vendor risk programs started the same way: the risk team sends annual questionnaires, vendors return answers, the team files the answers, and audit cycles use the filed answers as evidence. The pattern is structurally consistent across most industries. It is also structurally broken.
The core problem is self-report: a questionnaire reflects what the vendor says about their security posture, not what their actual external posture demonstrates. The gap between the two is where vendor risk lives.
A few patterns we see consistently across continuous-monitoring deployments against vendor portfolios:
Vendors say they patch promptly
Continuous external scanning shows the actual patch state across externally visible assets, and the two often differ. Critical CVEs that EPSS scores as highly likely to be exploited remain unpatched on vendor infrastructure for weeks or months after the vendor's questionnaire claimed prompt patching.
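The comparison between claimed and observed patch behavior can be mechanized. A minimal sketch, assuming hypothetical finding records (the field names, EPSS threshold, and sample CVE data are all illustrative, not from any real feed):

```python
from datetime import date

# Illustrative threshold: treat EPSS scores at or above this as
# "highly likely to be exploited". Real programs tune this value.
EPSS_HIGH = 0.5

def sla_breaches(findings, claimed_sla_days, today):
    """Return (cve, age_in_days) for high-EPSS CVEs still unpatched
    past the vendor's claimed patch SLA window."""
    breaches = []
    for f in findings:
        age = (today - f["first_seen"]).days
        if f["epss"] >= EPSS_HIGH and not f["patched"] and age > claimed_sla_days:
            breaches.append((f["cve"], age))
    return breaches

# Hypothetical scan findings for one vendor's external assets.
findings = [
    {"cve": "CVE-2024-0001", "epss": 0.92, "first_seen": date(2024, 1, 5), "patched": False},
    {"cve": "CVE-2024-0002", "epss": 0.03, "first_seen": date(2024, 1, 5), "patched": False},
    {"cve": "CVE-2024-0003", "epss": 0.88, "first_seen": date(2024, 2, 20), "patched": True},
]

# Vendor claimed a 14-day patch SLA; one high-EPSS CVE is 56 days old.
print(sla_breaches(findings, claimed_sla_days=14, today=date(2024, 3, 1)))
# → [('CVE-2024-0001', 56)]
```

The low-EPSS CVE is excluded deliberately: the point is not zero unpatched findings, it is whether the vendor's stated SLA holds for the vulnerabilities that matter.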
Vendors say they have strong perimeter security
Continuous scanning surfaces exposed admin panels, default credentials still active on monitoring interfaces, expired SSL certificates, and unmaintained subdomains pointing to abandoned third-party services. The questionnaire didn't ask the right questions; the vendor didn't volunteer the gaps.
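One of these perimeter checks, certificate expiry, is simple enough to sketch. A minimal example, assuming the scanner hands us each certificate's notAfter field in the OpenSSL text format (the function name, warning window, and sample dates are illustrative):

```python
from datetime import datetime, timezone

def cert_status(not_after: str, now: datetime, warn_days: int = 30) -> str:
    """Classify an externally observed TLS certificate as expired,
    expiring soon, or ok, given its notAfter in OpenSSL text format
    (e.g. "Jun  1 12:00:00 2024 GMT")."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    remaining = (expiry - now).days
    if remaining < 0:
        return "expired"
    if remaining <= warn_days:
        return "expiring-soon"
    return "ok"

now = datetime(2024, 6, 15, tzinfo=timezone.utc)
print(cert_status("Jun  1 12:00:00 2024 GMT", now))   # → expired
print(cert_status("Jul  1 12:00:00 2024 GMT", now))   # → expiring-soon
print(cert_status("Dec 31 12:00:00 2024 GMT", now))   # → ok
```

An expired certificate on a vendor's perimeter is exactly the kind of observable fact a questionnaire never surfaces: no question asks for it, and no vendor volunteers it.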
Vendors say they monitor their attack surface
Continuous scanning surfaces assets the vendor doesn't have in their own inventory. M&A inheritance, shadow IT, marketing microsites: same patterns as customer-side asset discovery. The vendor's stated monitoring is honest about the inventory they know; the inventory they don't know is invisible to their own monitoring and to questionnaire responses.
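The inventory gap reduces to a set difference between what the vendor declares and what external scanning actually finds. A minimal sketch with hypothetical hostnames (all asset names below are invented for illustration):

```python
# What the vendor's questionnaire / inventory declares.
declared = {"www.vendor.example", "api.vendor.example", "mail.vendor.example"}

# What continuous external scanning actually discovers.
discovered = {
    "www.vendor.example",
    "api.vendor.example",
    "staging.vendor.example",       # shadow IT
    "promo2019.vendor.example",     # abandoned marketing microsite
    "legacy.acquired-co.example",   # M&A inheritance
}

# Assets the vendor's own monitoring cannot see, because the vendor
# does not know they exist.
unknown_to_vendor = sorted(discovered - declared)

# Declared assets with no external footprint: decommissioned, or
# misdeclared.
unreachable = sorted(declared - discovered)

print(unknown_to_vendor)
print(unreachable)
```

Both directions of the difference are informative: the first set is unmonitored attack surface, the second is inventory drift in the vendor's own records.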
Vendors say their incident response is mature
External scanning catches incident indicators on vendor infrastructure days or weeks before the vendor's public disclosure. Sometimes the indicators surface before the vendor internally identifies the incident. The questionnaire timeframe (annual) doesn't intersect with the incident timeframe (real-time).
What changes with continuous monitoring
The point isn't that vendors are dishonest in questionnaires. The point is that questionnaires are a self-report mechanism with a long latency cycle, and self-report mechanisms with long latency cycles produce evidence that doesn't match operational reality.
What continuous external monitoring of vendor portfolios changes:
Latency drops from annual to continuous. Risk events surface as they happen, not at the next questionnaire cycle.
Evidence is observable, not self-reported. A vendor's external posture is what it is, regardless of what the questionnaire says.
Coverage scales without proportional staff. Monitoring hundreds of vendors continuously requires roughly the same operational capacity as monitoring ten, because the platform handles the scanning. Annual questionnaires across hundreds of vendors require proportional analyst time.
Compliance evidence is exportable on demand. Audit cycles consume evidence; continuous monitoring produces evidence continuously. The evidence-collection sprint that consumes weeks per quarter becomes a runtime export.
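The "runtime export" can be as plain as serializing current findings into a timestamped document an auditor can consume. A minimal sketch, with hypothetical finding records and field names (none of this is a real export schema):

```python
import json
from datetime import datetime, timezone

def export_evidence(vendor: str, findings: list, now: datetime) -> str:
    """Serialize current monitoring findings for one vendor into a
    timestamped JSON evidence document."""
    return json.dumps({
        "vendor": vendor,
        "generated_at": now.isoformat(),
        "open_count": sum(1 for f in findings if f["status"] == "open"),
        "findings": findings,
    }, indent=2)

# Hypothetical current findings for one vendor.
findings = [
    {"id": "expired-cert", "asset": "mail.vendor.example", "status": "open"},
    {"id": "exposed-admin-panel", "asset": "ops.vendor.example", "status": "resolved"},
]

print(export_evidence("vendor-a", findings, datetime(2024, 6, 15, tzinfo=timezone.utc)))
```

Because the findings already exist continuously, the export costs one function call at audit time rather than a weeks-long collection sprint.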
What happens to the questionnaire
The questionnaire doesn't disappear in this model. It becomes a confirmatory and contractual layer rather than the primary evidence source. Specific commitments (data handling, incident notification, access controls) are still memorialized contractually. The monitoring layer underneath provides the continuous evidence that the contractual commitments are being honored.
For organizations building or rebuilding vendor risk programs, the question is no longer whether to add continuous monitoring. It's how quickly to make continuous monitoring the primary evidence source and demote the questionnaire to its actual role.