There is a sentence that appears in nearly every annual pentest report I have read in the last five years. It sits near the front, usually in italics, and it reads something like:

“This report reflects the security posture of the in-scope systems at the time of testing. Subsequent changes may alter the findings.”

That sentence is the entire problem. It is also why the regulator and the board keep treating the report as a statement of assurance, and why the engineering team knows it is not one.

Here are the numbers. Horizon3 found that over 40% of CISOs say their pentest results are invalid by the time the report is delivered, because the environment has already moved on. Pentera’s 2025 State of Pentesting reports that 50% of CISOs now cite software-based pentesting as their primary method for uncovering exploitable gaps. The penetration testing market is growing at roughly 12-16% CAGR, and its fastest-growing segment is continuous testing, not annual engagements.

The industry has already moved. The question is whether your assurance process has.

Why annual pentests made sense (and why they stopped)

Annual testing is a control designed for a release cadence that no longer exists in most of the businesses I work with. PCI DSS and similar frameworks encoded annual testing in an era when software was released quarterly, infrastructure changed by ticket, and new dependencies required a change advisory board meeting.

In that world, a report dated 15 March reasonably described the state of the system on 15 September. Things had changed, but not by much, and the change had been tightly controlled.

That world is gone for most of the fintech, SaaS, and cloud-native businesses I advise. The typical client now deploys code weekly, some daily. A single developer with production access can introduce a new S3 bucket, a new IAM role, a new third-party dependency, or a new public endpoint between coffee and lunch. None of these changes will show up in a pentest report written in March.

The shelf life of a pentest in this environment is not twelve months. It is measured in deployments.

What changes between pentests that pentests do not see

To make this concrete, consider what happens in a reasonably active engineering team in the six months after a pentest completes:

  • Fifty to two hundred deployments to production, each one a potential new attack surface.
  • Ten to thirty new dependencies pulled into the build: libraries, SDKs, MCP servers, AI agent frameworks, each with its own dependency tree.
  • A handful of new endpoints, some intended to be public, some not.
  • IAM policy drift. The policies that were clean on test day have accumulated permissions, usually for reasons that made sense at the time.
  • New service accounts created for integrations, and old service accounts that were supposed to be decommissioned but were not.
  • A cloud misconfiguration or two. S3 buckets made public for an analytics integration. Security groups opened for a one-off debug session and never closed.
  • At least one employee with significant production access who has left the company.
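Take the IAM drift item as an example. The mechanics of catching it are not exotic: snapshot the policies on test day, snapshot them again later, and diff. A minimal sketch, with illustrative role and permission names that are not drawn from any real account:

```python
# Hypothetical sketch: detect IAM drift between two policy snapshots,
# e.g. the roles captured on pentest day versus the roles today.
# Role names and permission strings below are illustrative only.

def diff_policies(baseline: dict[str, set[str]],
                  current: dict[str, set[str]]) -> dict:
    """Return roles added and permissions gained since the baseline."""
    drift = {"new_roles": [], "gained": {}}
    for role, perms in current.items():
        if role not in baseline:
            drift["new_roles"].append(role)      # role created after test day
        else:
            gained = perms - baseline[role]      # permissions accreted since
            if gained:
                drift["gained"][role] = sorted(gained)
    return drift

baseline = {"ci-deployer": {"s3:PutObject"}}
current = {"ci-deployer": {"s3:PutObject", "iam:PassRole"},
           "analytics-export": {"s3:GetObject"}}
print(diff_policies(baseline, current))
# {'new_roles': ['analytics-export'], 'gained': {'ci-deployer': ['iam:PassRole']}}
```

The point is not the ten lines of code. It is that every item on the list above is diffable in principle, and the annual report diffs none of them.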

Each item in isolation is small. The composite is an attack surface that the March report does not describe and cannot describe. The report is not wrong. It is historical.

The frame shift: snapshot vs continuous

The industry is moving from an assurance model based on snapshots to one based on continuous evaluation. This is not a marketing distinction. It is a different mental model for what “tested” means.

Snapshot assurance says: at a fixed point in time, an expert team attempted to exploit our systems and documented what they found. We fixed most of the findings. This is a snapshot of a moment that has already passed.

Continuous assurance says: we have automated adversary simulation running against our production-equivalent environment on every meaningful change, we have an attack surface management capability monitoring what we expose externally, and we have periodic deep-dive human testing for the things automation cannot find. The state of the system is being continuously tested, not tested once a year.

The second model is not more expensive than the first in aggregate. It is differently expensive: less money on the annual report, more money on tooling and the operational discipline to remediate on a rolling basis. In regulated environments, it is also closer to what the regulators actually want. Joint Standard 2 in South Africa, DORA in the EU, and the PCI DSS 4.0 evolution of requirements all point in the same direction: continuous, evidence-based assurance rather than point-in-time certification.

What I would change if I were you

First, add attack surface management. ASM is not pentesting. It is the inventory that pentesting depends on. Most of the high-severity findings I report in pentests are on assets the client did not know they had exposed. An ASM capability, whether commercial or built in-house from Shodan and Censys data, closes that gap for a fraction of the cost of another pentest.
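At its core, a homegrown ASM loop is a set difference: what you believe you expose versus what internet-wide scan data (exported from Shodan, Censys, or similar) says you expose. A minimal sketch, with placeholder hostnames:

```python
# Hypothetical sketch of the ASM core loop: compare the asset register
# against externally observed exposure. The hostnames are illustrative;
# in practice "observed" would come from Shodan/Censys exports or your
# own scanning.

def unknown_exposure(inventory: set[str], observed: set[str]) -> set[str]:
    """Assets visible from the internet but absent from the register."""
    return observed - inventory

inventory = {"www.example.com", "api.example.com"}
observed = {"www.example.com", "api.example.com", "staging-db.example.com"}

print(sorted(unknown_exposure(inventory, observed)))
# ['staging-db.example.com']
```

Everything that set difference returns is, by definition, an asset nobody scoped into the last pentest.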

Second, move your application testing into the CI/CD pipeline. This is where automated pentesting and DAST tools earn their keep. They do not replace a skilled human tester, but they catch the bread-and-butter findings (SQL injection, broken access control, outdated dependencies, missing security headers) before the code reaches production, not six months after.

Third, keep the annual deep-dive human engagement. Automation does not find complex authorisation logic flaws, business-logic abuse, or the kind of chained exploits that require a skilled tester thinking like an attacker. The annual engagement becomes narrower in scope but deeper in technique. You pay for what automation cannot do, not what it can.

Fourth, instrument your remediation. Every finding from every source (ASM, automated scan, human pentest, red team) should land in the same ticketing system, with the same triage process, the same SLA, and the same audit trail. The value of testing is in what gets fixed, not what gets found.
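Concretely, that means normalising every source into one ticket shape with a severity-driven SLA clock. A sketch with assumed field names and illustrative SLA windows, not a standard:

```python
# Hypothetical sketch: one ticket schema for every finding source,
# with the remediation deadline derived from severity. The SLA
# windows below are illustrative, not prescriptive.
from datetime import date, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def to_ticket(source: str, finding_id: str,
              severity: str, opened: date) -> dict:
    """Same fields and same SLA clock regardless of where it came from."""
    return {
        "source": source,          # asm | scan | pentest | red-team
        "id": finding_id,
        "severity": severity,
        "due": opened + timedelta(days=SLA_DAYS[severity]),
    }

t = to_ticket("pentest", "PT-2025-014", "high", date(2025, 3, 15))
print(t["due"])  # 2025-04-14
```

Once everything lands in one shape, the overdue-findings report becomes a single query, which is exactly the evidence a regulator asks for.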

What this does not solve

This is not a pitch for replacing annual pentests with a platform. The pitch is that the annual pentest, as the primary assurance artefact, has stopped being defensible in any environment that ships software more than a few times a year.

The alternative is not “more pentests.” It is a different model for what assurance means, supported by different tooling, with different operational discipline. That shift is underway across the industry. The question for any CISO or technology leader reading this is whether their own institution is making the shift deliberately, or whether it will be made for them by a regulator, a breach, or a board member who read a Gartner report.

The report on my desk is dated. By the time you read this, yours is too.
