The hardest question in a SaaS breach is rarely the first headline.
It's not the first public statement. It's not the first affected customer notice. It's not even the first claimed record count.
The hard question is what the attacker could reach.
That question is uncomfortable because it usually has to be answered while the facts are still moving. The incident is active or recently contained. Customers want answers. Regulators may be watching. The company is still validating logs, tenant boundaries, affected products, support tooling, integrations, messages, files, and internal access paths.
The Canvas incident is a useful example of this problem.
Instructure has publicly acknowledged a Canvas security incident involving certain user data, including names, email addresses, user identifiers, and messages. Schools and universities have published their own notices. Reporting has also described service disruption, login-page defacement, extortion pressure, and an agreement with the unauthorised actor.
Some details remain unsettled. That's normal in a live breach story.
For security teams, the important lesson is not the final record count. It's the blast-radius problem every SaaS provider faces when attacker access touches a shared platform.
In a single-tenant environment, breach scope is hard enough.
In SaaS, scope has more dimensions.
A provider has to understand not only which customers were affected but also how the platform's boundaries behaved during the incident. It has to know whether the attacker reached one product surface or several. It has to know whether messages, support data, integration metadata, API paths, logs, files, and account attributes were in scope.
The question is not only:
How many records were exposed?
The earlier question is:
Which paths were reachable from the access the attacker had?
That's a different kind of investigation.
It asks what the attacker could do before the company can prove what they did.
SaaS platforms ask customers to trust shared control planes, shared support workflows, shared identity systems, and shared product infrastructure.
That trust is normal. It's why SaaS works.
During an incident, it also means the provider has to answer questions that customers cannot answer themselves.
Which tenants were reachable?
Which user objects were accessible?
Which messages or files were exposed?
Which admin or support paths were involved?
Which integrations could have been queried?
Which customer environments need different answers?
These are not public-relations questions. They are architecture and detection questions.
If a provider cannot quickly reason about blast radius, every customer has to assume more risk while waiting for the final investigation.
Public breach reporting often jumps between two poles.
At the start, there is uncertainty.
At the end, there is a count.
That misses the operational work in the middle.
When a SaaS provider is breached, defenders need to reconstruct how access progressed. They need to know which identity, session, account, support tool, API, workflow, or system first gave the attacker useful reach.
Then they need to model what that access made possible.
Could the attacker enumerate tenants?
Could they read message content?
Could they reach attachments or files?
Could they pivot into admin workflows?
Could they use product features to affect availability?
Could they reach integrations or downstream systems?
Some of those questions may eventually be ruled out. That's the point.
A good response depends on being able to rule things out with evidence, not hope.
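One way to make "ruling things out" concrete is to model the platform as a directed access graph and compute what was actually reachable from the attacker's foothold. The sketch below is illustrative only; every node and edge name is hypothetical, and a real platform graph would be far larger and derived from live configuration.

```python
from collections import deque

# Hypothetical access graph: an edge means "having X lets you reach Y".
ACCESS_GRAPH = {
    "support-session": ["support-tool", "user-profile"],
    "support-tool": ["tenant-lookup"],
    "tenant-lookup": ["tenant-metadata"],
    "service-token": ["messages-api", "files-api"],
    "messages-api": ["message-content"],
    "files-api": ["attachments"],
}

def reachable(graph, start):
    """Breadth-first search: everything reachable from a foothold."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# If the attacker only held a support session, message content is not
# reachable in this model -- evidence for ruling it out, not hope.
scope = reachable(ACCESS_GRAPH, "support-session")
print("message-content" in scope)  # False in this sketch
```

The value is not the traversal itself but the discipline: if an asset is absent from the reachable set under every foothold the attacker held, it can be ruled out with a defensible argument.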
The difficult part is that SaaS platforms are full of legitimate access paths.
Support staff need to help customers.
Services need to read and write tenant data.
Integrations need API access.
Administrators need management functions.
Background jobs need broad operational reach.
Those paths are not automatically dangerous. They are how the product operates.
They become dangerous when an attacker starts using them to move closer to sensitive data or high-impact product functions.
That's why event-by-event detection is weak on its own.
A login may be valid.
An API call may be allowed.
A message lookup may use an expected service path.
A support workflow may exist for a real operational reason.
The signal is whether the sequence increases reach.
Did the attacker move from one identity to a broader role? Did a support path expose tenant data? Did an integration token make customer content reachable? Did activity converge on data that would create customer, regulatory, or business impact?
That's the progression defenders need to see.
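That progression can be approximated by scoring a session's cumulative reach rather than each event on its own. A minimal sketch, assuming a hypothetical sensitivity tiering of event types (the event names and tiers are invented for illustration):

```python
# Hypothetical event stream: each event is individually allowed.
# The signal is cumulative: does the session's reach keep climbing
# toward more sensitive data tiers?
SENSITIVITY = {
    "login": 0,
    "list-tenants": 1,
    "read-user-profile": 1,
    "assume-support-role": 2,
    "read-messages": 3,
}

def reach_progression(events, alert_tier=3):
    """Return the events at which a session first reached a new,
    alert-worthy sensitivity tier."""
    max_tier = -1
    escalations = []
    for event in events:
        tier = SENSITIVITY.get(event, 0)
        if tier > max_tier:
            max_tier = tier
            if tier >= alert_tier:
                escalations.append(event)
    return escalations

session = ["login", "list-tenants", "assume-support-role", "read-messages"]
print(reach_progression(session))  # ['read-messages']
```

No single event in that session is anomalous; the alert fires because the sequence monotonically increases reach until it touches message content.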
Every SaaS security team should be able to answer the questions above before the next incident.
They are useful before an incident.
They are vital during one.
vec0 is built around a simple idea: breaches unfold as paths toward sensitive data.
For SaaS providers, those paths often run through identities, roles, service accounts, support systems, integrations, APIs, and shared platform infrastructure.
The job is not only to alert that something happened. The job is to understand whether access is progressing toward data that matters.
That means measuring how each action changes reachability.
Did this identity get closer to tenant data?
Did this role assumption open a new customer-data path?
Did this support action touch a more sensitive system?
Did this sequence stay ordinary in isolation but become meaningful as a path?
That's the difference between noise and blast-radius signal.
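Measuring how an action changes reachability can be sketched as a before-and-after comparison on the access graph: take the action as a new edge, recompute what the identity can reach, and report only newly reachable sensitive assets. All names below are hypothetical.

```python
# Sketch: score an action by how much it changes what an identity
# can reach. Graph and node names are illustrative, not a real model.
SENSITIVE = {"tenant-data", "message-content"}

def reachable(graph, start):
    """Depth-first traversal of everything reachable from start."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def reach_delta(graph, identity, new_edge):
    """Sensitive nodes newly reachable after an action adds an edge."""
    before = reachable(graph, identity)
    src, dst = new_edge
    updated = {k: list(v) for k, v in graph.items()}
    updated.setdefault(src, []).append(dst)
    after = reachable(updated, identity)
    return (after - before) & SENSITIVE

graph = {"analyst": ["support-console"], "admin-role": ["tenant-data"]}
# A role assumption is modelled as the edge analyst -> admin-role.
print(reach_delta(graph, "analyst", ("analyst", "admin-role")))
# {'tenant-data'}
```

An action with an empty delta is noise; an action whose delta contains tenant data or message content is exactly the blast-radius signal described above.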
Final breach scope will always matter.
Customers deserve accurate answers. Regulators need facts. Companies should not overstate what they know.
But SaaS providers cannot wait for the final record count before they understand the shape of the incident.
They need an earlier answer:
From the access the attacker had, what could they reach?
That's where the blast radius starts.
And it's where defenders still have a chance to contain the breach before access becomes deeper, broader, and harder to explain.
Chat with founder: [email protected]
Follow vec0: https://www.linkedin.com/company/vec0/