Declaration of Compromise

Detection Engines

To eliminate blind spots, any good NDR system must combine multiple detection engines, because no single mechanism can detect every threat.

Stamus Central Server (SCS) uses different technologies, sometimes in combination, to provide meaningful detection. The events produced, the detection technology behind them, and their use cases are summarized below.

  • IDS Alerts (Rules-based / Signature Detection): The Suricata detection engine lets you run thousands of rules on a single probe to detect all kinds of threats or misconfigurations. We offer 3 sources of signatures out of the box: ETPro rules from Proofpoint (formerly Emerging Threats), Stamus Threat Research rules created and validated by the Stamus Security Research Team, and your own custom rules!

  • Homoglyph Alerts (Advanced Analytics): Homoglyph alerts are generated when access to a suspicious domain name occurs, for example in phishing attacks where attackers try to mimic legitimate domains by replacing one letter with another.

  • Sightings Events (Behavioral Analytics): Sightings events are generated when a host accesses a new destination (domain or IP) for the first time. This approach is correlated with each host’s main role (printer, domain controller, …) to alert only on what really matters.

  • Beaconing Events (Machine Learning & Anomaly Detection): SCS detects beaconing in SSL/TLS traffic by first discovering suspicious TLS transactions and then computing their periodicity (see the sketch after this list).

  • IDS Alerts (Threat Intelligence & Signature Detection): SCS allows loading IoC feeds and using them in combination with Suricata signatures to alert, for example, when known suspicious domains or IPs are accessed.
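
The exact beaconing analytics are internal to SCS, but the periodicity idea can be illustrated with a short sketch. Assuming we have the timestamps of TLS connections from one host to one destination (the function name, thresholds, and data below are illustrative assumptions, not the SCS implementation), very regular inter-arrival times suggest beaconing:

    from statistics import mean, stdev

    def looks_like_beacon(timestamps, min_events=6, max_cv=0.1):
        """Illustrative periodicity check for one (host, destination) pair.

        timestamps: sorted connection times in seconds (e.g. from TLS flow logs).
        Returns True when inter-arrival times are numerous and regular enough
        (low coefficient of variation) to suggest beaconing.
        """
        if len(timestamps) < min_events:
            return False
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        cv = stdev(gaps) / mean(gaps)  # jitter relative to the average period
        return cv <= max_cv

    # A connection roughly every 60 seconds with little jitter is flagged.
    print(looks_like_beacon([0, 60, 121, 180, 241, 300, 360]))        # True
    print(looks_like_beacon([0, 10, 300, 320, 1000, 1500, 1600]))     # False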

Each of these detection engines has been designed to detect threats in a different manner, and all the output produced by these engines is, for us, an Indicator of Compromise.

Indicators of Compromise (IoC)

Historically, Indicators of Compromise referred to atomic indicators such as an IP address, a domain name, a hash value (MD5, SHA-256, …), and so on.

At Stamus Networks, we extended this definition to also include composed indicators such as an IDS alert, a Machine Learning event, a beacon, etc. By definition, such events, or alerts, contain multiple fields/values (so they aren’t atomic).

In short, an Indicator of Compromise is any piece of technical information, atomic or composed, describing a threat that may or may not have happened - or may be happening - in a defended environment.

What’s important to understand about Indicators of Compromise is that an implicit attribute, the likelihood of relevance, characterizes such indicators.

In reality, when we search for an IP address with a bad reputation or when we receive an IDS alert, we have no clue about the likelihood of relevance. This means, from an operational point of view, that we don’t know how relevant the alert is (i.e. is it a False Positive?), so it will trigger an investigation to assess whether the detected threat is really alive in the defended environment.

IoC = (TechnicalInformation, LikelihoodOfRelevance)

It is interesting to note that the likelihood of relevance directly depends on the detection method used or on the quality of the data source used (e.g., a third-party Threat Intelligence feed).
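
As a rough illustration of this formula (the class and field names below are assumptions for the sketch, not the SCS schema), an IoC can be modeled as the pair of some technical information, atomic or composed, and the likelihood of relevance inherited from the detection method or data source:

    from dataclasses import dataclass
    from typing import Dict, Union

    # Atomic indicator: a single value such as an IP, domain, or file hash.
    # Composed indicator: a multi-field event such as an IDS alert or a beacon.
    TechnicalInformation = Union[str, Dict[str, object]]

    @dataclass
    class IoC:
        technical_information: TechnicalInformation
        likelihood_of_relevance: str  # "low" | "intermediate" | "high"

    atomic = IoC("198.51.100.23", likelihood_of_relevance="low")
    composed = IoC(
        {"event_type": "alert", "signature": "ET MALWARE ...", "src_ip": "10.0.0.5"},
        likelihood_of_relevance="intermediate",
    )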

We define the following scale of confidence for indicators:

  • low confidence: These IoCs are more likely to be False Positives than True Positives. They are mostly noise, but True Positives do occur from time to time (so don’t throw everything into the trash bin!)

  • intermediate confidence: These IoCs are generated by detection methods or data sources that produce a relatively low volume of false positives (which does not mean they don’t exist). Here we are in a roughly fifty-fifty situation, and an analyst still needs to investigate to further qualify the reality of the threat. Obviously, intermediate- and low-confidence indicators cannot be used for automated responses, as the risk of disrupting network or user activity is too high.

  • high confidence: These IoCs rarely turn out to be False Positives. In short, when something triggers with such confidence, you want an analyst to investigate as soon as possible because something is most probably going on in the defended environment. High-confidence indicators are where one can start automating responses without risking disruption of network or user activity (see the sketch after this list).
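
Here is a minimal sketch of how such a confidence scale can translate into handling decisions, assuming the three labels above (the function and its return strings are illustrative, not SCS behavior):

    def handle_indicator(confidence: str) -> str:
        """Map an indicator's likelihood of relevance to a handling decision."""
        if confidence == "high":
            # Rarely a false positive: investigate as soon as possible and
            # consider the indicator a candidate for automated response.
            return "investigate immediately; candidate for automated response"
        if confidence == "intermediate":
            # Roughly fifty-fifty: an analyst must qualify the threat.
            return "queue for analyst investigation"
        # Low confidence: mostly noise, but keep it for context and hunting.
        return "retain for context and periodic review"

    print(handle_indicator("high"))
    print(handle_indicator("low"))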

For example, when an average Suricata signature generates alerts, we are relatively confident it is something we want to investigate, but we don’t know for certain whether there is a real threat until the investigation has been completed by an analyst. Hence, the likelihood of relevance is intermediate (i.e. fifty-fifty).

Finally, we should note that this scale has been built without threshold numbers, such as “if below a 53% rate, then it’s low confidence”. This is done on purpose because classifying the likelihood of relevance of an indicator is as much an art as it is a science. The Stamus Threat Research Team is here for that purpose; they are our artists :)

Declaration of Compromise (DoC)

At Stamus Networks, we developed the concept of Declaration of Compromise (DoC) as an extension of Indicators of Compromise (IoC).

We wanted a means to start progressing towards automated responses, and we also acknowledged that this cannot be achieved without high confidence in the emitted alerts. No one wants to automate responses if there is a significant risk of disrupting network or user activities!

In short, an Indicator of Compromise indicates that an issue (i.e. a threat) may be happening in the defended context, while a Declaration of Compromise indicates that an issue is happening in the defended context (and requires immediate action).

So, are DoCs just IoCs with high confidence? Yes, but no :-)

We define a DoC as the following:

Declaration of Compromise = (Asset, Threat, States)

A DoC is the association of one threat impacting one asset with some stateful information such as when it was first seen, when it was last seen, the kill chain phase the threat is in, and so on.

For example, if we have the same threat, such as Trickbot, on 5 different systems, we would have 5 Declarations of Compromise in the Stamus Central Server. Similarly, if we have 5 unique threats on a single host, we will have 5 DoCs as well.
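
A small sketch of the (Asset, Threat, States) idea, assuming illustrative field names rather than the actual SCS data model: keeping one record per asset-and-threat pair naturally yields 5 DoCs for one threat seen on 5 hosts, and 5 DoCs for 5 threats on one host.

    from dataclasses import dataclass
    from typing import Dict, Tuple

    @dataclass
    class DoCStates:
        first_seen: str
        last_seen: str
        kill_chain_phase: str

    # One DoC per (asset, threat) pair.
    docs: Dict[Tuple[str, str], DoCStates] = {}

    def record(asset: str, threat: str, seen_at: str, phase: str) -> None:
        key = (asset, threat)
        if key not in docs:
            docs[key] = DoCStates(first_seen=seen_at, last_seen=seen_at,
                                  kill_chain_phase=phase)
        else:
            docs[key].last_seen = seen_at
            docs[key].kill_chain_phase = phase

    # Trickbot seen on 5 different hosts -> 5 distinct DoCs.
    for host in ["host-1", "host-2", "host-3", "host-4", "host-5"]:
        record(host, "Trickbot", "2023-01-10T12:00:00Z", "command and control")
    print(len(docs))  # 5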

Any Indicator of Compromise, regardless of its likelihood of relevance, can be elevated to a Declaration of Compromise, either manually by an analyst or automatically by SCS using custom-defined rules. The subtlety concerns high-confidence indicators, which can be automatically elevated to DoCs using Policies, for example, because a DoC implies very high confidence that the issue is real. That is the case for events produced by SCS itself, such as those with event_type = stamus.
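
As a simplified illustration of that elevation logic (only the event_type = stamus value comes from the text above; the confidence field and function are assumptions for the sketch), a policy might auto-elevate only high-confidence indicators and leave everything else to an analyst:

    def should_auto_elevate(event: dict) -> bool:
        """Illustrative policy: auto-elevate only high-confidence indicators.

        Events produced by SCS itself (event_type = stamus) are treated as
        high confidence; everything else waits for an analyst's decision
        unless a custom-defined rule marks it as high confidence.
        """
        if event.get("event_type") == "stamus":
            return True
        return event.get("confidence") == "high"

    print(should_auto_elevate({"event_type": "stamus"}))                              # True
    print(should_auto_elevate({"event_type": "alert", "confidence": "intermediate"})) # False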