
A network of SOCs?

Crossposted on the CERT.at blog.

Preface

I wrote most of this text quickly in January 2021 when the European Commission asked me to apply my lessons learned from the CSIRTs Network to a potential European Network of SOCs. During 2022, the plans for SOC collaboration were toned down a bit: the DIGITAL Europe funding scheme proposes multiple platforms where SOCs can work together. In 2023, the newly proposed “Cyber Solidarity Act” builds upon this and codifies the concepts of a “national SOC” and “cross-border SOC platforms” into an EU regulation.

At the CSIRTs Network Meeting in Stockholm in June 2023, I gave a presentation on the strengths and flaws of the CSoA approach. A position paper / blog post on that is in the works.

The original text (with minor edits) starts below.

Context

The NIS Directive established the CSIRTs Network (CNW) in 2016, and the EU Cybersecurity Strategy from 2020 tries to do something similar for SOCs (Security Operations Centres).

I was asked by DG-CNECT to provide some lessons identified from the CNW that might be applicable to the SOC Network (SNW).

The following points are not a fully fleshed out whitepaper, instead they are a number of propositions with short explanations.

The most important point is that one cannot just focus on the technical aspects of SOC collaboration. That is the easy part. We know which tools work. The same stack that we developed for the CSIRTs Network can almost 1:1 support SOC networks.

Our colleagues from CCN-CERT presented the Spanish SOC Network at various meetings recently. Yes, there was one slide with their MISP setup, but the main content was the administrative side and the incentive structure they built to encourage active participation by all members.

Human Element

Trust

Any close cooperation needs a basic level of trust between participants. The more sensitive the topic and the more damage could potentially be done by the misuse of information shared between the organisations, the more trust is needed for effective collaboration.

There must be an understanding that one can rely on others to keep secrets, and to actually pass on information when they learn something important for the partner.

Trust is not binary

Trust is not a binary thing: There is more than “I trust” or “I don’t trust”; it always depends on the concrete case whether you trust someone enough to cooperate in this instance.

Trust needs Time

Some basic level of trust is given to others based on their position (e.g., I trust the baker to sell me edible bread; I trust every police officer to do the basics correctly), but only repeated interactions with the same person/organisation increase the trust over time. (See “The Evolution of Cooperation”.)

Thus, one needs to give all these networks time to establish themselves and the trust relationships.

These things really take time. We are talking about years.

Physical meetings (incl. social events) help

Bringing people together is very helpful to bootstrap cooperation.

You can’t legislate Trust

There are limited possibilities to declare ex cathedra that one has to trust someone. It might work to a certain degree if people are forced by external events to collaborate (e.g., call the police if you have to deal with a significant crime; or reporting requirements to authorities; or hand your kids over to day-care/school/ …).

Even in these cases, these organisations have to be very careful about their reputation: misuse of their positions of trust will significantly affect how much trust is given, even under duress.

Persons or Teams

Trust can be anchored either to persons or to organisations. I might trust a certain barber shop to get my haircut right, but I’ll prefer to go to the same person if they got it right the last time.

Experience has shown that it is possible to establish institutional trust: If I know that Team X is competently run, then I will not hesitate to use the formal contact point of that team.

Still: if something is really sensitive, I will try to reach the buddy working for that other team with whom I have bonded over beer and common incidents.

Group Size

Close cooperation in groups cannot be sustained if the number of participants increases beyond a certain limit. This has been observed in multiple fora, amongst them FIRST, TF-CSIRT, and ops-t (which was actually an experiment in scaling trust groups).

As a rule of thumb: whenever you cannot have every member of the group present their current work/topics/ideas/issues during a meeting, then the willingness to have an open sharing decreases significantly. This puts the limit at about 15 to 20 participants. If lower levels of cooperation are acceptable, then group sizes can be larger.

Corollary: Group Splits

If a group becomes too big, then there is a chance that core members will split off and create a new, smaller forum for more intense collaboration.

This is similar to what happens with groups of animals: if one pack becomes too big to be viable, it will split up.

Adding members

Organic growth from within the group works best.

An external membership process (as in the CNW, where existing members have no say over the inclusion of a new team from another EU Member State) can be very detrimental to the trust inside the group.

Motivation

Cost

Any level of participation in a network of peers comes with costs. Nobody in this business has spare time for anything. Even just passive participation via the odd telephone conference, or merely reading emails, costs time and is thus not free.

Active participation, be it travelling to conferences, working on common projects, manually forwarding information, or setting up Machine to Machine (M2M) communication can carry significant costs. These must not exceed the benefits from the participation in the network.

Corollary: Separate tooling is detrimental to sharing

Sharing information into a network must be as low-friction as possible. If an analyst has to re-enter information about an incident in a different interface to share the data, then the chance is high that it will not happen. Optimally, the sharing option is built into the core systems and the overhead of sharing is just selecting with whom.
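
To illustrate what “built into the core systems” could look like, here is a minimal sketch assuming the network runs a shared MISP instance; the URL, API key and indicator values are purely illustrative, not an existing deployment:

  # Hypothetical sketch: pushing an indicator to an assumed shared MISP
  # instance directly from the analyst's workflow, so that sharing is
  # reduced to picking a distribution level instead of re-typing data
  # into a separate interface.
  from pymisp import PyMISP, MISPEvent

  misp = PyMISP("https://misp.soc-network.example", "ANALYST_API_KEY", ssl=True)

  event = MISPEvent()
  event.info = "Phishing campaign against finance staff"  # illustrative case
  event.distribution = 2  # "connected communities" -- the only sharing decision the analyst makes
  event.add_attribute("ip-dst", "203.0.113.42", comment="C2 server")
  event.add_attribute("url", "https://login.badsite.example/portal")

  misp.add_event(event)  # one call; no second tool, no re-entering of data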

Benefits

The flip side is often not so easy to quantify: what are the concrete benefits of collaboration? If the bean-counters ask to justify the cost, there should be clear business reasons why the costs are worth it. “Interesting discussions” and “being a good corporate citizen” are not a sustainable long-term motivation.

It must be as clear as possible what value each participant will get from such a network.

Beware of freeloaders and the “Tragedy of the Commons” effect.

Peers

Networks work best between organisations that are comparable in size, their jobs, and their position in the market. Their technology and informational needs should be roughly the same. They should face similar tasks and challenges. For example, the SOC of VW and the SOC of Renault should have roughly the same job and thus an exchange of experiences and data might be mutually beneficial.

Vendor/customer mix can kill networks

If two members of a network are actually in a vendor/customer relationship in terms of cyber security, then this is a strong detriment to collaboration. Even just a potential sale is tricky: if one member is describing his problem, then someone else should not be in the position to offer his own commercial product or service to address that problem.

I have seen this work only if the representative of the vendor can clearly differentiate between his role as network partner and his pre-sales job. This is the exception, not the rule.

Competition (1)

Ideally, the members of the network should be in no competition with one another. Example: the security team of Vienna’s city hospitals and the equivalent team of the Berlin Charité are a best case: their hosting organisations work in the same sector, but there is absolutely no competition for customers between those two.

If the hosting organisations are actually competing with each other (see the VW vs. Renault example from above, or different banks), then cooperation on IT security is not a given. Nevertheless, it is also not impossible, as competitors often collaborate with respect to lobbying, standardization or interconnection. One positive example I have seen are the Austrian banks, who cooperate on e-banking security based on the premise that customers will not differentiate between “e-banking at Bank X is insecure” and “e-banking is insecure”.

Competition (2)

Even trickier is the case of SOCs not just protecting the infrastructure of their respective hosting organisation, but also offering their services on a commercial basis to any customer (“SOC outsourcing”). Anything one SOC shares with the network then potentially helps a direct competitor. Example: both Deloitte and Cap Gemini offer SOC outsourcing and Threat Intel reporting. Their knowledge base is their competitive advantage and why should they share this freely with a competitor when they are selling the same information to a customer?

Such constellations are extremely difficult, but not impossible to manage.

The trick to dealing with competition in such networks is to move the collaboration to a purely operational / technical layer. These people are used to dealing with their peers in a productive way.

Alignment of interest

This all boils down to

  • Is it a good commercial decision for my SOC to participate in the network?
  • Is it a good commercial decision to share data into the network?

Resources

All members must make the clear management decision to participate in such a network and must allocate staff time to it. In some ways, such networks operate a bit like amateur sport clubs or open-source projects: they thrive on the voluntary work done by their members. I have seen too many cases where such networks failed simply because members lost interest and did not invest time and effort in running them effectively.

Running a network

Secretariat

While not strictly necessary, a paid back-office increases the chances of success significantly. Someone has to organize meetings, write minutes, keep track of memberships, produce reports, and provide an external point of contact.

Doing this on a voluntary basis might work for very small and static networks, where a round-robin chair role can succeed.

Connecting people

Bringing people together is the basic foundation of a collaboration network. Only if the network is limited to the distribution of information from a handful of central sources to all members (i.e., a one-way information flow) might this not be needed. Connecting people can be done by (in order of importance):

  • Physical meetings (conferences, workshops, …)
  • Continuous low-friction instant messaging
  • Mailing-lists
  • Web-Forums

Generic central tooling

Any network, regardless of topic, needs a few central tooling components:

  • A directory of members (preferably with self-service editing)
  • A file repository
  • An administrative mailing-list
  • A topical mailing-list
  • An instant messaging facility

A decent identity and access management (IAM) solution covering all these tools is recommended (but not strictly necessary in the first iteration). The toolset created for the CSIRTs Network (MeliCERTes 2) can help here.

Exchange of Information

In the end, the main motivation for such a network is information sharing with the intention of making members more effective in their core task. Here are some thoughts on that aspect:

Compatible levels of Maturity

If members are at very different levels of technological and organisational maturity, then any information exchange is of limited value. A common baseline is helpful.

Human to Human

This is the easiest information exchange to get going, and some topics really need to be covered on the human layer: people can talk about experiences, about cases, about what works and what does not.

It is also possible to exchange Cyber Threat Intelligence (CTI) between humans: the typical write-up of a detected APT campaign, including all the Indicators (IoCs) found during the incident response, is exactly that.

This sounds easy, but is costly in terms of human time. On the receiving side, the SOC needs to operationalize the information contained in such a report so that its automated systems can detect a similar campaign in the local constituency.
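
As a rough illustration of what “operationalizing” such a write-up means in practice, here is a hedged sketch; the input structure and the CSV lookup it produces are my own assumptions, not a format prescribed by any network:

  # Hypothetical sketch: turning IoCs copied out of a human-readable APT
  # write-up into a simple CSV lookup that a SIEM can match against
  # proxy and DNS logs. Input and output formats are illustrative.
  import csv

  def iocs_to_lookup(report_iocs: list[dict], outfile: str) -> None:
      """Write network indicators from a parsed report into a CSV lookup."""
      with open(outfile, "w", newline="") as fh:
          writer = csv.writer(fh)
          writer.writerow(["indicator", "type", "campaign", "source"])
          for ioc in report_iocs:
              if ioc["type"] in ("domain", "ip-dst", "url"):
                  writer.writerow([ioc["value"], ioc["type"],
                                   ioc.get("campaign", "unknown"),
                                   ioc.get("source", "partner-report")])

  # Example: a few indicators extracted manually from a partner's report.
  iocs_to_lookup(
      [{"value": "update.badcdn.example", "type": "domain", "campaign": "APT-X"},
       {"value": "203.0.113.42", "type": "ip-dst", "campaign": "APT-X"}],
      "apt_x_lookup.csv",
  )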

Information Management

The way a SOC is gathering, storing, correlating and de-duplicating the CTI that is powering its detection capability is a core element in the SOC internal workflow. Its maturity in this respect drives the possibilities of collaboration on the topic of CTI.

One (not uncontroversial) theory on this topic is the “Pyramid of Pain” concept from David Bianco, where he describes the levels of abstraction in CTI. The lower levels are easy for SIEMs to detect, but also trivial for the threat actor to change. The challenge for SOCs is to operate at a higher level than what the threat actor is prepared to change frequently.
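
To make the difference between the levels concrete, here is a hedged sketch contrasting a hash-based match (bottom of the pyramid) with a TTP-style behavioural check (near the top); the log field names are illustrative, not tied to any specific SIEM:

  # Hypothetical sketch: a low-level indicator match is trivial for the
  # attacker to evade (recompile the malware), while a behavioural check
  # targets something the actor is less willing to change.
  KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # illustrative hash value

  def matches_hash(event: dict) -> bool:
      """Bottom of the pyramid: exact file-hash match."""
      return event.get("file_md5") in KNOWN_BAD_HASHES

  def matches_behaviour(event: dict) -> bool:
      """Higher up: Office spawning a shell, a TTP that is much harder
      for the threat actor to change than a file hash."""
      return (event.get("parent_process", "").endswith("winword.exe")
              and event.get("process", "").endswith(("powershell.exe", "cmd.exe")))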

CTI M2M

In theory, SOCs should be able to cross-connect their CTI systems to profit from each other’s learnings and thus increase the overall detection capability of the SOC Network. Regrettably, this is non-trivial on multiple fronts:

Data protection / customer privacy

It must be ensured that no information about the customer at which the CTI was found during incident response leaks out. Sometimes this is easy and trivial, sometimes it is not. Thus, unless the SOC is very mature at entering CTI into its system, people will want to check manually what is being shared.

Data licencing

Many SOCs buy CTI data from commercial sources. Such data needs to be excluded from automatic data sharing.
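
A hedged sketch of what a pre-sharing filter covering both the privacy and the licensing concern could look like, assuming events carry TLP markings and source labels; the tag names are illustrative conventions, not a standard:

  # Hypothetical sketch: deciding per CTI event whether it may be pushed
  # into the network. Tag names such as "commercial-feed" and
  # "customer-identifying" are illustrative, not standardized.
  BLOCKED_TAGS = {"tlp:red", "commercial-feed", "customer-identifying"}
  RELEASABLE_TAGS = {"tlp:clear", "tlp:green"}

  def may_be_shared(event: dict) -> bool:
      """Share only events without blocked tags that carry an explicit
      releasable marking -- i.e., share nothing by default."""
      tags = set(event.get("tags", []))
      if tags & BLOCKED_TAGS:
          return False
      return bool(tags & RELEASABLE_TAGS)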

Data compatibility

While there are a number of standards for CTI data exchange (e.g., STIX/TAXII, MISP or Sigma rules), this is far from being a settled topic, especially if you want to move up the Pyramid of Pain.
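
For instance, a single network indicator can be wrapped into a STIX 2.1 object with the python-stix2 library (a minimal sketch; the values are illustrative):

  # Hypothetical sketch: expressing one IP-based indicator as a STIX 2.1
  # object, ready to be exchanged over TAXII. Values are illustrative.
  from stix2 import Indicator, Bundle

  indicator = Indicator(
      name="C2 server seen in APT-X campaign",
      pattern="[ipv4-addr:value = '203.0.113.42']",
      pattern_type="stix",
  )

  bundle = Bundle(objects=[indicator])
  print(bundle.serialize(pretty=True))

Such atomic indicators are easy to express; the higher levels of the Pyramid of Pain (tools, TTPs) are where the standards are still weakest.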

Sharing tools

In addition to sharing information, it is also possible for members of the network to share the tools they have written to perform various aspects of a SOC’s job.