Categories
CERT Politics

A review of the “Concluding report of the High-Level Group on access to data for effective law enforcement”

(Crossposted from the CERT.at blog.)

As I’ve written here, the EU unveiled a roadmap in June 2025 for addressing the encryption woes of law enforcement agencies. In preparation for this push, a “High-Level Group on access to data for effective law enforcement” summarized the problems facing law enforcement and developed a list of recommendations.

(Side note: While the EU Chat Control proposal is making headlines these days and has been defeated (hallelujah!), the HLG report deals with a larger topic. I will not comment on the Chat Control/CSAM issues here at all.)

I’ve read this report and its conclusions. It is certainly a well-argued document, but written strictly from a law enforcement perspective. Some points are fairly uncontroversial (shared training and tooling); others are pretty spicy. In many cases, it hedges by using language similar to that used by the Commission:

In 2026, the Commission will present a Technology Roadmap on encryption to identify and evaluate solutions that enable lawful access to encrypted data by law enforcement, while safeguarding cybersecurity and fundamental rights.

They are not presenting concrete solutions; they are hoping that there is a magic bullet that will fulfill all the requirements. Let’s see what this expert group will come up with.

Categories
CERT Pet Peeves

NIS2 in Austria

We still don’t have a NIS2 law in Austria; we’re now more than a year late. After seeing Süleyman’s post on LinkedIn, I finally did the quick Photoshop job I had been planning for a long time.

Original:

See https://en.wikipedia.org/wiki/Joel_Pett

NIS2 Version:

(Yes, this is a gross oversimplification. For the public administration side, we really need the NIS2 law. But for private companies that will be forced to conform to security standards: what’s holding you back from implementing them right now?)

Categories
CERT Internet Politics

Chat Control vs. File Sharing

The spectre of “law enforcement going dark” is on the EU agenda once again. I’ve written multiple times about the unintended consequences of states using malware to break into mobile phones to monitor communication. See here and here. Recently it became known that yet another democratic EU Member State has employed such software to spy on journalists and other civil society figures – and not on the hardened criminals or terrorists who are always cited as the reason these methods are needed.

Anyway, I want to discuss a different aspect today: the intention of various law enforcement agencies to enact legislation forcing the operators of “over-the-top” (OTT) communication services (WhatsApp, Signal, iChat, Skype, …) to implement a backdoor to the end-to-end encryption feature that all modern applications have introduced in recent years. When I talked to a Belgian public prosecutor about that topic last year, he said: “we don’t want a backdoor for the encryption, we want the collaboration of the operators to give us access when we ask for it”.

Let’s assume the law enforcement folks win the debate in the EU and chat control becomes law. How might this play out?

Categories
CERT Internet

Feedback to the NIS2 Implementing Acts

The EU is asking for feedback regarding the Implementing Acts that define some of the details of the NIS2 requirements with respect to reporting thresholds and security measures.

I didn’t have time for a full word-for-word review, but I took some time today to give some feedback. For whatever reason, the EU site does not preserve the paragraph breaks in the submission, leading to a wall of text that is hard to read. Thus I’m posting the text here for better readability.

Feedback from: Otmar Lendl

We will have an enormous variation in the size of relevant entities, ranging from a 2-person web-design and hosting team that also hosts its customers’ domains to large multinational companies. The recitals (4) and (5) are a good start but are not enough.

The only way to make this workable is by emphasising the principle of proportionality and the risk-based approach. This can be done either by clearly stating that these principles can override every single item listed in the Annex, or by consistently using such language in the list of technical and methodological requirements.

Right now, there is good language in several points, e.g., 5.2. (a) “establish, based on the risk assessment”, 6.8.1. “[…] in accordance with the results of the risk assessment”, 10.2.1. “[…] if required for their role”, or 13.2.2. (a) “based on the results of the risk assessment”.

The lack of such qualifiers in other points could be interpreted to mean that these considerations do not apply there. The text needs to clearly pre-empt such a reading.

In the same direction: exhaustive lists (examples in 3.2.3, 6.7.2, 6.8.2, 13.1.2.) could lead to auditors doing a blind check-marking exercise without allowing entities to diverge based on their specific risk assessment.

A clear statement on the security objective before each list of measures would also be helpful to guide the entities and their auditors to perform the risk-based assessment on the measure’s relevance in the concrete situation. For example, some of the points in the annex are specific to Windows-based networks (e.g., 11.3.1. and 12.3.2. (b)) and are not applicable to other environments.

As the CrowdStrike incident from July 19th showed, recital (17) and the text in 6.9.2. are very relevant: there are often counter-risks to evaluate when deploying a security control. Again: there must be clear guidance to auditors to also follow a risk-based approach when evaluating compliance.

The text should encourage adoption of standardized policies: there is no need to re-invent the wheel for every single entity, especially the smaller ones.

Article 3 (f) is unclear; it would be better to split it into two items, e.g.:

(f1) a successful breach of a security system that led to unauthorised access to sensitive data [at systems of entity] by an external and suspected malicious actor was detected.

(Reason: a lost password shouldn’t trigger a mandatory report; using a design flaw or an implementation error to bypass protections and access sensitive data should.)

(f2) a sustained “command and control” communication channel was detected that gives a suspected malicious actor unauthorised access to internal systems of the entity.

Categories
CERT System Administration

RT: Different From: for a certain user

At CERT.at, we recently changed the way we send out bulk email notifications with RT: All Correspondence from the Automation user will have a different From: address compared to constituency interactions done manually by one of our analysts.

How did I implement this?

  • The first idea was to use a different queue to send the mails, and then move the ticket to the normal one: The From: address is one of the things that can be configured on a queue-level.
  • Then I tested whether I could override the From: header by including it in the Correspondence Template. Yep, that works. Hence idea two: modify the Scrips so that different ones trigger based on who is running the transaction.
  • But it’s even simpler: the Template itself can contain Perl code, so I can just do the “if then else” thingy inside the Template.

In the end, it was just a one-liner in the right spot. Our “Correspondence”-Template looks like this now:

RT-Attach-Message: yes
Content-Transfer-Encoding: 8bit{($Transaction->CreatorObj->Name eq 'intelmq') ? "\nFrom: noreply\@example.at" : ""}

{$Transaction->Content()}

The “Content-Transfer-Encoding: 8bit” was needed because of our CipherMail instance; without it, we would get strange MIME encoding errors.
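Outside RT’s template syntax, the decision is just a conditional on the transaction’s creator. A toy Python sketch of the same logic (the user name and From: address are the ones from the template above; this is an illustration, not RT code):

```python
def correspondence_headers(creator: str) -> str:
    """Toy re-statement of the RT template logic above:
    correspondence created by the automation user ('intelmq') gets an
    explicit From: header; everything else keeps the queue default."""
    headers = ["RT-Attach-Message: yes", "Content-Transfer-Encoding: 8bit"]
    if creator == "intelmq":
        headers.append("From: noreply@example.at")
    return "\n".join(headers)

print(correspondence_headers("intelmq"))
```

The nice property of doing this in the Template (rather than in a Scrip) is that the whole decision lives in one place.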

Categories
CERT Internet Pet Peeves

Roles in Cybersecurity: CSIRTs / LE / others

(Crossposted from the CERT.at blog)

Back in January 2024, I was asked by the Belgian EU Presidency to moderate a panel during their high-level conference on cyber security in Brussels. The topic was the relationship between cyber security and law enforcement: how do CSIRTs and the police/public prosecutors cooperate, what works, and where are the fault lines in this collaboration? As the moderator, I wasn’t in a position to really present my own view on some of the issues, so I’m using this blogpost to document my thinking regarding the CSIRT/LE division of labour. From that starting point, this text kind of turned into a rant on what’s wrong with IT Security.

When I got the assignment, I recalled a report I had read years ago: “Measuring the Cost of Cybercrime” by Ross Anderson et al. from 2012. In it, the authors try to estimate the effects of criminal actors on the whole economy: what are the direct losses, and what are the costs of the defensive measures put in place to defend against the threat? The numbers were huge back then, and as various speakers during the conference mentioned, they have kept rising and rising; the figures for 2024 have reached obscene levels. Anderson et al. write in their conclusions: “The straightforward conclusion to draw on the basis of the comparative figures collected in this study is that we should perhaps spend less in anticipation of computer crime (on antivirus, firewalls etc.) but we should certainly spend an awful lot more on catching and punishing the perpetrators.”

Over the past few years, the EU has proposed and enacted a number of legal acts that focus on the prevention, detection, and response to cybersecurity threats. Following the original NIS directive from 2016, we are now in the process of transposing and thus implementing the NIS 2 directive with its expanded scope and security requirements. This imposes a significant burden on huge numbers of “essential” and “important entities”, which have to invest heavily in their cybersecurity defences. I failed to find a figure in Euros for this, only the EU Commission’s estimate that entities new to the NIS game will have to increase their IT security budget by 22 percent, whereas the NIS1 “operators of essential services” will have to add 12 percent to their current spending levels. And this isn’t simply CAPEX; there is a huge impact on operational expenses, including manpower and effects on the flexibility of the entity.
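For a feel of the numbers: the Commission’s percentages translate into budgets like this (a toy calculation; the baseline figure is invented):

```python
def nis2_budget(current_budget: float, new_to_nis: bool) -> float:
    """Projected IT security budget under NIS2, per the Commission's
    estimate quoted above: +22% for entities new to the NIS regime,
    +12% for existing NIS1 operators of essential services."""
    return current_budget * (1.22 if new_to_nis else 1.12)

# An invented example baseline of EUR 1,000,000:
print(round(nis2_budget(1_000_000, new_to_nis=True)))   # 1220000
print(round(nis2_budget(1_000_000, new_to_nis=False)))  # 1120000
```

Multiplied across thousands of newly regulated entities, even these simple percentages add up to the “huge cost” mentioned below.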

This all adds up to a huge cost for companies and other organisations.

What is happening here? We would never ever tolerate that kind of security environment in the physical world, so why do we allow it to happen online?

Categories
CERT Pet Peeves

On Cybersecurity Alert Levels

Last week I was invited to provide some input to a tabletop exercise for city-level crisis managers on cyber security risks and the role of CSIRTs. The organizers brought a color-coded threat-level sheet (based on the CISA Alert Levels) to the discussion and asked whether we also do color-coded alerts in Austria and what I think of these systems.

My answer was negative on both questions, and I think it might be useful if I explain my rationale here. The first answer was rather obvious and easy to explain; the second one required some thought to pin down why my initial reaction to the document was so negative.

Escalation Ratchet

The first problem with color-coded threat levels is their tendency to be a one-way escalation ratchet: easy to escalate, but hard to de-escalate. I’ve been hit by that mechanism before during a real-world incident and that led me to be wary of that effect. Basically, the person who raises the alert takes very little risk: if something bad happens, she did the right thing, and if the danger doesn’t materialize, then “better safe than sorry” is proclaimed, and everyone is happy, nevertheless. In other words, raising the threat level is a safe decision.


On the other hand, lowering the threat level is an inherently risky decision: If nothing bad happens afterwards, there might be some “thank you” notes, but if the threat materializes, then the blame falls squarely on the shoulders of the person who gave the signal that the danger was over. Thus, in a CYA-dominated environment like public service, it is not a good career move to greenlight a de-escalation.
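This asymmetry can be made concrete with a toy expected-blame calculation (all probabilities and blame scores are invented for illustration):

```python
def expected_blame(action: str, p_threat: float) -> float:
    """Toy model: the expected personal 'blame' an official faces.
    Raising costs little either way; lowering is cheap if nothing
    happens but catastrophic for the decision-maker if the threat hits."""
    if action == "raise":
        # threat hits: praised (0); nothing happens: "better safe than sorry" (1)
        return (1 - p_threat) * 1
    if action == "lower":
        # nothing happens: mild thanks (0); threat hits: full blame (100)
        return p_threat * 100
    raise ValueError(f"unknown action: {action}")

# Even with only a 5% residual threat, lowering looks far riskier
# to the individual official than keeping the alert raised:
print(expected_blame("raise", 0.05))
print(expected_blame("lower", 0.05))
```

As long as the official optimizes personal blame instead of societal cost, the alert level only ever goes up.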


We’ve seen this process play out in the non-cyber world over the last years, examples include

  • Terror threat level after 9/11
  • Border controls in the Schengen zone after the migration wave of 2015
  • Coming down from the pandemic emergency

That’s why I’ve always been pushing for clear de-escalation rules to be in place whenever we do raise the alarm level.

Cost of escalation

For threat levels to make sense, any level above “green” needs to give a clear indication of what the recipient of the warnings should be doing at that threat level. In the example I saw, there was a lot of “Identify and patch vulnerable systems”. Well, duh! That is what you should be doing at level green, too.


Thus, relevant guidance at higher levels needs to be more than “protect your systems and prepare for attacks”. That’s a standing order for anyone doing IT operations; as an escalation message it is useless. What people need to know is what costs they should be willing to pay for better preparation against incidents.


This could be a simple thing like “We expect a patch for a relevant system to be released outside our office hours; we need to have a team on standby to react as quickly as possible, and we’re willing to pay for the overtime work to have the patch deployed ASAP.” Or the advice could be “You need to patch this outside your regular patching cadence; plan for a business disruption and/or night shifts for the IT people.” At the extreme end, it might even be “We’re taking service X out of production; the changes to the risk equation mean that its benefits can’t justify the increased risks anymore.”


To summarize: if a preventative security measure has no hard costs, then you should have implemented it a long time ago, regardless of any threat-level board.

Counterpoint

There is definitely value in categorizing a specific incident or vulnerability in some sort of threat-level scheme: a particularly bad patch day, or an out-of-band patch release by an important vendor, is certainly a good reason for the response to be more than business as usual.


But a generic threat level increase without concrete vulnerabilities listed or TTPs to guard against? That’s just a fancy way of saying “be afraid” and there is little benefit in that.

Postscript: Just after posting this article, I stumbled on a fediverse post making almost the same argument, just with April 1st vs. the everyday flood of misinformation.

Categories
CERT

A classification of CTI Data feeds

(Crossposted on the CERT.at blog.)

I was recently involved in discussions about the procurement of Cyber Threat Intelligence (CTI) feeds. This led to discussions on the nature of CTI and what to do with it. Here is how I see this topic.

One way to structure and classify CTI feeds is to look at the abstraction level at which they operate. As Wikipedia puts it: 

  • Tactical: Typically used to help identify threat actors (TAs). Indicators of compromise (such as IP addresses, Internet domains or hashes) are used and the analysis of tactics, techniques and procedures (TTP) used by cybercriminals is beginning to be deepened. Insights generated at the tactical level will help security teams predict upcoming attacks and identify them at the earliest possible stages.
  • Operational: This is the most technical level of threat intelligence. It shares hard and specific details about attacks, motivation, threat actor capabilities, and individual campaigns. Insights provided by threat intelligence experts at this level include the nature, intent, and timing of emerging threats. This type of information is more difficult to obtain and is most often collected through deep, obscure web forums that internal teams cannot access. Security and attack response teams are the ones that use this type of operational intelligence.
  • Strategic: Usually tailored to non-technical audiences, intelligence on general risks associated with cyberthreats. The goal is to deliver, in the form of white papers and reports, a detailed analysis of current and projected future risks to the business, as well as the potential consequences of threats to help leaders prioritize their responses.
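In tooling, this three-level scheme can serve as a simple routing key for incoming feed items. A minimal sketch (the level names come from the list above; the audiences and the example item are my own illustration):

```python
from dataclasses import dataclass

# Level names per the classification quoted above; routing targets
# are an invented example of who might consume each abstraction level.
TACTICAL, OPERATIONAL, STRATEGIC = "tactical", "operational", "strategic"

@dataclass
class CTIItem:
    level: str
    content: str

def route(item: CTIItem) -> str:
    """Toy routing: which part of the organisation consumes this item."""
    return {
        TACTICAL: "SOC detection engineering (block/alert on IOCs)",
        OPERATIONAL: "incident response / threat hunting",
        STRATEGIC: "CISO and management reporting",
    }[item.level]

print(route(CTIItem(TACTICAL, "malicious IP 203.0.113.7")))
```

The point of such a mapping is less the code than the procurement question it raises: a feed is only worth buying if the team it would route to actually exists and can act on it.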
Categories
CERT Internet

Boeing vs. Microsoft

2019: Boeing used a single “angle of attack” (AOA) sensor to control the automatic pitch control software (MCAS). A software upgrade to use both AOA sensors was sold as an optional extra. Result: two planes crashed, killing 346 people. The damages and the subsequent grounding of the whole 737 MAX fleet cost Boeing a few billion dollars.

2023: Microsoft sold access to advanced logs in their MS-365 service as an add-on premium product. Luckily, someone did buy it and used the data provided to detect a significant intrusion into the whole Azure infrastructure.

For good or bad, we do not treat security as seriously as safety.

Yet.

I think this will change for critical infrastructure.

Microsoft should think hard about what that would mean. If one can ground a fleet of planes for safety reasons, I see no fundamental reason not to temporarily shut down a cloud service until the operator can prove that it has solved a design flaw.

We just need a regulatory authority with a bit more desire to be taken seriously.

Categories
CERT

A network of SOCs?

Preface

I wrote most of this text quickly in January 2021, when the European Commission asked me to apply my lessons learned from the CSIRTs Network to a potential European Network of SOCs. During 2022, the plans for SOC collaboration were toned down a bit; the DIGITAL Europe funding scheme proposes multiple platforms where SOCs can work together. In 2023, the newly proposed “Cyber Solidarity Act” built upon this and codified the concepts of a “national SOC” and “cross-border SOC platforms” into an EU regulation.

At the CSIRTs Network Meeting in Stockholm in June 2023, I gave a presentation on the strengths and flaws of the CSoA approach. A position paper / blog post on that is in the works.

The original text (with minor edits) starts below.