Categories
CERT Internet

Feedback to the NIS2 Implementing Acts

The EU is asking for feedback regarding the Implementing Acts that define some of the details of the NIS2 requirements with respect to reporting thresholds and security measures.

I didn’t have time for a full word-for-word review, but I took some time today to give some feedback. For whatever reason, the EU site does not preserve the paragraph breaks in the submission, leading to a wall of text that is hard to read. Thus I’m posting the text here for better readability.

Feedback from: Otmar Lendl

We will have an enormous variation in the size of relevant entities. This will range from a 2-person web-design and hosting team that also hosts the domains of its customers to large multinational companies. Recitals (4) and (5) are a good start but are not enough.

The only way to make this workable is by emphasising the principle of proportionality and the risk-based approach. This can be done either by clearly stating that these principles can override every single item listed in the Annex, or by consistently using such language in the list of technical and methodological requirements.

Right now, there is good language in several points, e.g., 5.2. (a) “establish, based on the risk assessment”, 6.8.1. “[…] in accordance with the results of the risk assessment”, 10.2.1. “[…] if required for their role”, or 13.2.2. (a) “based on the results of the risk assessment”.

The lack of such qualifiers in other points could be interpreted to mean that these considerations do not apply there. The text needs to clearly pre-empt such a reading.

In the same direction: exhaustive lists (examples in 3.2.3, 6.7.2, 6.8.2, 13.1.2.) could lead to auditors performing a blind check-marking exercise without allowing entities to diverge based on their specific risk assessment.

A clear statement of the security objective before each list of measures would also be helpful to guide entities and their auditors in performing the risk-based assessment of a measure’s relevance in the concrete situation. For example, some of the points in the annex are specific to Windows-based networks (e.g., 11.3.1. and 12.3.2. (b)) and are not applicable to other environments.

As the CrowdStrike incident from July 19th showed, recital (17) and the text in 6.9.2. are very relevant: there are often counter-risks to evaluate when deploying a security control. Again: there must be clear guidance to auditors to also follow a risk-based approach when evaluating compliance.

The text should encourage adoption of standardized policies: there is no need to re-invent the wheel for every single entity, especially the smaller ones.

Article 3 (f) is unclear; it would be better to split it into two items, e.g.:

(f1) a successful breach of a security system that led to unauthorised access to sensitive data [at systems of the entity] by an external and suspected malicious actor was detected.

(Reason: a lost password shouldn’t trigger a mandatory report; using a design flaw or an implementation error to bypass protections and access sensitive data should.)

(f2) a sustained “command and control” communication channel was detected that gives a suspected malicious actor unauthorised access to internal systems of the entity.

Categories
Internet

Browsertab Dump 2024-07-23

I keep accumulating pages in browser tabs that I should read and/or remember, but sometimes it’s really time to clean up. So I’m trying something new: dump the links here in a blog post.

Categories
CERT System Administration

RT: Different From: for a certain user

At CERT.at, we recently changed the way we send out bulk email notifications with RT: All Correspondence from the Automation user will have a different From: address compared to constituency interactions done manually by one of our analysts.

How did I implement this?

  • The first idea was to use a different queue to send the mails and then move the ticket to the normal one: the From: address is one of the things that can be configured at the queue level.
  • Then I tested whether I could override the From: header by including it in the Correspondence template. Yep, that works. Thus, idea two: modify the Scrips so that different ones trigger depending on who is running the transaction.
  • But it’s even simpler: the Template itself can contain Perl code, so I can just do the “if then else” thingy inside the Template.

In the end, it was just a one-liner in the right spot. Our “Correspondence”-Template looks like this now:

RT-Attach-Message: yes
Content-Transfer-Encoding: 8bit{($Transaction->CreatorObj->Name eq 'intelmq') ? "\nFrom: noreply\@example.at" : ""}

{$Transaction->Content()}

RT evaluates the code between the curly braces as Perl and inserts its return value: if the transaction was created by the “intelmq” automation user, the expression returns an additional From: header line; otherwise, it returns an empty string. The “Content-Transfer-Encoding: 8bit” header was needed because of our CipherMail instance; without it, we could get strange MIME encoding errors.

Categories
CERT Internet Pet Peeves

Roles in Cybersecurity: CSIRTs / LE / others

(Crossposted from the CERT.at blog)

Back in January 2024, I was asked by the Belgian EU Presidency to moderate a panel during their high-level conference on cyber security in Brussels. The topic was the relationship between cyber security and law enforcement: how do CSIRTs and the police / public prosecutors cooperate, what works here and where are the fault lines in this collaboration. As the moderator, I wasn’t in the position to really present my own view on some of the issues, so I’m using this blogpost to document my thinking regarding the CSIRT/LE division of labour. From that starting point, this text kind of turned into a rant on what’s wrong with IT Security.

When I got the assignment, I recalled a report I had read years ago: “Measuring the Cost of Cybercrime” by Ross Anderson et al. from 2012. In it, the authors try to estimate the effects of criminal actors on the whole economy: what are the direct losses, and what are the costs of the defensive measures put in place to defend against the threat? The numbers were huge back then, and, as various speakers during the conference mentioned, they have kept rising and rising; the figures for 2024 have reached obscene levels. Anderson et al. write in their conclusions: “The straightforward conclusion to draw on the basis of the comparative figures collected in this study is that we should perhaps spend less in anticipation of computer crime (on antivirus, firewalls etc.) but we should certainly spend an awful lot more on catching and punishing the perpetrators.”

Over the last years, the EU has proposed and enacted a number of legal acts that focus on the prevention of, detection of, and response to cybersecurity threats. Following the original NIS directive from 2016, we are now in the process of transposing and thus implementing the NIS 2 directive with its expanded scope and security requirements. This imposes a significant burden on huge numbers of “essential” and “important” entities, which have to invest heavily in their cybersecurity defences. I failed to find a figure in euros for this, only the EU Commission’s estimate that entities new to the NIS game will have to increase their IT security budget by 22 percent, whereas the NIS1 “operators of essential services” will have to add 12 percent to their current spending levels. And this isn’t simply CAPEX; there is a huge impact on operational expenses, including manpower, and on the flexibility of the entity.

This all adds up to a huge cost for companies and other organisations.

What is happening here? We would never ever tolerate that kind of security environment in the physical world, so why do we allow it to happen online?

The physical world

So, let’s look at the playing field in the physical environment and see how the security responsibilities are distributed there:

Defending against low-level crime is the responsibility of every citizen and organisation: you are supposed to lock your doors, you need to screen the people you’re allowing to enter, and the physical defences need to be sensible: your office doesn’t need to be a second Fort Knox, but your fences / doors / gates / security personnel need to be adequate to your risk profile. They should be good enough to either completely thwart normal burglars or at least impose such a high risk on them (e.g., the required noise and time for a break-in) that most of them are deterred from even trying.

One of the jobs of the police is to keep low-level crime from spiraling out of control. They are the backup that is called by entities noticing a crime happening. They respond to alerts raised by entities themselves, their burglar alarms and often their neighbours.

Controlling serious, especially organized crime is clearly the responsibility of law enforcement. No normal entity is supposed to be able to defend itself against Al Capone style gangs armed with submachine guns. This is where even your friendly neighbourhood cop is out of his league and the specialists from the relevant branches of the security forces need to be called in. That doesn’t mean that these things never happen at all: there is organized crime in the EU, and it might take a few years before any given gang is brought under control.

Defending against physical incursions by another country is the job of the military. They have the big guns; they have the training and thus means to defend the country from outside threats. Hopefully, they provide enough deterrence that they are not needed. Additionally, your diplomats and politicians have worked to create an international environment in which no other nation even contemplates invading your country.

We can see here a clear escalation path of physical threats and how the responsibility to deal with them shifts accordingly.

The online world

Does the same apply to cyber threats? And if not, why?

The basics

The equivalent of putting a simple lock on your door is basic cyber hygiene: firewalls, VPNs, shielding management interfaces, spam and malspam filters, decent patch management, as well as basic security awareness training. Hopefully, this is enough to stop being a target of opportunity, where script kiddies or mass exploitation campaigns can just waltz into your network. But there is a difference: the risk of getting caught simply for trying to hack into a network is very low. Thus, these actors can just keep on trying over and over again. Additionally, this can be automated and run on a global scale.

In the real world, intrusion attempts do not scale at all. Every single case needs a criminal on site; that limits the number of tries per night and incurs a risk of being caught at each and every one of them. The result is that physical break-in attempts are rare, whereas cyber break-in attempts are so frequent that the industry has decided that “successful blocks on FW or mail-relay level per day” is no longer a sensible metric for a security solution.

And just forget about reporting these to the police. Not all intrusion attempts are actually malicious (a good part of CERT.at’s data feeds on vulnerabilities is based on such scans), the legal treatment of such acts is unclear (especially on an international level), and the sheer mass of them overwhelms all law enforcement capabilities. Additionally, these intrusion attempts are usually cross-border, necessitating international police collaboration. The penalties for such activities (malicious scans, sending malspam, etc.) are also often too low to qualify for international efforts.

In the physical world, the perpetrators must be present at the site of their victims. We’re not yet at the stage where thieves and burglars send remote-controlled drones to break into houses and steal valuables – unless you count the use of hired and expendable low-level criminals as such. There is thus no question about jurisdiction or about the ability of the local police to actually enforce the law. Collecting clues and evidence might not always be easy, and criminals fleeing the country before being caught is a common trope in crime literature; nevertheless, there is a real possibility that the police can successfully track and then arrest the criminals.

The global nature of the Internet changes all this. As the saying goes: there is no geography on the Internet, everyone is a direct neighbour to everybody else. Just as any simple website is open to visitors from all over the world, it can be targeted by criminals from all over the globe. There is no need for the evil hackers to be on the same continent as their targets, let alone in the same jurisdiction. Thus, even if the police can collect all the necessary evidence to identify the perpetrators, it cannot just grab them off the street – they might be far out of reach of the local law enforcement.

And another point is different: physical security measures are usually quite static. There is no monthly patch day for your doors. I can’t recall any situation where a vendor of safes or locks had to issue an alert to all customers that they have to upgrade to new cylinders because a critical vulnerability was found in the current version (although watching LPL videos is a good argument that they should start doing that). Recent reports on vulnerabilities in keyless entry fobs for cars show that the lines between these worlds are starting to blur.

Organized crime

What about serious, organized crime? The online equivalent of a mob boss is a “Ransomware as a Service” (RaaS) group: they provide the firepower, they create an efficient ecosystem of crime, and they make it easier for low-level miscreants to start their criminal careers. Examples are Locky, REvil, DarkSide, LockBit, Cerber, etc. Yes, sometimes law enforcement, through long-running international collaborations between agencies, is able to crack down on larger crime syndicates. Those take-downs vary in their effectiveness. In some cases, the police manage to get hold of the masterminds, but often enough they just get lower- or mid-level people and some of the technical infrastructure, leading to just a temporary reprieve for the victims of the RaaS shop.

Two major impediments to the effectiveness of these investigations are the global nature of such gangs, and thus the need for truly global LE collaboration, and the ready availability of compromised systems to abuse and of malicious ISPs who don’t police their own customers. Any country whose police force is not cooperating effectively creates a safe refuge for the criminals. The current geo-political climate is not helpful at all. Right now, there simply is no incentive for Russian law enforcement to help their western colleagues by arresting Russian gangs targeting EU or US entities. Bullet-proof hosters are similar: they rent criminals the infrastructure from which to launch attacks. And often enough, the perpetrators simply use the infrastructure of one of their victims to attack the next.

The end result is that serious cybercrime is rampant. Companies and other organisations must defend themselves against well-financed, experienced, and capable threat actors. As it is, law enforcement is not able to lower the threat level enough to take that responsibility away from the operators.

Nation states

The next escalation step are the nation state attackers. They come in (at least) two types: Espionage and Disruption.

Espionage is nothing new; the employment of spies traces back to the ancient world. But just as with cybercrime, in the new online world it is no longer necessary to send agents on dangerous missions into foreign countries. No, a modern spy has a 9-to-5 desk job in a drab office building where the highest risk to his personal safety is a herniated disc caused by unergonomic desks and chairs.

It has been rare, but cyber-attacks with the aim of causing real-world disruption have appeared over the last ten years, especially in the Russia/Ukraine context. The impact can be similar to ransomware: the IT systems are disabled, and all the processes supported by those systems will fail. The main difference is that you can’t simply buy your way out of a state-sponsored disruptive attack. There have been cases where the attackers tried to inflict physical damage on either the IT systems themselves (the bricking of PCs in the Aramco attack) or on machinery controlled by industrial control systems.

This is a frustrating situation. We’re in a defensive mode, trying to block and thwart attack after attack from well-resourced adversaries. As recent history shows, we are not winning this fight – cybercrime is rampant and state-sponsored APTs are running amok. Even if one organisation manages to secure its own network, the tight interconnectedness with and dependency on others will leave it exposed to supply chain risks.

What can we do about this?

Such a situation reminds me of the old proverb: “if you can’t win the game, change the rules”. I simply do not see a simple technical solution to the IT security challenge. We’ve been sold these often enough under various names (firewalls, NGFW, SIEMs, AV, EDR, SOAR, cloud-based detection, sandboxes to detect malicious e-mail, …) and while all these approaches have some value, they fight the symptoms, not the cause of the problem.

There certainly are no simple solutions, and certainly none without significant downsides. I’m thus not proposing that the following ideas need to be implemented tomorrow. This article is just supposed to move the Overton Window and start a discussion outside the usual constraints.

So, what ideas can I come up with?

Really invest in Law Enforcement

The statistics show every year that cyber-crime is rising. This is followed by a ritual proclamation by the minister in charge that the police force tasked with prosecuting cyber-crime will be strengthened. The follow-through just isn’t there. Neither the police nor the judiciary is in any way staffed to really make a dent in cybercrime as a whole.

They are fighting a defensive war, happy with every small victory they can get, but overall they are simply not staffed at a level where they really could make a difference.

Denial of safe havens

Criminals and other attackers need some infrastructure from which to stage their attacks. Why do we tolerate this? Possible avenues for change are:

  • Revisit the laws that shield ISPs from liability regarding misbehaving customers. This does not need to be a complete reversal, but there need to be clear and strong incentives not to allow customers to stage attacks from an ISP’s network. See below for more details.
  • And on the other side, refuse to route the network blocks of ISPs who are known to tolerate criminals on their network. Back on Usenet, this was called the “UDP – Usenet Death Penalty”: if you didn’t police your own users’ misbehaviour on this global discussion forum, other sites would decide not to accept any articles from your cesspool any more.

The aim must be the end of “bulletproof” hosters. There have been prior successes in this area, but we can certainly do better on a global scale.

Don’t spare abused systems

Instead of renting infrastructure from bulletproof hosting outfits, criminals often hack into an unrelated organisation and then abuse its systems to stage attacks. Abused systems range from simple C2 proxies on compromised websites, DDoS amplifiers, and accounts for sending spam mails to elaborate networks of proxies on compromised CPEs.

These days, we politely warn the owners of the abused devices and ask them nicely to clean up their infrastructure.

We treat them as victims, and not as accomplices.

Maybe we need to adjust that approach.

Mutual assured cyber destruction

As bad as the cold war was, the concept of mutual assured destruction managed to deter the use of nuclear weapons for over 70 years. Right now, there is no functioning deterrence on the Internet.

I can’t say what we need to do here, but we must create a significant barrier to the employment of cyberattacks. Right now, most offensive cyber activities are considered “trivial offences”, maybe worth a few sternly worded statements, but nothing more. The EU Cyber Diplomacy Toolbox is a step in that direction, but is still rather harmless in its impact.

We can and should do more.

Broken Window Theory

From Wikipedia: “In criminology, the broken windows theory states that visible signs of crime, antisocial behavior, and civil disorder create an urban environment that encourages further crime and disorder, including serious crimes.”

To put this bluntly: As we haven’t managed to solve the Spam E-mail problem, why do we think we can tackle the really serious crimes?

Thus, one possible approach is to set aside some investigative resources in the law enforcement community to go after the low-level but very visible criminals. Take, for example, the long-running spam waves promoting ED pills. Tracking the spam source might be hard, but there is a clear money trail on the payment side. This should be an eminently solvable problem. Track those gangs down, make an example of them, and let every other criminal guess where the big LE guns will point next.

As a side effect, the criminal infrastructure providers who support both the low level and the more serious cybercrime might also feel the heat.

Offer substantial bounties

We always say that ransomware payments are fuelling the scourge. They provide RaaS gangs with fresh capital to expand their operations and are a great incentive for further activities in that direction.

So, what about the following: decree by law that if you’re paying a ransom, then you have to pay 10% of the ransom into a bounty fund that entices members of the ransomware gangs to turn in their accomplices.

Placing bounties on the heads of criminals is a very old idea and has proven effective at creating distrust and betrayal within criminal organisations.

Liability of Service Providers

Criminals routinely abuse the services offered by legitimate companies to further their misdeeds. Right now, the legal environment shields the companies whose services are abused from direct liability for the actions of their customers.

Yes, this liability shield is usually not absolute; often there is a “knowingly” or “repeatedly” or a “right to respond to allegations” in the law that absolves service providers from having to proactively search for illegal activities originating from their customers, or from having to react quickly to reports of them.

We certainly can have a second look at these provisions.

Not all service providers should be treated the same way: a small ISP offering to host websites has vastly fewer resources to deal with abuse than the hyper-scalers with stock-market valuations in the billions of euros. The impact of abuse scales about the same way: a systematic problem at Google is much more relevant than anything a small regional ISP can cause.

Spending the same few percentage points of their respective revenue on countering abuse can give the abuse-handling teams of big operators the necessary punch to really be on top of abuse on their platform, 24×7 and in real time.

We need to incentivise all actors to take care of the issue.

Search Engine Liability

By using SEO techniques or simply buying relevant advertisement slots, criminals sometimes manage to lure people looking for legitimate free downloads to fake download sites that offer backdoored versions of the programs the users are looking for.

Given that this is a very lucrative market for search engine operators, there should be no shortage of resources to deal with this abuse, either proactively or in near real time when it is reported.

And I really mean near real time. Given, e.g., Google’s search engine revenue, it is certainly possible to resolve routine complaints within 30 minutes, with 24×7 coverage. If they are not able to do that, make them both liable for damages caused by their inaction and subject to regulatory fines.

For smaller companies, the response time requirements can be scaled down to levels that even a mom & pop ISP can handle.

Content Delivery Network liability

The same applies to content delivery networks: CDNs are often abused to shield criminal activities. By hiding behind a CDN, it becomes harder to take down the content at the source, it becomes tricky to just firewall off the sewers of the Internet, and even simple defensive measures like blocking JavaScript execution by domain are disrupted if the CDN serves scripts from its own domains.

Cloudflare boasts that a significant share of all websites is now served using their infrastructure. Still, they only commit to a 24h reaction time on abuse complaints for things like investment fraud.

With great market-share comes great responsibility.

We really need to forcibly re-adjust their priorities. It might be a feel-good move for libertarians to enable free speech, and sometimes controversial content really does need protection. But Cloudflare is acting like a polluter who doesn’t really care what damage its actions cause to others.

Even in libertarian heaven, good behaviour is induced by internalizing costs, i.e., by making liabilities explicit.

Webhoster liability

The same applies to the actual hosters of malicious content. In the western world, we need to give webhosters a size-dependent deadline for reacting to abuse reports. For countries that do not manage to create and enforce similar laws, the rest of the world needs to react by limiting the reachability of non-conforming hosters.

Keeping the IT market healthy

Market monopolies are bad for security. They create a uniform global attack surface and distort the security incentives. This applies to software, to hardware/firmware, to the cloud, and to the ISP ecosystem.

What can the military do?

In the physical world, the military is the ultimate deterrent against nation-state transgressions. This is really hard to translate to cyber security. I mentioned MAD above. This is really tricky: what is the proper way to retaliate? How do we avoid a dangerous escalation of hack, hack-back, and hack-back-back?

Or should we relish the escalation? A colleague recently mentioned that some ransomware gang claimed to have hacked the US Federal Reserve and was threatening to publish terabytes of stolen data. I half-joked by replying: “If I were them, I’d start to worry about a kinetic response by the US.”

There are precedents. Some countries are well known to react violently if someone decides to take one of their citizens as hostage. No negotiations. Only retribution with whatever painful means are available.

Some cyber-attacks have an impact similar to violent terrorist attacks; just look at the knock-on effects on hospitals in London following the attack on Synnovis. So why should our response portfolio against ransomware actors rule out some of the options we keep open for terrorists?

Free and open vs. closed and secure

Overall, there seem to be two major design decisions that have a major cyber security impact.

First, the Internet is a content-neutral, global packet-switched network, for which there is only a very limited consensus regarding the rules that its operators and users should adhere to. And there are even fewer global enforcement possibilities for the little rules that we can agree on.

On one hand, this is good. We do not want to live in a world where the standards for the Internet are set and enforced by oppressive regimes. The global reach of the Internet is also a net positive: it is good that there is a global communication network that interconnects all humans. Just as the phone network connects all countries, the global reach of the Internet has the potential to foster communication across borders and can bring humanity together. We want dissidents in Russia and China to be able to communicate with the outside world.

On the other hand, this leads to the effects described in the first section: geography has no meaning on the Internet; thus, we’re importing the shadiest locations of the Internet right into our living rooms.

We simply can’t have both: a global, content agnostic network that reaches everybody on the planet, and a global network where the behaviour that we find objectionable is consistently policed.

The real decision is thus where to compromise: on “global”, by, e.g., declining to be reachable from the swamps of the Internet, or on “security”, by living with the dangers that arise from this global connectivity.

The important part here is: this is a decision we need to take. Individually, as organisations, and, perhaps, as countries.

We face a similar dilemma with our computing infrastructure: The concept of the generic computer, the open operating systems, the freedom to install third-party programs and the availability of accessible programming frameworks plus a wealth of scripting languages are essential for the speed of innovation. A closed computing environment can never be as vibrant and successful.

The ability to run arbitrary new code is a boon for innovation, but it also creates the danger of malicious code being injected into our systems. Retrofitting more control (application allowlisting, signed applications, strong application isolation, walled-garden app stores, …) can mitigate some of the issues, but will never reach the security properties of a system that was designed to run exactly one application and doesn’t even contain the foundations for running additional code.

Again, there is a choice we need to make: do we prefer open systems with all their dangers, or do we try to nail things down to lower the risks? This does not need to be a global choice: we should probably choose the proper flexibility-vs-security setting depending on the intended use of an IT system. A developer’s box need not have the same setting as a tablet for a nursing home resident.

Technical solutions – just don’t be easily hackable?

In an ideal world, our IT systems would be perfectly secure and would not be easy prey for cyber-criminals and nation-state actors. Yes, any progress in securing our infrastructure is welcome, but we cannot rely on this path alone. Nevertheless, there are a few low-hanging fruits we should pick:

Default configurations: Networked devices need to come with defaults that are reasonably secure. Don’t expect users to go through all configuration settings to secure a product that they bought. This can be handled via regulation.

Product liability is also an interesting approach. It is not trivial to get right, but certain classes of security issues are so basic that failing to protect against them amounts to gross negligence in 2024. For example, we recently saw several path traversal vulnerabilities in edge devices sold in 2024 by security companies with market caps of more than a billion dollars. Sorry, such bugs should not happen in this league.

The Cyber Resilience Act is an attempt to address this issue. I have no clue whether it will actually work out well.

While I hope that we will manage to better design and operate our critical IT infrastructure in the future, this is not where I’d put my money. We’ve been chasing that goal for the last 25 years, and it hasn’t been working out so great.

We really need to start thinking outside the box.

Categories
CERT Pet Peeves

On Cybersecurity Alert Levels

Last week I was invited to provide some input to a tabletop exercise for city-level crisis managers on cyber security risks and the role of CSIRTs. The organizers brought a color-coded threat-level sheet (based on the CISA Alert Levels) to the discussion and asked whether we also do color-coded alerts in Austria and what I think of these systems.

My answer to both questions was negative, and I think it might be useful if I explain my rationale here. The first answer was rather obvious and easy to explain; the second one needed a bit of thinking to be sure why my initial intuition about the document was so negative.

Escalation Ratchet

The first problem with color-coded threat levels is their tendency to be a one-way escalation ratchet: easy to escalate, but hard to de-escalate. I’ve been hit by that mechanism before during a real-world incident, and that made me wary of this effect. Basically, the person who raises the alert takes very little risk: if something bad happens, she did the right thing, and if the danger doesn’t materialize, then “better safe than sorry” is proclaimed, and everyone is happy nevertheless. In other words, raising the threat level is a safe decision.

On the other hand, lowering the threat level is an inherently risky decision: if nothing bad happens afterwards, there might be some “thank you” notes, but if the threat materializes, then the blame falls squarely on the shoulders of the person who gave the signal that the danger was over. Thus, in a CYA-dominated environment like the public service, it is not a good career move to greenlight a de-escalation.

We’ve seen this process play out in the non-cyber world over the last years; examples include

  • Terror threat level after 9/11
  • Border controls in the Schengen zone after the migration wave of 2015
  • Coming down from the pandemic emergency

That’s why I’ve always pushed for clear de-escalation rules to be in place whenever we raise the alarm level.

Cost of escalation

For threat levels to make sense, every level above “green” needs a clear indication of what the recipients of the warnings should be doing at that threat level. In the example I saw, there was a lot of “Identify and patch vulnerable systems”. Well, Doh! That is what you should be doing at level green, too.

Thus, relevant guidance at higher levels needs to be more than “protect your systems and prepare for attacks”. That’s a standing order for anyone doing IT operations; as an escalation, it is useless advice. What people need to know is what costs they should be willing to pay for better preparation against incidents.

This could be a simple thing like “We expect a patch for a relevant system to be released outside our office hours; we need to have a team on standby to react as quickly as possible, and we’re willing to pay for the overtime work to have the patch deployed ASAP.” Or the advice could be “You need to patch this outside your regular patching cadence; plan for a business disruption and/or night shifts for the IT people.” At the extreme end, it might even be “We’re taking service X out of production; the changes to the risk equation mean that its benefits can’t justify the increased risks anymore.”

To summarize: if a preventative security measure has no hard costs, then you should have implemented it a long time ago, regardless of any threat level board.

Counterpoint

There is definitely value in categorizing a specific incident or vulnerability in some sort of threat level scheme: a particularly bad patch day, or an out-of-band patch release by an important vendor, is certainly a good reason for the response to be more than business as usual.

But a generic threat level increase, without concrete vulnerabilities listed or TTPs to guard against? That’s just a fancy way of saying “be afraid”, and there is little benefit in that.

Postscript: Just after posting this article, I stumbled on a fediverse post making almost the same argument, just with April 1st vs. the everyday flood of misinformation.

Categories
Internet Pet Peeves

Kafka Lives at Lassalle 9

I am probably not the only tech-savvy son / son-in-law / nephew who has to take care of the older generation’s communication technology. In this role, I have just experienced something sufficiently absurd.

It started with A1 announcing that it was going to migrate the telephone line of an 82-year-old lady to VoIP. The “A1 Kombi” product (POTS + ADSL) that had been running there since forever is being discontinued; we have to switch. OK, this didn’t really come as a surprise: all over Europe, classic analogue telephony is being switched off step by step in order to finally get rid of the old technology.

So I get to call A1, and because the lady is rather attached to her old phone number, we agree on a switch to the smallest package that still includes telephony: “A1 Internet 30”. 30/5 Mbit/s sounds quite nice on the phone, so we order the migration (CO13906621) on 2023-11-18. But if you read the contract summary that you receive by e-mail, it looks like this:

Since the performance of the old ADSL line had been rather mixed and unstable anyway (yes, the TASL, the local loop, is long), I expect more like the minimum value, which is a factor of 2 or 5 less than advertised. The feeling of having been taken for a ride here leads to the conversation: “Do you really need the old number? Most of your friends in the village can only be reached on their mobiles these days anyway.”

OK, so we drop the “telephone with the old number” requirement, exercise the right of withdrawal under the Austrian distance-selling law (Fern- und Auswärtsgeschäfte-Gesetz), and look around for something more sensible. In principle, “A1 Basis Internet 10” sounds appropriate for the needs here, but if you look at the service description, only “0.25/0.06 Mbit/s”, i.e., 256 kbit/s down and 64 kbit/s up, are actually guaranteed. Meh. That’s not going to work, so we cancelled the migration and terminated the old contract at the end of the year – which is also the announced POTS switch-off date.

The withdrawal and the termination were accepted over the phone and also confirmed by e-mail.

So far, so good. By now, a 4G modem with a flat-rate data plan and a VoIP phone is installed there, which works well by and large.

A1’s next move, however, was one I had not expected: the final invoice after the termination, from mid-January, contained the following item:

“Residual fee for early contract termination: 381 EUR”

And since 82-year-olds are sometimes not the best e-mail readers, this was only noticed when the invoice was actually debited from the bank account.

This “residual fee” makes no sense in several respects: the “A1 Kombi” contract had been running for more than 10 years, and when placing the initial order I had explicitly asked whether any minimum contract periods were active. And the whole thing only started because A1 is discontinuing “A1 Kombi” – yet now they want to keep billing us for exactly this discontinued product until the end of 2025.

So I call the A1 hotline, assuming that this misunderstanding can be cleared up quickly; probably the cancellation of the migration simply reset the contract start date in their system. How wrong one can be:

  • For ex-customers, absolutely nothing can be done by phone any more. The guy on the hotline flatly refused to even look at the invoice.
  • Invoice disputes have to be submitted in writing. When I asked for the right e-mail address for this, the answer was: “That only works via the chatbot.”
  • So I kept telling “Kara” that her answers were not helping me until I got a human, to whom I then submitted the written dispute via upload.
  • After consulting the RTR arbitration body, we also sent the dispute in writing by registered mail.

We have a problem here.

A corporation debits 400+ EUR from a pensioner’s account because of an error in their billing, and on the phone they completely refuse to even look at it. According to the RTR, they have 4 weeks to respond to the written complaint.

Yes, we could have the bank reverse the direct debit, but then (according to the bank) A1 would quickly report the case to the KSV credit agency, and we don’t want that hassle either. Class actions don’t really exist in Austria. Guess who lobbies against them. Damages for such errors? Nope.

As idiotic as US law often is, I really do miss the threat of high punitive damages here. Where is the feedback loop that keeps big companies from turning into a complete service wasteland?

If I accidentally take the wrong coat from a cloakroom and answer the real owner who confronts me about it with nothing but “talk to my chatbot or send me a letter, you’ll get an answer in 4 weeks”, I will get a problem with criminal law.

How do we solve something like this in Austria? You work your network. Let’s see how long it takes after this blog post (plus distribution of the link to the right people) until someone in the right position says: “This really can’t be, dear colleagues, fix it now.”

Update 2024-02-09: A small escalation via A1’s press spokesperson helped. The 1000+ contacts on LinkedIn are good for something after all.

Update 2024-04-05: I take a quick look at the A1 homepage, and what do I see?

Court order

Apparently the VKI (the Austrian Consumer Information Association) had also had enough of how dubiously A1 was advertising its bandwidths.

Categories
CERT

A classification of CTI Data feeds

(Crossposted on the CERT.at blog.)

I was recently involved in discussions about the procurement of Cyber Threat Intelligence (CTI) feeds. This led to discussions on the nature of CTI and what to do with it. Here is how I see this topic.

One way to structure and classify CTI feeds is to look at the abstraction level at which they operate. As Wikipedia puts it: 

  • Tactical: Typically used to help identify threat actors (TAs). Indicators of compromise (such as IP addresses, Internet domains or hashes) are used and the analysis of tactics, techniques and procedures (TTP) used by cybercriminals is beginning to be deepened. Insights generated at the tactical level will help security teams predict upcoming attacks and identify them at the earliest possible stages.
  • Operational: This is the most technical level of threat intelligence. It shares hard and specific details about attacks, motivation, threat actor capabilities, and individual campaigns. Insights provided by threat intelligence experts at this level include the nature, intent, and timing of emerging threats. This type of information is more difficult to obtain and is most often collected through deep, obscure web forums that internal teams cannot access. Security and attack response teams are the ones that use this type of operational intelligence.
  • Strategic: Usually tailored to non-technical audiences, intelligence on general risks associated with cyberthreats. The goal is to deliver, in the form of white papers and reports, a detailed analysis of current and projected future risks to the business, as well as the potential consequences of threats to help leaders prioritize their responses.

With tactical CTI, there is a reasonable chance that it can be shared on a machine-to-machine basis with full semantic information, which makes it possible to automate its processing for detection and prevention purposes. Many commercial security devices are sold with a subscription to the vendor’s own data feeds. This ranges from simple anti-spam solutions through filters for web proxies to rules for SIEMs.

While it is possible to encode operational CTI in standardized data exchange formats like STIX2, it is much harder for automated systems to operationalize this information. For example, what automated technical reaction is possible to “threat actor X is now using compromised CPEs in the country of its targets for C2 communication”? Yes, one can store that kind of information in OpenCTI (or a similar Threat Intelligence Platform (TIP)) and map it to the ATT&CK framework. That can be valuable for the human expert to plan defences or to react better during incidents, but it is not detailed enough for automated defence.

With strategic CTI, we are on the human management layer. This is never designed for automated processing by machines.

Types of IOCs

Focusing on the technical layer, we find that there are a number of different types of information encoded in the data feeds. One way to look at this is the Diamond Model of intrusion analysis, which is built around information on adversary, capability, infrastructure, and victim. While this is a very valuable model for intrusion analysis, it is too complex for a simple categorization of CTI feeds.

I propose the following three basic types:

Type 1: Attack Surface Information

Many of the feeds from Shadowserver fall into this category. Shodan data can also be a good example. There are now a bunch of companies focusing on “cyber risk rating”, which all try to evaluate the internet-visible infrastructure of organizations.

Examples:

  • “On IP address A.B.C.D, at time X, we detected a Microsoft Exchange server running a version that is vulnerable to CVE-202X-XXXX.”
  • “The time server at IP address Y can be abused as a DDoS reflector.”
  • “On IP address Z, there is an unprotected MongoDB reachable from the Internet.”

Notable points are:

  • This is very specific information about a concrete system. Usually, it is very clear who is responsible for it.
  • There is no information about an actual compromise of the listed system. The system might be untouched or there may already be a number of webshells deployed on it.
  • There is no information about an attacker.
  • This is sensitive (potentially even GDPR-relevant) information.
  • This information is (almost) useless to anybody but the owners of the system.
  • Thus, the coordinating CSIRT should pass this information on to the maintainers of this system and to nobody else. CERT.at is usually tagging these events with “vulnerable / vulnerable system” or “vulnerable / potentially unwanted accessible service”.

Expected response from system owner: 

  • Mitigate the threat by patching / upgrading / removing the system, or maybe even accept the risk (e.g., “yes, we really want to keep telnet enabled on that server”).
  • Verify that the system has not been breached yet.

Type 2: Threat Actor IOCs

This is the opposite: the information is solely about the threat actor and the resources this group is using, but there is no clear information on the targets. Typical information contained in these IOCs is:

  • The domain-name of a command & control (C2) server of the TA
  • An IP address of a C2 server
  • Filename and/or hash of malware used by the TA
  • Email subject, sender and sending IP address of a phishing mail
  • Mutex names, registry-keys or similar artefacts of an infection
  • URL-pattern of C2 connections

Example:

  • A Remcos RAT campaign, detected on 2023-06-14, used:
    • Mutex: Rmc-MQTCB0
    • URI: /json.gp
    • Email attachment: Shipment_order83736383_document_file9387339.7z
    • MD5: 2832aa7272b0e578cd4eda5b9a1f1b12
    • Filename: Shipment_order837363.exe

Notable points are:

  • This is detailed information about a threat actor’s infrastructure, tools, and procedures.
  • There is often no information about targets of these attacks. Sometimes, some targeting information is known, like “This TA usually attacks high-tech companies”.
  • This information is potentially useful for everybody whom that actor might target.
  • Unless one thinks that attacker IP-addresses deserve GDPR-protection, this data has no privacy implication.
  • Thus, the coordinating CSIRT should pass this information on to all constituents who are capable of operationalizing such CTI. CERT.at usually does not send this kind of information pro-actively to all constituents; instead, we operate a MISP instance which holds these IOCs. Security automation on the constituents’ side is welcome to use the MISP APIs to fetch and process the IOCs (see the sketch after this list).
  • If the targeting of the TA is sufficiently well known and specific, CERT.at will pass on the IOCs directly to the constituent’s security team.
  • In rare cases, the TA is abusing infrastructure of one of our constituents. In that case, we have a mix with the next type of CTI.
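
As an illustration, here is a minimal sketch of such a constituent-side fetch via the MISP REST API. The instance URL, the authentication key, and the chosen filters are placeholder assumptions, not our actual setup; depending on the MISP version, the PyMISP library may be the more convenient choice.

import requests

MISP_URL = "https://misp.example.at"   # hypothetical MISP instance
API_KEY = "YOUR-MISP-AUTH-KEY"         # per-user key from the MISP UI

def fetch_recent_iocs(days: int = 1) -> list:
    # Fetch actionable (to_ids) attributes published in the last `days` days.
    response = requests.post(
        f"{MISP_URL}/attributes/restSearch",
        headers={
            "Authorization": API_KEY,
            "Accept": "application/json",
            "Content-Type": "application/json",
        },
        json={
            "returnFormat": "json",
            "to_ids": 1,                      # only attributes flagged for detection
            "publish_timestamp": f"{days}d",  # sliding time window
            "type": ["ip-dst", "domain", "md5", "url"],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["response"]["Attribute"]

if __name__ == "__main__":
    for attr in fetch_recent_iocs():
        # Feed these into proxy blocklists, EDR, or SIEM rules.
        print(attr["type"], attr["value"])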

Expected response from system owner:

  • Add the IOCs to any sort of incident prevention system, e.g., filter lists in proxies, EDR or AV software.
  • Add the IOCs to the incident detection system, e.g., create suitable rules in SIEMs.
  • Ideally, also perform a search in old logs for the newly acquired IOCs (a minimal sketch of such a retro-search follows below).
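
Such a retro-search does not have to be fancy. For plain-text log archives, even a simple scan along the following lines can give a first answer; the log location and the IOC values are made up for illustration, and a real deployment would rather use the SIEM’s query language:

from pathlib import Path

# Hypothetical IOC values, e.g., taken from the MISP query sketched above.
IOCS = {"198.51.100.23", "evil-c2.example", "/json.gp"}

def retro_search(logdir: str) -> None:
    # Scan archived plain-text logs for any occurrence of known IOCs.
    for logfile in Path(logdir).rglob("*.log"):
        with logfile.open(errors="replace") as fh:
            for lineno, line in enumerate(fh, start=1):
                for ioc in IOCS:
                    if ioc in line:
                        print(f"{logfile}:{lineno}: hit for {ioc}")

retro_search("/var/log/archive")   # made-up path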

Type 3: Infection data

Sometimes we receive cyber threat information that is very specific and concerns a live incident inside a constituent’s network.

Examples are:

  • “We detected at timestamp X a webshell placed on a Citrix server. IP-address = A.B.C.D, path = /logon/LogonPoint/uiareas/mac/vkb.php”
  • “Our darknet monitoring detected that someone is selling VPN credentials for user@example.com on the platform X”
  • “After a takedown of botnet X we are monitoring botnet drone connections to the former C2 servers. On [timestamp], the IP address A.B.C.D connected to our sinkhole.”
  • “We managed to get access to the infrastructure of threat actor X. According to the data we found there, your constituent Y is compromised.”
  • “Please find below information on IPs geolocated in your country which are most likely hosting a system infected by SystemBC malware. […] Timestamp, IP-address, hostname, c2 ip-address”
  • “There are signs of malicious manipulations on the Website of domain X, there is a phishing page at /images/ino/95788910935578/login.php”

Notable points are:

  • This is usually very specific information about a live incident involving a concrete system.
  • In the best case, the information is good enough to trigger a successful investigation and remediation.
  • The threat actor is often, but not always, named.
  • This is sensitive (potentially even GDPR-relevant) information.
  • This information is (almost) useless to anybody but the owners of the system.
  • This information can be very time-sensitive: a quick reaction can sometimes prevent a ransomware incident.
  • Thus, the coordinating CSIRT should quickly pass this information on to the maintainers of this system and to nobody else. CERT.at is usually tagging these events with “intrusions / system-compromise” or “fraud / phishing”.

Expected response from system owner:

  • Start the local incident response process.
  • Clean up the known infection and investigate the possibility of additional compromises in the affected network (lateral movement?).

Tooling

CERT.at is using IntelMQ to process feeds of type 1 and 3. CTI feeds of type 2 are handled by our MISP installation.
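
To illustrate how events are routed between these two pipelines, here is a toy dispatcher based on the classification tags mentioned above. This is a simplified sketch for illustration, not our production code:

# Route a CTI event by its classification taxonomy tag (sketch).
TYPE1_TAGS = {
    "vulnerable / vulnerable system",
    "vulnerable / potentially unwanted accessible service",
}
TYPE3_TAGS = {"intrusions / system-compromise", "fraud / phishing"}

def route_event(event: dict) -> str:
    # Types 1 and 3 concern a concrete system: notify its owner via IntelMQ.
    # Type 2 (threat actor IOCs) goes into MISP for all constituents.
    tag = event.get("classification", "")
    if tag in TYPE1_TAGS or tag in TYPE3_TAGS:
        return "intelmq"
    return "misp"

print(route_event({"classification": "fraud / phishing"}))  # -> intelmq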

Categories
CERT Internet

Boeing vs. Microsoft

2019: Boeing used a single “angle of attack” sensor to control the automatic pitch control software (MCAS). A software upgrade to use both AOA sensors was sold as an optional extra. Result: two planes crashed, killing 346 people. The damages and the subsequent grounding of the whole 737 MAX fleet cost Boeing a few billion dollars.

2023: Microsoft sold access to advanced logs in their MS-365 service as an add-on premium product. Luckily, someone did buy it and used the data provided to detect a significant intrusion into the whole Azure infrastructure.

For good or bad, we do not treat security as seriously as safety.

Yet.

I think this will change for critical infrastructure.

Microsoft should think hard about what that would mean. If one can ground a fleet of planes for safety reasons, I see no fundamental reason not to temporarily shut down a cloud service until its operator can prove that a design flaw has been fixed.

We just need a regulatory authority with a bit more desire to be taken seriously.

Categories
CERT

A network of SOCs?

Crossposted on the CERT.at blog.

Preface

I wrote most of this text quickly in January 2021, when the European Commission asked me to apply my lessons learned from the CSIRTs Network to a potential European network of SOCs. During 2022, the plans for SOC collaboration were toned down a bit; the DIGITAL Europe funding scheme proposes multiple platforms where SOCs can work together. In 2023, the newly proposed “Cyber Solidarity Act” builds upon this and codifies the concepts of a “national SOC” and “cross-border SOC platforms” into an EU regulation.

At the CSIRTs Network meeting in Stockholm in June 2023, I gave a presentation on the strengths and flaws of the CSoA approach. A position paper / blog post on that is in the works.

The original text (with minor edits) starts below.

Context

The NIS Directive established the CSIRTs Network (CNW) in 2016, and the EU Cybersecurity Strategy from 2020 tries to do something similar for SOCs (Security Operation Centres).

I was asked by DG-CNECT to provide some lessons identified from the CNW that might be applicable to the SOC Network (SNW).

The following points are not a fully fleshed out whitepaper, instead they are a number of propositions with short explanations.

The most important point is that one cannot just focus on the technical aspects of SOC collaboration. That is the easy part. We know which tools work. The same stack that we developed for the CSIRTs Network can almost 1:1 support SOC networks.

Our colleagues from CCN-CERT presented the Spanish SOC Network at various meetings recently. Yes, there was one slide with their MISP setup, but the main content was the administrative side and the incentive structure they built to encourage active participation by all members.

Human Element

Trust

Any close cooperation needs a basic level of trust between participants. The more sensitive the topic and the more damage could potentially be done by the misuse of information shared between the organisations, the more trust is needed for effective collaboration.

There must be an understanding that one can rely on others to keep secrets, and that they will actually communicate if they learn something important for the partner.

Trust is not binary

Trust is not a binary thing: there is more than “I trust” or “I don’t trust”; it always depends on the concrete case whether you trust someone enough to cooperate in this instance.

Trust needs Time

Some basic level of trust is given to others based on their position (e.g., I trust the baker to sell me edible bread; I trust every police officer to do the basics correctly), but only repeated interactions with the same person/organisation increase trust over time. (See “The Evolution of Cooperation”.)

Thus, one needs to give all these networks time to establish themselves and the trust relationships.

These things really take time. We are talking about years.

Physical meetings (incl. social events) help

Bringing people together is very helpful to bootstrap cooperation.

You can’t legislate Trust

There are limited possibilities to declare ex cathedra that one has to trust someone. It might work to a certain degree if people are forced by external events to collaborate (e.g., call the police if you have to deal with a significant crime; or reporting requirements to authorities; or handing your kids over to day-care/school/ …).

Even in these cases, these organisations have to be very careful about their reputation: misuse of their trust positions will significantly affect how much trust is given, even under duress.

Persons or Teams

Trust can be anchored either to persons or to organisations. I might trust a certain barber shop to get my haircut right, but I’ll prefer to go to the same person if they got it right the last time.

Experience has shown that it is possible to establish institutional trust: if I know that Team X is competently run, then I will not hesitate to use the formal contact point of that team.

Still: if something is really sensitive, I will try to reach the buddy working for that other team with whom I have bonded over beer and common incidents.

Group Size

Close cooperation in groups cannot be sustained if the number of participants increases beyond a certain limit. This has been observed in multiple fora, amongst them FIRST, TF-CSIRT, and ops-t (which was actually an experiment in scaling trust groups).

As a rule of thumb: whenever you cannot have every member of the group present their current work/topics/ideas/issues during a meeting, the willingness to share openly decreases significantly. This puts the limit at about 15 to 20 participants. If lower levels of cooperation are acceptable, then group sizes can be larger.

Corollary: Group Splits

If a group becomes too big, then there is a chance that core members will split off and create a new, smaller forum for more intense collaboration.

This is similar to what happens with groups of animals: if one pack becomes too big to be viable, it will split up.

Adding members

Organic growth from within the group works best.

An external membership process (as in the CNW, where existing members have no say over the inclusion of a new team from another EU Member State) can be very detrimental to the trust inside the group.

Motivation

Cost

Any level of participation in a network of peers comes with costs. Nobody in this business has spare time for anything. Even passive participation, via the odd telephone conference or just reading emails, costs time and is thus not free.

Active participation, be it travelling to conferences, working on common projects, manually forwarding information, or setting up machine-to-machine (M2M) communication, can carry significant costs. These costs must not exceed the benefits of participating in the network.

Corollary: Separate tooling is detrimental to sharing

Sharing information into a network must be as low-friction as possible. If an analyst has to re-enter information about an incident into a different interface in order to share it, chances are high that it will not happen. Optimally, the sharing option is built into the core systems and the overhead of sharing is just selecting with whom.
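
As an illustration of what "built into the core systems" can look like, here is a minimal sketch assuming a MISP instance as the sharing platform (the URL, API key, and indicator values are placeholders):

    # Minimal sketch: publishing one indicator to a MISP community
    # via PyMISP. URL, key, and values are placeholders.
    from pymisp import PyMISP, MISPEvent

    misp = PyMISP("https://misp.example.org", "YOUR_API_KEY")

    event = MISPEvent()
    event.info = "C2 server seen during incident #1234"
    event.distribution = 3  # share with all communities
    event.add_attribute("ip-dst", "198.51.100.23", comment="C2 server")

    misp.add_event(event)  # one call, and the data is shared

If sharing is a single call like this inside the analyst's normal workflow, the friction argument largely disappears.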

Benefits

The flip side is often not so easy to quantify: what are the concrete benefits of collaboration? If the bean-counters ask to justify the cost, there should be clear business reasons why the costs are worth it. "Interesting discussions" and "being a good corporate citizen" are not a sustainable long-term motivation.

It must be as clear as possible what value each participant will get from such a network.

Beware of freeloaders and the “Tragedy of the Commons” effect.

Peers

Networks work best between organisations that are comparable in size, in their jobs, and in their position in the market. Their technological and informational needs should be roughly the same, and they should face similar tasks and challenges. For example, the SOC of VW and the SOC of Renault should have roughly the same job, and thus an exchange of experiences and data might be mutually beneficial.

Vendor/customer mix can kill networks

If two members of a network are actually in a vendor/customer relationship in terms of cyber security, this is a strong detriment to collaboration. Even just a potential sale is tricky: if one member is describing his problem, someone else should not be in the position to offer his own commercial product or service to address that problem.

I have seen this work only if the representative of the vendor can clearly separate his role as network partner from his pre-sales job. This is the exception, not the rule.

Competition (1)

Ideally, the members of the network should not be in competition with one another. Example: the security team of Vienna's city hospitals and the equivalent team of the Berlin Charité are a best case: their hosting organisations work in the same sector, but there is absolutely no competition for customers between the two.

If the hosting organisations are actually competing with each other (see the VW vs. Renault example from above, or different banks), then cooperation on IT security is not a given. It is not impossible either, as competitors often collaborate on lobbying, standardization, or interconnection. One positive example I have seen are the Austrian banks, who cooperate on e-banking security based on the premise that customers will not differentiate between "e-banking at Bank X is insecure" and "e-banking is insecure".

Competition (2)

Even trickier is the case of SOCs that do not just protect the infrastructure of their respective hosting organisation, but also offer their services commercially to any customer ("SOC outsourcing"). Anything one SOC shares with the network then potentially helps a direct competitor. Example: both Deloitte and Cap Gemini offer SOC outsourcing and threat intel reporting. Their knowledge base is their competitive advantage; why should they share it freely with a competitor when they are selling the same information to customers?

Such constellations are extremely difficult, but not impossible to manage.

The trick to dealing with competition in such networks is to move the collaboration to a purely operational/technical layer. These people are used to dealing with their peers in a productive way.

Alignment of interest

This all boils down to

  • Is it a good commercial decision for my SOC to participate in the network?
  • Is it a good commercial decision to share data into the network?

Resources

All members must make the clear management decision to participate in such a network and must allocate staff to it. In some ways, such networks operate a bit like amateur sports clubs or open-source projects: they thrive on the voluntary work done by their members. I have seen too many cases where such networks failed simply because members lost interest and did not invest the time and effort to run them effectively.

Running a network

Secretariat

While not strictly necessary, a paid back-office increases the chances of success significantly. Someone has to organize meetings, write minutes, keep track of memberships, produce reports, and provide an external point of contact.

Doing this on a voluntary basis might work for very small and static networks, where a round-robin chair role can succeed.

Connecting people

Bringing people together is the foundation of a collaboration network. Only where the network merely distributes information from a handful of central sources to all members (i.e., a one-way information flow) might this not be needed. It can be done by (in order of importance):

  • Physical meetings (conferences, workshops, …)
  • Continuous low-friction instant messaging
  • Mailing-lists
  • Web-Forums

Generic central tooling

Any network, regardless of topic, needs a few central tooling components:

  • A directory of members (preferably with self-service editing)
  • A file repository
  • An administrative mailing-list
  • A topical mailing-list
  • An instant messaging facility

A decent Identity and Access Management (IAM) solution covering all these tools is recommended (but not strictly necessary in the first iteration). The toolset created for the CSIRTs Network (MeliCERTes 2) can help here.

Exchange of Information

In the end, the main motivation of such a network is information sharing, with the intention of making members more effective at their core task. Here are some thoughts on that aspect:

Compatible levels of Maturity

If members are at very different levels of technological and organisational maturity, then any information exchange is of limited value. A common baseline is helpful.

Human to Human

This is the easiest information exchange to get going, and some topics really need to be covered on the human layer: people can talk about experiences, about cases, about what works and what does not.

It is also possible to exchange Cyber Threat Intelligence (CTI) between humans: the typical write-up of a detected APT campaign, including all the Indicators of Compromise (IoCs) found during the incident response, is exactly that.

This sounds easy, but it is costly in terms of human time. On the receiving side, the SOC needs to operationalize the information contained in the report so that its automated systems can detect a similar campaign in the local constituency.
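
To give a flavour of that operationalization step, here is a minimal sketch of extracting machine-readable IoCs from a human-written report (the report text is invented; real extractors also need defanging rules, allow-lists, and validation):

    # Minimal sketch: pulling machine-readable IoCs out of a
    # human-written report so they can feed a blocklist or SIEM.
    import re

    report = """The actor staged payloads on update-check.example.net
    and used 203.0.113.42 as its C2 server."""

    ips = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", report)
    domains = re.findall(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b", report)

    print(ips)      # ['203.0.113.42']
    print(domains)  # ['update-check.example.net']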

Information Management

The way a SOC gathers, stores, correlates, and de-duplicates the CTI that powers its detection capability is a core element of the SOC's internal workflow. Its maturity in this respect drives the possibilities for collaboration on CTI.

One (not uncontroversial) theory on this topic is the "Pyramid of Pain" concept from David Bianco, which describes the levels of abstraction in CTI. The lower levels (hash values, IP addresses) are easy for SIEMs to detect, but also trivial for the threat actor to change. The challenge for SOCs is to operate at a higher level than what the threat actor is prepared to change frequently.
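
As a toy contrast between the bottom and the top of the pyramid (a minimal sketch with made-up event fields; real detection logic lives in a SIEM, not in a script):

    # Bottom of the pyramid: match an exact hash, which the actor
    # can rotate trivially.
    KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}

    def hash_detect(event: dict) -> bool:
        return event.get("file_md5") in KNOWN_BAD_HASHES

    # Higher up: match a behaviour (TTP) that is expensive to change,
    # e.g. an Office process spawning a shell.
    OFFICE = {"winword.exe", "excel.exe"}
    SHELLS = {"cmd.exe", "powershell.exe"}

    def ttp_detect(event: dict) -> bool:
        return (event.get("parent_process") in OFFICE
                and event.get("process") in SHELLS)

    event = {"parent_process": "winword.exe", "process": "powershell.exe"}
    print(hash_detect(event), ttp_detect(event))  # False True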

CTI M2M

In theory, SOCs should be able to cross-connect their CTI systems to profit from each other's learnings and thus increase the overall detection capability of the SOC network. Regrettably, this is non-trivial on multiple fronts:

Data protection / customer privacy

It must be ensured that no information leaks out about the customer at which the CTI was found during incident response. Sometimes this is easy and trivial, sometimes it is not. Thus, unless the SOC is very mature at entering CTI into its systems, people will want to check manually what is being shared.

Data licensing

Many SOCs buy CTI data from commercial sources. Such data needs to be excluded from automatic data sharing.
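
A pre-share filter is one way to enforce both the privacy and the licensing constraint (a minimal sketch; the tag conventions "customer:" and "license:commercial" are invented for illustration, not a standard):

    # Minimal sketch of a pre-share filter: drop attributes that are
    # tied to a customer or that carry commercially licensed CTI.
    BLOCKED_TAG_PREFIXES = ("customer:", "license:commercial")

    def shareable(attribute: dict) -> bool:
        return not any(tag.startswith(BLOCKED_TAG_PREFIXES)
                       for tag in attribute.get("tags", []))

    attributes = [
        {"value": "198.51.100.23", "tags": ["tlp:green"]},
        {"value": "evil.example.com", "tags": ["customer:acme"]},
        {"value": "badhash123", "tags": ["license:commercial"]},
    ]
    outbound = [a for a in attributes if shareable(a)]
    print([a["value"] for a in outbound])  # ['198.51.100.23']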

Data compatibility

While there are a number of standards for CTI data exchange (e.g., STIX/TAXII, MISP, or Sigma rules), this is far from being a settled topic, especially if you want to move up the Pyramid of Pain.
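
To illustrate one of these formats, here is a minimal sketch expressing a single network IoC as a STIX 2.1 Indicator with the python-stix2 library (the values are placeholders):

    # Minimal sketch: one IoC as a STIX 2.1 Indicator object,
    # ready for exchange, e.g. over TAXII.
    from stix2 import Indicator

    indicator = Indicator(
        name="C2 server from incident #1234",
        pattern="[ipv4-addr:value = '198.51.100.23']",
        pattern_type="stix",
    )
    print(indicator.serialize(pretty=True))

Note how the pattern still encodes a plain IP address: the standards handle the lower levels of the pyramid well, while higher-level behaviour is much harder to express portably.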

Sharing tools

In addition to sharing information, members of the network can also share the tools they have written to perform various aspects of a SOC's job.

Categories
Austria Politics

A Few Theses on Current Draft Legislation

(I wrote this blog post in 2017 for the CERT.at blog. This here is essentially a backup copy.)

The topic of "LE going dark in the age of encryption" is boiling up once again, and corresponding draft laws were hastily introduced just before the snap elections. From a technical perspective, I want to throw a few arguments into the discussion, but I will limit myself purely to the aspect of surveillance despite encryption. The other points (registration of and access to cameras, access to ISP data, …) are also interesting, but do not touch the core business of CERTs as much, so I will not comment on them wearing my CERT.at hat.


We live in a networked world; a local viewpoint is not enough.

True end-to-end encryption in internet-based communication platforms (messengers, VoIP, …) prevents their operators from disclosing the communication content of their users. Well-implemented encryption for hard disks and mobile phones means the same thing: the manufacturer cannot simply unlock its customers' devices.

Of course it would be very helpful for the FBI and other Western police forces if this were not so, and they could always count on the cooperation of Apple, Microsoft, Google, Facebook & co to get at the data and communications of suspects. That, however, has two seriously bad consequences:

  • The same capabilities would then also be available to the police of almost every state in this world. One may still trust the legal safeguards in Austria, but there are enough states (and we need not travel far from our blessed isle) in which such tools in the hands of the state are used for repression, industrial espionage, and corruption.
  • For the vendors concerned, playing along means massive reputational damage. Perhaps a monopolist can just barely afford that; companies operating in a free, global market can only refuse.

In other words: we cannot prevent the same tool with which an Austrian company communicates securely with its branch offices and employees abroad from also being used by a wrongdoer here to (supposedly) evade law enforcement.

You cannot turn back the wheel of time

The internet is built on the idea that the transport network need not know anything about the applications, but can confine itself purely to transporting data. This unleashed a force of innovation that completely steamrolled competing approaches (does anyone still remember the ISDN vision from the 80s?).

Much the same happened with PCs and mobile phones: only the ability to run almost arbitrary programs on these computers made them truly interesting, and so they became platforms for the most diverse applications and uses.

We live in a world in which "widely available, generic means of communication" and "cheap, powerful, freely programmable computers" are a fact. And even if this idea comes under some pressure in the age of app stores (Apple, Windows 10 S), state firewalls (China), or VPN bans (currently Russia), I do not believe the wheel of time can be turned back completely here.

Arbitrary communication can be tunnelled over the internet; the network itself cannot step in to regulate it. And which programs run on our end devices is, to a large extent, up to us. Banning communication software by state decree has so far proven unenforceable in Western democracies.

The knowledge of cryptography (encryption), too, is by now so widely spread, and the algorithms are accessible everywhere through simple APIs, that the availability of cryptography is no longer reversible either.

Weakening cryptography means weakening security

When transport encryption (SSL/TLS) was introduced, the US government tried for a while to permit only the export of weakened versions. This proved to be a boomerang, as the researchers who found the "DROWN" vulnerability write:

For the third time in a year, a major Internet security vulnerability has resulted from the way cryptography was weakened by U.S. government policies that restricted exporting strong cryptography until the late 1990s. Although these restrictions, evidently designed to make it easier for NSA to decrypt the communication of people abroad, were relaxed nearly 20 years ago, the weakened cryptography remains in the protocol specifications and continues to be supported by many servers today, adding complexity—and the potential for catastrophic failure—to some of the Internet’s most important security features.

The U.S. government deliberately weakened three kinds of cryptographic primitives: RSA encryption, Diffie-Hellman key exchange, and symmetric ciphers. FREAK exploited export-grade RSA, and Logjam exploited export-grade Diffie-Hellman. Now, DROWN exploits export-grade symmetric ciphers, demonstrating that all three kinds of deliberately weakened crypto have come to put the security of the Internet at risk decades later.

Today, some policy makers are calling for new restrictions on the design of cryptography in order to prevent law enforcement from “going dark.” While we believe that advocates of such backdoors are acting out of a good faith desire to protect their countries, history’s technical lesson is clear: weakening cryptography carries enormous risk to all of our security.

The search for "NOBUS" (nobody but us) vulnerabilities/backdoors in encryption schemes has largely turned out to be a quest for the Holy Grail. It sounds good and may even work for a while, but then it blows up, causing great damage. A vivid example are the "TSA" locks for suitcases, for which known master keys exist. It did not take long for these to surface on the net as 3D-printing templates.

Therefore: if you want secure encryption that you must be able to rely on, you cannot deliberately build backdoors into the design of the software and the algorithms.

Cryptography is not a safe

Sometimes encrypting data is compared to locking files away in a safe. There is, however, an important difference:

The security of a safe is rated by how long an attacker needs, under certain constraints (tools, noise, …), to break it open. That the safe can be cracked is beyond doubt; the only question is "how long, with which means".

With cryptography this holds only in theory: with enough raw computing power the common schemes can be broken, but with current algorithms that "enough" is simply beyond what is technically feasible today. (I am leaving one-time pads and quantum computers aside here.)
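
A back-of-the-envelope calculation shows the scale (the 10^12 keys per second figure is an assumed, generously optimistic brute-force rate):

    # Brute-forcing a 128-bit key with an assumed rig that
    # tests 10^12 keys per second.
    keyspace = 2 ** 128            # ~3.4e38 possible keys
    rate = 10 ** 12                # assumed keys tested per second
    seconds_per_year = 3.156e7
    years = keyspace / rate / seconds_per_year
    print(f"{years:.1e} years")    # ~1.1e19 years
    # The universe is ~1.4e10 years old; this exceeds its age
    # by roughly a factor of a billion.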

In classical IT, security is even harder to achieve than in cryptography, but even a locked iPhone cannot be opened by brute force alone.

What does this mean for the police and the courts?

The state's monopoly on force is no trump card that always wins. The state cannot always enforce its will here. An Austrian federal law loses against the laws of mathematics.

Capabilities of malware / controlling its use

When we help on site with incident response, the question very often comes up of what the malware that was found could have done. With a lot of luck one can sometimes answer what it did do, but the question of its capabilities usually ends with "it could also download and execute further programs". From the attacker's perspective this makes perfect sense: it keeps the code small and the room for manoeuvre large, since tools and functionality can be downloaded as needed.

From a purely technical point of view, there is hardly any difference between the approach of a criminal group that wants to spy and that of the police performing a (perhaps soon) legally authorised interception of a suspect's communication via implanted software.

Here, too, the tool must almost inevitably be generic: it must allow the police officer to react to whatever messenger is found and to load the corresponding modules. I therefore consider statements of the form "the software can do exactly and only what is legally permitted" to be technically untenable.

This, however, leaves us with an audit problem. How do we ensure that everything really proceeds in conformity with the law, and that no overzealous official exploits all his technical possibilities in the spirit of his investigative mandate? This requires considerably more than just a lawyer acting as legal protection officer; it requires proper logging and technically trained auditors.

I do not see that in the current draft.

Mobile phones are like homes

For good reason, the legislator applies a different standard to searching a home than to searching a car. The violation of the privacy of one's own four walls is so significant that the state must make a really strong case to be allowed to do so.

Globally, the first court rulings are appearing which hold that the data stored on today's smartphones can be of such an intimate nature (communication archives, pictures, videos, address books, calendars, social media, …) that a police search must meet the same standards as the search of a home.

The current proposal takes this into account by aiming only at the interception of communication, not at searches. I do not think the two can be separated as cleanly in practice as the lawyers imagine.

Implanting the surveillance software

The crucial question, however, is how the surveillance software gets onto the PC or mobile phone of the person under surveillance.

In the days of Windows 95 & co this was still relatively easy: once you get your hands on the device, you can easily install software. A current Windows, by contrast, practically always requires a password and is additionally protected against manipulation by Secure Boot and BitLocker. The situation with mobile phones is similar: current models cannot simply be manipulated with nothing but a USB cable. On top of that, it is harder to get at the device unobserved.

And that is a good thing. Protecting the sensitive personal or company data stored on mobile computers (laptops, smartphones, …) is an important topic in any risk assessment (be it under ISO 27k). For companies, a lost phone or a stolen laptop must not mean the loss of the data stored on it. This protection is in fact mandated to companies by the state in several laws, under threat of penalties, ranging from the DSG to the upcoming NIS law.

Much the same applies to implanting surveillance software remotely. From a purely technical point of view, this is simply yet another Advanced Persistent Threat (APT). Whether a criminal gang infiltrates a bank to get at money, whether a foreign state breaks into companies to conduct industrial espionage, or whether some intelligence service cracks the IT systems of a ministry for political espionage: all of this happens with exactly the same methods our police would have to use to implant surveillance software into the systems of suspected criminals.

Herein lies the crux of the matter: normally the state wants citizens and companies to defend themselves successfully against such attacks, but when the attack comes from the police, it is supposed to a) be permitted and b) work. The former can indeed be pushed through by a vote in parliament; the latter cannot. Which brings us back to the same point as with cryptography above:

Deliberately weakening security systems so that only "the good guys" can use the holes does not work.

Schizophrenia regarding security vs. surveillance capabilities

So both in cryptography and in the security of IT systems we are facing two contradictory goals:

  1. The systems should be so secure that citizens, companies, and public authorities can process data safely and communicate confidentially. For this, the integrity of the systems in use must be protected according to the state of the art, and the algorithms used must have no known weaknesses. If vulnerabilities are found, they must be reported to the manufacturer immediately so they can be fixed.
  2. To prevent and solve crimes, the state wants to be able to gain insight into the communication and data of suspects. In other states there is the additional desire to stockpile vulnerabilities for intelligence activities and a potential "cyber war".

Fully achieving both goals is simply impossible.

Every state must decide for itself which goal takes priority.

Independently of that, the state can still decide to what extent the police may exploit the mistakes suspects make in their use of IT systems (and the weaknesses of those systems).

Whether Austria now also buys its surveillance software from the usual suspects (I do not consider in-house development at the BM.I realistic) will not measurably influence the market for 0-days and such solutions. How reputably such companies operate could be observed nicely in the HackingTeam data leak. And of course, here too, potential customers are sometimes promised more than can be delivered.

But if the fundamental decision on these priorities is not clearly made at the state level, the administration and law enforcement are put in a bind: should they work towards IT security or towards surveillance capabilities?

The manufacturers seem to have made their choice: to them, surveillance software is malware and is removed upon detection.