Categories: CERT, Politics

A review of the “Concluding report of the High-Level Group on access to data for effective law enforcement”

(Crossposted from the CERT.at blog.)

As I’ve written here, the EU unveiled a roadmap for addressing the encryption woes of law enforcement agencies in June 2025. In preparation for this push, a “High-Level Group on access to data for effective law enforcement” summarized the problems for law enforcement and developed a list of recommendations.

(Side note: While the EU Chat Control proposal is making headlines these days and has been defeated – hallelujah –, the HLG report deals with a larger topic. I will not comment on the Chat Control/CSAM issues here at all.)

I’ve read this report and its conclusions, and it is certainly a well-argued document, but strictly from a law enforcement perspective. Some points are pretty uncontroversial (shared training and tooling), others are pretty spicy. In a lot of cases, it hedges by using language similar to that used by the Commission:

In 2026, the Commission will present a Technology Roadmap on encryption to identify and evaluate solutions that enable lawful access to encrypted data by law enforcement, while safeguarding cybersecurity and fundamental rights.

They are not presenting concrete solutions; they are hoping that there is a magic bullet which will fulfill all the requirements. Let’s see what this expert group will come up with.

But first, let’s have a look at the report from the HLG. This is not a full review but highlights the points I find interesting, with a likely bias in the direction of where I disagree.

Chapter I: Digital forensics

Page 8:

Criminals constantly adapt their behaviours to elude detection. Available statistics indicate that criminals are increasingly moving to legitimate end-to-end encrypted platforms. However, once effective countermeasures are found, it is likely that they will move again to different communication channels.

This is a double-edged sword: if criminals use their own dedicated infrastructure, they make it a perfect target for LE actions (or subterfuge); Ghost, EncroChat and AN0M are examples of this. If the criminals hide among millions of law-abiding users in popular apps, then any LE action will potentially impact those users as well. Any meaningful cooperation by the operators will just drive the criminals back into niche or self-hosted solutions – opening another window for targeted operations. So perhaps having LE access in big platforms will just mean that the big fish move to other ponds, leaving only the small fry within reach of LE nets.

But the main takeaway is the following: whatever lawmakers and law enforcement do, the other side can react. We thus need to plan and strategize not just for the here and now, but also think about countermoves and unintended consequences.

Page 9:

Access to digital evidence is considered to play a key role in 85 % of investigations.

Has the resource allocation within LE moved in parallel with the shift of crime to the online space?

While lawful access to data for law enforcement purposes is at the core of providing our citizens with the highest possible level of security, this must not be at the expense of fundamental rights or the cybersecurity of systems and products.

I’m really glad that this report acknowledges these two counterbalancing requirements. While I have a strong personal opinion on the fundamental rights aspect, the effect on the cybersecurity of systems and products is of professional interest to me. This blogpost will thus focus on the cybersecurity impact, not the human rights one.

Page 10:

Strong accountability is crucial. In our democratic societies, it is the responsibility of lawmakers to establish the conditions for such accountability, ensuring a high level of privacy and security.

That is the theory. In practice, we all know that accountability for LE overreach is, let’s put it that way, spotty. The report from the European Parliament documents the abuse of LE powers, but news of real accountability for those transgressions has been scarce.

Call me jaded, but the only way LE can convince the population that “yes, this time, for these new powers, we will strictly follow the law and ensure accountability if abuses happen” is not an empty promise is to effectively police themselves regarding past abuses.

Page 10:

Cybersecurity of products and services and lawful access to data both stem from legal obligations and must be able to coexist.

This is the challenge in a nutshell. None of the proposals I have seen manages to thread that needle. I asked exactly this question at EC3 during my latest visit, and the answers all came down to:

  • We need it, there must be a way
  • Just be more creative and think outside the box
  • I once talked to someone who claimed to have a solution

The Recommendation Cluster 1 is fine.

Page 20, Recommendation Cluster 2:

2. setting up a process dedicated to the exchange of capacities that potentially involve the use of vulnerabilities, which would allow knowledge and resources to be pooled while ensuring that the confidentiality and sensitivity of the information would be respected.

3. possibly exploring a European approach to the management and disclosure of vulnerabilities handled by law enforcement, based on existing good practices.

Existing good practices are called “CVD – coordinated vulnerability disclosure”, and the aim is to get vulnerabilities fixed as comprehensively and quickly as possible. This is what NIS2 requires CERTs to do. This is what the CRA demands that suppliers do.

The idea that we keep vulnerabilities open for our own use is completely anathema to the thinking and the mission of the cybersecurity community.

Page 21:

Though it is still sometimes key to investigations, the exploitation of vulnerabilities must be handled with extreme care, in compliance with the relevant domestic legal framework, as it impinges on the security posture of hardware and software.

No shit, Sherlock!

We really need to differentiate here. Exploiting an operations error or programming mistake on the side of the criminals is fine. If they don’t secure their ransomware management infrastructure, then by all means, let LE break in and do their investigations. But if the vulnerability is in a generic software product used by millions of citizens, then things change dramatically. I do not think that even “extreme care” can overcome the downsides here.

The HLG experts invite the European Commission’s JRC to explore the feasibility of setting out a European approach for the management and disclosure of vulnerabilities, handled by law enforcement, based on existing good practices

This clashes with the CRA and NIS2 regulations, as well as national CVD policies.

The Recommendation Cluster 3 is fine.

Page 23, Recommendation Cluster 4:

1. developing a platform (SIRIUS or equivalent) for sharing tools, best practices and knowledge on how to be granted access to data by product owners, producers and hardware manufacturers.

4. establishing a research group to assess the technical feasibility of built-in lawful access obligations (including for accessing encrypted data) for digital devices, while maintaining and without compromising the security of devices and the privacy of information for all users, as well as without weakening or undermining communications security.

“Owners” is fine – if they want to give LE access to their own devices, so be it. But secure and targeted LE access built into products is a chimera.

For me, the important distinction between a product and a service is the following:

A product is built by the manufacturer and then delivered in (mostly) the same state to multiple customers. In many cases, such as old-school shrink-wrapped software, the supplier doesn’t even know who the customer is. And once the product has shipped, the vendor’s influence on its operation is very limited. This is best illustrated with open-source products: if I install Debian Linux on my laptop, use LUKS for full-disk encryption, and give LE a reason to do full forensics on that machine without my cooperation, what can LE do? They could go and ask Debian, and the answer will be: “Otmar probably (we don’t know for sure; we don’t keep track of who uses our product) runs LUKS with the default encryption settings, and we don’t know any way to bypass that encryption. If we knew a way, then millions of other users would be in danger, so we would have fixed the defect as soon as we learned of it.” There is no way to implement real security in a product (without some sort of key-escrow service) while still giving LE access.

A service is a different thing: here the vendor is directly involved in handling its customers’ data, so there is at least a chance to special-case a single customer once a court order arrives at the door.

Service vs. product is not strictly binary, though. Products need updates, giving vendors a chance to influence what’s running at a specific customer. WhatsApp and other such OTT services combine a Product (the app) with a Service (the cloud component).

For example, in the case of video surveillance recordings, LEAs are increasingly faced with encrypted files that cannot be analysed by automatic software, especially when large quantities of video are involved.

That’s an easy one, and not only for LEAs, as citizens run into the same issue with, e.g., TV time-shifting disks attached to TVs. It should be possible for the owners of a device to have unencumbered access to content they own, or to which they have a legal right of access.

In parallel, more transparent solutions enabling access to data in clear on seized devices should be considered, to increase the effectiveness of investigations and, at the same time, ensure a level playing field among industry players, while preserving cybersecurity and safeguarding privacy.

No. This will not work. See the argument about the security of products from above.

A key action under this technology roadmap would be to assess the technical feasibility of built-in lawful access obligations (including for accessing encrypted data and encrypted CCTV recordings) for digital files and devices, while ensuring strong cybersecurity safeguards and without weakening or undermining communications security. This assessment would be carried out involving all relevant stakeholders.

I’ll grant them the CCTV case (give the owners the possibility to bulk-export data in clear), but for the rest, I just don’t see a solution that still “ensures strong cybersecurity safeguards and does not weaken or undermine communications security”.

Chapter II: Data retention

The majority of points raised here are sensible.

Page 34, Recommendation Cluster 6:

ensuring that Member States can enforce sanctions against electronic and other communications services providers which do not cooperate with regard to the retention and provision of data, e.g. through the implementation of administrative sanctions or limits on their capacity to operate in the EU market.

Given how miserably the EU fails to enforce EU law with respect to big US companies, I’m not optimistic that this will work. See also online gambling and similar “services” which might be popular, but are of unclear legality.

Chapter III: Lawful interception

Page 39:

In contrast, the UK, under the Investigatory Powers Act, has set up a framework for lawful interception of OTT communications which, thanks to the adoption of the UK-US data access agreement, also applies to OTT services based in the US. According to relevant UK authorities, this makes a significant difference in crime prevention and investigations.

Citation needed.

Page 40:

In landmark case C-670/22, the CJEU embraced a broad concept of ‘interception of telecommunications’, holding that the infiltration of terminal devices for the purpose of gathering traffic, location and communication data from an internet-based communication service constituted an ‘interception of telecommunications’.

I wasn’t aware of that case; I should probably read the judgment.

Page 41:

However, the increasing complexity of communication infrastructures and protocols in 5G, such as virtualisation, network slicing, edge computing and privacy-enhanced features, poses new technological challenges for traditional operators. The HLG experts insisted notably on challenges pertaining to Home Routing and to Rich Communication Services (RCS).

The authors are right that the changes in technology have a clear impact on which options are even there for lawful interception. The “Europol position paper on Home routing” also sounds interesting. 

Page 41/42:

Finally, the HLG experts highlighted that one of the main technical challenges posed to LEAs comes from end-to-end encryption, notably for OTT communications, with more than 80 % of communications being run through end-to-end encrypted services (live communications and back-up storage), thus preventing investigators from accessing communication content. At the same time, the experts also agree that end-to-end encryption is considered a robust security measure which effectively protects citizens from various forms of crime. By ensuring that only the communicating users can access the content of their messages, end-to-end encryption effectively protects against unlawful eavesdropping, data theft, state-sponsored espionage and other forms of unauthorised access by hackers, cybercriminals, or even the service providers themselves.

This is the crux of the matter in a nutshell.

There is a legitimate need to protect communication from eavesdropping, and the possible adversaries range from the operators themselves up to state-sponsored espionage. And despite this, when law enforcement comes in waving a magic paper signed by a judge, then all those technical defences need to stand aside and enable “lawful interception”. Like Moses parting the sea, the Light of Galadriel causing orcs to flee, holy water repelling vampires, or any other magic device from the realm of human fantasy that can save a tricky situation.

How exactly this magic can be worked, this paper does not reveal.

Page 42:

Law enforcement representatives would prefer an approach that requires companies to provide law enforcement with access to data in clear under strict conditions. It should be noted, however, that cybersecurity experts raised concerns that such solutions would undermine cybersecurity.

Three points here:

This is thinking in “services”, not in “products”. Signal, the cloud service, does not deal in messages at all; it provides authentication and a publish-subscribe message bus for generic communication. It’s only the “product”, the Signal app, that turns all this into a messaging platform. The “company” has as much knowledge of the cleartext communication as Canon has of the images taken by the cameras it sold to its customers.

That’s where the “undermine cybersecurity” point comes in: in order to give LE any leg up in accessing the cleartext, the company has to undermine the security properties of end-to-end encrypted communication. If, for example, Signal were able to give LE enough information to decrypt communication data received via a wiretap, then Signal itself would be able to decrypt the messages, as they are relayed via Signal’s servers. This clearly contradicts the quote from above: “By ensuring that only the communicating users can access the content of their messages, end-to-end encryption effectively protects against unlawful eavesdropping, data theft, state-sponsored espionage and other forms of unauthorised access by hackers, cybercriminals, or even the service providers themselves.”

Once you open a door for LEAs, other players (including LEAs from non-democratic countries) will also come knocking. And in a number of countries, those players (intelligence services, state security, military) operate under a completely different legal regime. For example, from Wikipedia: “A national security letter (NSL) is an administrative subpoena issued by the United States government to gather information for national security purposes. NSLs do not require prior approval from a judge.” These agencies might be constrained regarding their own citizens, but for foreigners there is usually very little oversight. Holding LEAs to a high legal standard is thus not enough. Any solution that enables LEA access must somehow be able to deny the same access to organisations with a bigger bludgeon to enforce compliance.

Page 44:

As a result, the HLG experts consider it a priority to ensure that obligations on lawful interception of available data apply in the same way to traditional and non-traditional communication providers and are equally enforceable. The harmonisation of such obligations should serve to overcome the challenges related to the execution of cross-border requests.

From the LEA side, this is an understandable objective. It misses an important point, though: “traditional” and “non-traditional communication providers” are so fundamentally different that transferring approaches from one side to the other just doesn’t work. It starts with territoriality/jurisdiction, touches the service-vs-product mismatch, and ends with the old difference between the telco networks and the Internet: are services provided by the network or by the endpoints? If you look at the design of Signal and others, all the security properties are in the client, not the server.

Thus, the approach that worked for the old network just doesn’t fit the new one.

The HLG report does not give any guidance on how to get there.

Second, it is necessary to reach an agreement on high-level operational requirements that clearly states what is expected by national authorities in terms of lawful interception and what the associated safeguards should be. LEON has been identified as a good basis for defining law enforcement requirements. This document should be accompanied by requirements on e.g. proportionality, oversight and transparency, possibly distinguishing between the rules applicable to content and non-content data, with full respect for cybersecurity and data protection and privacy and without undermining encryption.

This is an important point here, and I really think we need to hammer this down.

We first need to agree on a set of requirements for the “lawful interception without weakening cybersecurity & fundamental rights” solution. This document is a great summary of what the LEA side wants; we need a similar document that describes the requirements on the cybersecurity side of the equation.

Or in other words, we need a checklist against which we can score any proposed solution X. Some ideas:

  • Does X restrict what software citizens can install on their devices?
  • Does X undermine the goal of having people trust automatic updates of software?
  • Does X lead to more people rooting their phones and side-loading applications?
  • Does X respect mobile users that travel between jurisdictions?
  • Does X also work for Open Source software?
  • Does X need to be undetectable by the user under surveillance?
  • Does X undermine the security of non-targeted users?
  • What is the abuse potential for X and which guardrails are in place?

We should agree on such a list of requirements before we embark on the quest to find a solution.

Third, the concept of territorial jurisdiction needs to be clarified in terms of its applicability to OTT services, taking into account the divergent interpretations among national authorities and, most importantly, between national authorities and OTT providers.

This is also an interesting point: jurisdiction. In contrast to an old-fashioned land-line telephone, mobile phones using an OTT communication service are, well, mobile. They can travel. They can leave the current jurisdiction. So what happens if an LEA in country Y gets a warrant and support from the OTT service to do wiretapping, and the suspect then travels to country Z? Does the wiretap need to stop? What happens if spyware was used on the suspect’s phone? Can the LEA from Y legally wiretap the communication of a suspect in a different jurisdiction?

Points 3, 4 and 5 from Recommendation Cluster 7 capture these questions.

Number 4 is crucial: “[…] no measure should entail an obligation for providers to adjust their ICT systems in a way that would negatively impact the cybersecurity of their users”.

At a recent conference I heard a presentation about the Australian law on LEA access to communication content. It works on three levels: asking nicely for help (TAR), forcing operators to help (TAN), and ordering that capabilities be implemented (TCN). The latter has a strong restriction: “Importantly, a TCN is expressly prohibited from requiring the building of a capability to decrypt information or remove electronic protection.”

In other words, Australia can demand that communication providers build the infrastructure to enable wiretapping, but they cannot be forced to lower the inherent security of communication protocols.

Page 46:

Step 2: […] In addition, the HLG experts stressed the urgent need to improve the efficiency of cross-border lawful interception requests under the current framework, while carrying out the work outlined above.

Indeed. The current time-penalty LE is paying for any cross-border interaction is just not sustainable.

Page 47, Recommendation Cluster 8:

To ensure that a broad range of providers of ECS, including OTT providers, respond to lawful interception requests as set out in national laws

This is getting tricky. We have seen this play out in other areas, e.g. access to online betting services, pharmacies or other services that are deemed non-compliant in one country. We always deride non-EU countries that block access to Wikipedia, independent media, social media and other “undesirable” content. What usually follows is an uptick in the use of VPNs and other means of working around blocks. And to be honest, why should an OTT service from south-east Asia care about Austrian law and the wishes of our LEAs? Where do we end up here? Trying to block these services on the DNS or network layer? Making it illegal to use them? As the report puts it:

HLG experts agreed that any initiative to foster or impose lawful interception rules on all type of ECS should come with a clear and enforceable framework for taking action against communication providers that operate illegally and/or refuse any form of cooperation with law enforcement.

The authors almost get it

Furthermore, the differences between lawful interception rules across the EU place burdensome requirements upon regulated entities such as OTT providers, potentially creating market access barriers for communication providers.

but miss the elephant in the room: it’s not about differences in legislation between EU member states; this is a global competition. I don’t worry about a company moving from Germany to Spain, I worry about all those OTT services moving to offshore locations. See Proton’s move away from Switzerland for an example.

Page 48, Recommendation Cluster 9:

Based on further analysis and an impact assessment, the experts recommend devising an EU instrument on lawful interception (consisting of soft-law or binding legal instruments) for law enforcement purposes that would establish enforceable obligations for providers of ECS in the EU.

I’m a mathematician by training. We often get derided for this, but here it fits perfectly: before trying to find a solution to a problem, it might be worthwhile to first ask whether a solution exists at all. So yes: first do the “further analysis and an impact assessment”, and if, and only if, we can find a technical solution satisfying the requirements, then we can start to write laws.

This is what is starting to happen in the EU right now: experts have been invited to think about this challenge, and let’s see what they come up with.

It certainly is not an easy assignment.

Categories: CERT, Pet Peeves

NIS2 in Austria

We still don’t have a NIS2 law in Austria. We’re now more than a year late. After seeing Süleyman’s post on LinkedIn, I finally did the quick Photoshop job I had been planning for a long time.

Original:

See https://en.wikipedia.org/wiki/Joel_Pett

NIS2 Version:

(Yes, this is a gross oversimplification. For the public administration side, we really need the NIS2 law, but for private companies who will be forced to conform to security standards: what’s holding you back from implementing them right now?)

Categories: Internet, Uncategorized

Browsertab Dump 2025-07-02

I keep accumulating pages in browser tabs that I should read and/or remember, but sometimes it’s really time to clean up.

Categories: Internet

LLM as compression algorithms

Back when I was studying computer science, one of the interesting bits was the discussion of the information content of a message, which is distinct from the actual number of bits used to transmit it. I remember a definition involving a sum over the logarithms of the long-term symbol frequencies in the transmitted messages – Shannon’s entropy. The upshot was that only if 0s and 1s are equally distributed does each transmitted bit contain one bit of information.
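For the record, the definition I was grasping for is Shannon’s entropy. A minimal sketch in Python (the function name is my own):

```python
import math
from collections import Counter

def entropy_per_symbol(message: str) -> float:
    """Shannon entropy in bits per symbol: H = -sum over symbols of p * log2(p)."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A perfectly balanced bit string carries a full bit of information per bit...
print(entropy_per_symbol("01" * 32))    # 1.0
# ...while a skewed distribution carries less per transmitted bit.
print(entropy_per_symbol("0001" * 16))  # ~0.81
```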

The next iteration was compressibility: if there are patterns in the message, then a compression algorithm can reduce the number of bits needed to store the full message, so the information content of the original text does not equal its number of bits. This could be a simple Huffman encoding or a more advanced algorithm like Lempel-Ziv-Welch, but one of the main points here is that the algorithm is completely content-agnostic. There are no databases of English words inside these compressors; they cannot substitute numerical IDs of words for the words themselves. That would be considered cheating in the generic compression game. (There are, of course, some instances of very domain-specific compression algorithms which do build on knowledge of the data likely to be transmitted. HTTP/2 header compression (HPACK) and SIP header compression are such examples.)
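The content-agnostic nature of these algorithms is easy to demonstrate with Python’s zlib (a Lempel-Ziv variant): it shrinks patterned data dramatically but gains nothing on random bytes, no matter what they “mean”:

```python
import os
import zlib

repetitive = b"abcabcabc" * 500   # 4500 bytes full of patterns
random_data = os.urandom(4500)    # 4500 bytes with no patterns at all

# The compressor knows nothing about the content, only about repetition:
print(len(zlib.compress(repetitive)))   # a few dozen bytes
print(len(zlib.compress(random_data)))  # roughly 4500 bytes, i.e. no gain
```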

Another interesting step was the introduction of lossy compression. For certain applications (e.g., images, sounds, videos) it is not necessary to reproduce the original file bit by bit, only to generate something that looks or sounds very similar to the original media. This unlocked a huge potential for efficient compression. JPEG for images, MP3 for music and DivX for movies reached the broad population by shrinking these files to manageable sizes. They made digital mixtapes (i.e., self-burned audio CDs) possible, CD-ROMs with pirated movies were traded in schoolyards, and Napster started the online file-sharing revolution.
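The core trick can be shown in a toy example (a crude stand-in for the quantization step inside JPEG or MP3): store fewer bits per sample and accept a reconstruction that is only close to the original:

```python
# Lossy "compression": quantize 8-bit samples down to 16 levels (4 bits each),
# then reconstruct. The result is similar to, but not identical with, the input.
samples = [12, 13, 14, 200, 201, 199, 55]
quantized = [s // 16 for s in samples]            # what actually gets stored
reconstructed = [q * 16 + 8 for q in quantized]   # close, but detail is gone
print(reconstructed)  # [8, 8, 8, 200, 200, 200, 56]
```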

Now we have the LLMs, the large language models which are an implementation of generative AI: algorithms, combined with a large memory derived from processing huge amounts of content, can now transform texts, images, sounds and even videos into each other. They can act as compressors: you can feed text into an LLM and ask for a summary, but you can also ask it to expand an argument from a few bullet points into a short essay. The inner state of the LLM while it performs these actions kind of represents the essence of the content it is processing. The output format is independent of this state: in the simplest case, you can specify whether the output should be in German or in English; additionally, you can ask for different styles: write for children, write dry legal prose, be witty, or even render the content as a poem. Translating from one medium to another is also possible: the AI can look at a picture and generate a textual description of the image, or, vice versa, it can create a picture out of a written content summary.

I’m pretty sure the following scenario has already happened: an employee is asked to write a report on a certain subject. He thinks about the topic and comes up with a few ideas, which he writes down as a list of bullet points. These are handed to an LLM with an appropriate prompt to generate a nice 5-page report detailing these points. The AI obliges, and the resulting 5-pager is handed to the boss. Being short on time, he doesn’t want to read five pages, so he asks an LLM to summarize the paper to give him the core message as a list of short statements. Ideally, the second LLM reproduces the same bullet points the employee originally came up with, making the whole exercise a complete waste of computational resources.

There are two points in this story which are important to note:

First, if we are liberal with the concept of “lossy compression”, then the specific formulation of an idea in a language doesn’t really matter in terms of information content. If you give an LLM the same prompt time and time again, you will get different results each time. If, for example, you ask for a Limerick about a horse in a bar, you will get different ones almost every time. But on a more abstract level, they are all embodiments of the same concept: a Limerick about a horse in a bar. The same applies to a switch in languages: if you ask the LLM to change the output from German to English, the result will change substantially. But again: if you just look at the abstract ideas embodied in the text, the language it is written in just does not matter.

The Bible in Greek, English, or German might have very few words in common, but the content is the same. This is just like converting a picture from GIF to JPEG: the bits in the file have completely changed, but given the right parsers they produce the same information content, with only some fuzziness in details caused by the JPEG compression.

Secondly, when processing a prompt or analysing a text/image/sound, the LLM produces an activation pattern in the high-dimensional set of parameters that forms the scaffolding of its memory, transforming the input into something that one might call its “state of mind”. This is the LLM-internal representation of the input, abstracting away the unimportant bits of the incoming information and retaining the meaning. This internal state is opaque to us; we have little information about which parameter corresponds to exactly which concept. I also don’t know how many bytes this representation needs.

Now comes the “generative” part of the AI: the combination of this state, the learned connections between concepts, and the prompt enables the LLM to transform its opaque state of mind into an output that humans can understand. The output can be short, e.g., if the prompt asks for a brief written summary, or longer, if the target format is an essay. Coming back to the example from above: the LLM does not iteratively compress a longer text into a summary by analysing individual sentences; instead, it speed-reads everything into something like short-term memory and then dumps out the highlights it found.

If a short prompt can produce the same activation pattern as a long input text, then the information content is the same. This only works because the LLM has this huge storage of knowledge it can reference – something we said in the beginning that classic compression algorithms cannot utilize. So, as an example, the input “lyrics of the Beatles’ song Yesterday” and the actual lyrics as two dozen lines of text convey the same information to the LLM. This enables truly enormous compression rates.
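Classic compressors can mimic this “shared knowledge” trick only in a very limited way: zlib supports a preset dictionary, and if both sides already hold the text verbatim, the transmitted message collapses to a handful of bytes. The example text below is my own placeholder, not actual song lyrics:

```python
import zlib

# A text that, by assumption, both sender and receiver already know verbatim:
known_text = b"A long message that both sender and receiver have seen before, word for word."

plain = zlib.compress(known_text)         # no shared knowledge

co = zlib.compressobj(zdict=known_text)   # shared knowledge as a preset dictionary
with_dict = co.compress(known_text) + co.flush()

# The dictionary version is tiny: essentially just a back-reference.
print(len(known_text), len(plain), len(with_dict))
```

The difference between `plain` and `with_dict` is, in miniature, the gap between a generic compressor and one that can reference a huge pool of prior knowledge.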

To summarize, it might be a helpful abstraction to view LLMs as lossy compression/decompression machines that can utilize an enormous pool of knowledge to make the process much more efficient, as long as you accept that this is a very lossy compression which only preserves the core concepts contained in the input but is free to change the representation of this information content. And, of course, it is prone to making wrong associations and hallucinating content.

Categories
CERT Internet

Feedback to the NIS2 Implementing Acts

The EU is asking for feedback regarding the Implementing Acts that define some of the details of the NIS2 requirements with respect to reporting thresholds and security measures.

I didn’t have time for a full word-for-word review, but I took some time today to give some feedback. For whatever reason, the EU site does not preserve the paragraph breaks in the submission, leading to a wall of text that is hard to read. Thus I’m posting the text here for better readability.

Feedback from: Otmar Lendl

The relevant entities will vary enormously in size. They will range from a 2-person web-design and hosting team that also hosts its customers’ domains to large multinational companies. The recitals (4) and (5) are a good start but are not enough.

The only way to make this workable is to emphasise the principle of proportionality and the risk-based approach. This can be done either by clearly stating that these principles can override every single item listed in the Annex, or by consistently using such language in the list of technical and methodological requirements.

Right now, there is good language in several points, e.g., 5.2. (a) “establish, based on the risk assessment”, 6.8.1. “[…] in accordance with the results of the risk assessment”, 10.2.1. “[…] if required for their role”, or 13.2.2. (a) “based on the results of the risk assessment”.

The lack of such qualifiers in other points could be interpreted as meaning that these considerations do not apply there. The text needs to clearly pre-empt such a reading.

In the same direction: exhaustive lists (examples in 3.2.3, 6.7.2, 6.8.2, 13.1.2.) could lead to auditors doing a blind check-marking exercise without allowing for entities to diverge based on their specific risk assessment.

A clear statement on the security objective before each list of measures would also be helpful to guide the entities and their auditors to perform the risk-based assessment on the measure’s relevance in the concrete situation. For example, some of the points in the annex are specific to Windows-based networks (e.g., 11.3.1. and 12.3.2. (b)) and are not applicable to other environments.

As the CrowdStrike incident from July 19th showed, recital (17) and the text in 6.9.2. are very relevant: there are often counter-risks to evaluate when deploying a security control. Again: there must be clear guidance to auditors to also follow a risk-based approach when evaluating compliance.

The text should encourage adoption of standardized policies: there is no need to re-invent the wheel for every single entity, especially the smaller ones.

Article 3 (f) is unclear, it would be better to split this up in two items, e.g.:

(f1) a successful breach of a security system that led to an unauthorised access to sensitive data [at systems of entity] by an external and suspectedly malicious actor was detected.

(Reason: a lost password shouldn’t cause a mandatory report, using a design flaw or an implementation error to bypass protections to access sensitive data should)

(f2) a sustained “command and control” communication channel was detected that gives a suspectedly malicious actor unauthorised access to internal systems of the entity.

Categories
Internet

Browsertab Dump 2024-07-23

I keep accumulating pages in browser tabs that I should read and/or remember, but sometimes it’s really time to clean up. So I’m trying something new: dump the links here in a blog post.

Categories
CERT System Administration

RT: Different From: for a certain user

At CERT.at, we recently changed the way we send out bulk email notifications with RT: All Correspondence from the Automation user will have a different From: address compared to constituency interactions done manually by one of our analysts.

How did I implement this?

  • The first idea was to use a different queue to send the mails and then move the ticket to the normal one: the From: address is one of the things that can be configured at queue level.
  • Then I tested whether I could override the From: header by including it in the Correspondence Template. Yep, that works. Thus idea two: modify the Scrips so that different ones trigger based on who is running the transaction.
  • But it’s even simpler: the Template itself can contain Perl code, so I can just do the “if then else” thingy inside the Template.

In the end, it was just a one-liner in the right spot. Our “Correspondence”-Template looks like this now:

RT-Attach-Message: yes
Content-Transfer-Encoding: 8bit{($Transaction->CreatorObj->Name eq 'intelmq') ? "\nFrom: noreply\@example.at" : ""}

{$Transaction->Content()}

The “Content-Transfer-Encoding: 8bit” was needed because of the CipherMail instance; without it, we could get strange MIME encoding errors.

Categories
CERT Internet Pet Peeves

Roles in Cybersecurity: CSIRTs / LE / others

(Crossposted from the CERT.at blog)

Back in January 2024, I was asked by the Belgian EU Presidency to moderate a panel during their high-level conference on cyber security in Brussels. The topic was the relationship between cyber security and law enforcement: how do CSIRTs and the police / public prosecutors cooperate, what works here and where are the fault lines in this collaboration. As the moderator, I wasn’t in the position to really present my own view on some of the issues, so I’m using this blogpost to document my thinking regarding the CSIRT/LE division of labour. From that starting point, this text kind of turned into a rant on what’s wrong with IT Security.

When I got the assignment, I recalled a report I had read years ago: “Measuring the Cost of Cybercrime” by Ross Anderson et al from 2012. In it, the authors try to estimate the effects of criminal actors on the whole economy: what are the direct losses and what are costs of the defensive measures put in place to defend against the threat. The numbers were huge back then, and as various speakers during the conference mentioned: the numbers have kept rising and rising and the figures for 2024 have reached obscene levels. Anderson et al write in their conclusions: “The straightforward conclusion to draw on the basis of the comparative figures collected in this study is that we should perhaps spend less in anticipation of computer crime (on antivirus, firewalls etc.) but we should certainly spend an awful lot more on catching and punishing the perpetrators.”

Over the last years, the EU has proposed and enacted a number of legal acts that focus on the prevention, detection, and response to cybersecurity threats. Following the original NIS directive from 2016, we are now in the process of transposing and thus implementing the NIS 2 directive with its expanded scope and security requirements. This imposes a significant burden on a huge number of “essential” and “important entities”, which have to invest heavily in their cybersecurity defences. I failed to find a figure in euros for this, only the estimate of the EU Commission that entities new to the NIS game will have to increase their IT security budget by 22 percent, whereas the NIS1 “operators of essential services” will have to add 12 percent to their current spending levels. And this isn’t simply CAPEX; there is a huge impact on the operational expenses, including manpower and effects on the flexibility of the entity.

This all adds up to a huge cost for companies and other organisations.

What is happening here? We would never ever tolerate that kind of security environment in the physical world, so why do we allow it to happen online?

The physical world

So, let’s look at the playing field in the physical environment and see how the security responsibilities are distributed there:

Defending against low-level crime is the responsibility of every citizen and organisation: you are supposed to lock your doors, you need to screen the people you’re allowing to enter, and the physical defences need to be sensible: your office doesn’t need to be a second Fort Knox, but your fences / doors / gates / security personnel need to be adequate for your risk profile. They should be good enough to either completely thwart normal burglars or at least impose such a high risk on them (e.g., required noise and time for a break-in) that most of them are deterred from even trying.

One of the jobs of the police is to keep low-level crime from spiraling out of control. They are the backup that is called by entities noticing a crime happening. They respond to alerts raised by entities themselves, their burglar alarms and often their neighbours.

Controlling serious, especially organized crime is clearly the responsibility of law enforcement. No normal entity is supposed to be able to defend itself against Al Capone style gangs armed with submachine guns. This is where even your friendly neighbourhood cop is out of his league and the specialists from the relevant branches of the security forces need to be called in. That doesn’t mean that these things never happen at all: there is organized crime in the EU, and it might take a few years before any given gang is brought under control.

Defending against physical incursions by another country is the job of the military. They have the big guns; they have the training and thus means to defend the country from outside threats. Hopefully, they provide enough deterrence that they are not needed. Additionally, your diplomats and politicians have worked to create an international environment in which no other nation even contemplates invading your country.

We can see here a clear escalation path of physical threats and how the responsibility to deal with them shifts accordingly.

The online world

Does the same apply to cyber threats? And if not, why?

The basics

The equivalent of putting a simple lock on your door is basic cyber hygiene: firewalls, VPNs, shielding management interfaces, spam and malspam filters, decent patch management, as well as basic security awareness training. Hopefully, this is enough to stop being a target of opportunity, where script kiddies or mass exploitation campaigns can just waltz into your network. But there is a difference: the risk of getting caught simply for trying to hack into a network is very low. Thus, these actors can just keep on trying over and over again. Additionally, this can be automated and run on a global scale.
In the real world, intrusion attempts do not scale at all. Every single case needs a criminal on site, which limits the number of tries per night and incurs a risk of being caught at each and every one of them. The result is that physical break-in attempts are rare, whereas cyber break-in attempts are so frequent that the industry has decided that “successful blocks on FW or mail-relay level per day” is no longer a sensible metric for a security solution.

And just forget about reporting these to the police. Not all intrusion attempts are actually malicious (a good part of CERT.at’s data-feeds on vulnerabilities is based on such scans), the legal treatment of such acts is unclear (especially on an international level), and the sheer mass of them overwhelms all law enforcement capabilities. Additionally, these intrusion attempts are usually cross-border, necessitating international police collaboration. The penalties for such activities (malicious scans, sending malspam, etc.) are also often too low to qualify for international efforts.

In the physical world, the perpetrators must be present at the site of their victims. We’re not yet at the stage where thieves and burglars send remote controlled drones to break into houses and steal valuables there – unless you count the use of hired and expendable low-level criminals as such. There is thus no question about jurisdiction and the possibility of the local police to actually enforce the law. Collecting clues and evidence might not always be easy, and criminals fleeing the country before being caught is a common trope in crime literature, nevertheless there is the real possibility that the police can successfully track and then arrest the criminals.

The global nature of the Internet changes all this. As the saying goes: there is no geography on the Internet, everyone is a direct neighbour to everybody else. Just as any simple website is open to visitors from all over the world, it can be targeted by criminals from all over the globe. There is no need for the evil hackers to be on the same continent as their targets, let alone in the same jurisdiction. Thus, even if the police can collect all the necessary evidence to identify the perpetrators, it cannot just grab them off the street – they might be far out of reach of the local law enforcement.

And another point is different: usually, physical security measures are quite static. There is no monthly patch-day for your doors. I can’t recall any situation where a vendor of safes or locks had to issue an alert to all customers that they have to upgrade to new cylinders because a critical vulnerability was found in the current version (although watching LPL videos is a good argument that they should start doing that). Recent reports on vulnerabilities of keyless fobs for unlocking cars show that the lines between these worlds are starting to blur.

Organized crime

What about serious, organized crime? The online equivalent to a mob boss is a “Ransomware as a Service (RaaS)” group: they provide the firepower, they create an efficient ecosystem of crime, and they make it easier for low-level miscreants to start their criminal careers. Examples are Locky, REvil, DarkSide, LockBit, Cerber, etc. Yes, sometimes law enforcement, through long-running international collaborations between law-enforcement agencies, is able to crack down on larger crime syndicates. Those take-downs vary in their effectiveness. In some cases, the police manage to get hold of the masterminds, but often enough they just get lower- or mid-level people and some of the technical infrastructure, leading to just a temporary reprieve for the victims of the RaaS shop.

Two major impediments to the effectiveness of these investigations are the global nature of such gangs, and thus the need for truly global LE collaboration, and the ready availability of compromised systems to abuse and of malicious ISPs who don’t police their own customers. Any country whose police force is not cooperating effectively creates a safe refuge for the criminals. The current geo-political climate is not helpful at all: right now, there simply is no incentive for Russian law enforcement to help their western colleagues by arresting Russian gangs targeting EU or US entities. Bullet-proof hosters are similar: they rent criminals the infrastructure from which to launch attacks. And often enough the perpetrators simply use the infrastructure of one of their victims to attack the next.

The end result is that serious cybercrime is rampant. Companies and other organisations must defend themselves against well-financed, experienced, and capable threat actors. As it is, law enforcement is not capable of lowering the threat level enough to take that responsibility away from the operators.

Nation states

The next escalation step are the nation state attackers. They come in (at least) two types: Espionage and Disruption.

Espionage is nothing new; the employment of spies traces back to the ancient world. But just as with cybercrime, in the new online world it is no longer necessary to send agents on dangerous missions into foreign countries. No, a modern spy has a 9-to-5 desk job in a drab office building, where the highest risk to his personal safety is a herniated vertebral disc caused by unergonomic desks and chairs.

It’s been rare, but cyber-attacks with the aim of causing real-world disruptions have appeared over the last ten years, especially in the Russia/Ukraine context. The impact can be similar to ransomware: the IT systems are disabled and all the processes supported by those systems will fail. The main difference is that you can’t simply buy your way out of a state-sponsored disruptive attack. There have been cases where the attackers tried to inflict physical damage on either the IT systems (the bricking of PCs in the Aramco attack) or the machinery controlled by industrial control systems.

This is a frustrating situation. We’re in a defensive mode, trying to block and thwart attack after attack from well-resourced adversaries. As recent history shows, we are not winning this fight – cybercrime is rampant and state-sponsored APTs are running amok. Even if one organisation manages to secure its own network, the tight interconnectedness with and dependency on others will leave it exposed to supply chain risks.

What can we do about this?

Such a situation reminds me of the old proverb: “if you can’t win the game, change the rules”. I simply do not see a simple technical solution to the IT security challenge. We’ve been sold these often enough under various names (firewalls, NGFW, SIEMs, AV, EDR, SOAR, cloud-based detection, sandboxes to detect malicious e-mail, …) and while all these approaches have some value, they are fighting the symptoms, not the cause of the problem.

There certainly are no simple solutions, and certainly none without significant downsides. I’m thus not proposing that the following ideas need to be implemented tomorrow. This article is just supposed to move the Overton Window and start a discussion outside the usual constraints.

So, what ideas can I come up with?

Really invest in Law Enforcement

The statistics show every year that cyber-crime is rising. This is followed by a ritual proclamation by the minister in charge that we will strengthen the police force tasked with prosecuting cyber-crime. The follow-through just isn’t there. Neither the police nor the judiciary is in any way staffed to really make a dent in cybercrime as a whole.

They are fighting a defensive war, happy with every small victory they can get, but overall they are simply not staffed at a level where they really could make a difference.

Denial of safe havens

Criminals and other attackers need some infrastructure from which to stage their attacks. Why do we tolerate this? Possible avenues for change are:

  • Revisit the laws that shield ISPs from liabilities regarding misbehaving customers. This does not need to be a complete reversal, but there need to be clear and strong incentives not to allow customers to stage attacks from an ISP’s network. See below for more details.
  • And on the other side, refuse to route the network blocks from ISPs who are known to tolerate criminals on their network. Back on Usenet, this was called the “UDP – Usenet Death Penalty”: when you don’t police your own users’ misbehaviour on this global discussion forum, then other sites will decide not to accept any articles from your cesspool any more.

The aim must be the end of “bulletproof” hosters. There have been prior successes in this area, but we can certainly do better on a global scale.

Don’t spare abused systems

Instead of renting infrastructure from bulletproof hosting outfits, criminals often hack into an unrelated organisation and then abuse its systems to stage attacks from. Abused systems range from simple C2 proxies on compromised websites, DDoS amplification, and accounts for sending spam mails, to elaborate networks of proxies on compromised CPEs.

These days, we politely warn the owners of the abused devices and ask them nicely to clean up their infrastructure.

We treat them as victims, and not as accomplices.

Maybe we need to adjust that approach.

Mutual assured cyber destruction

As bad as the cold war was, the concept of mutual assured destruction managed to deter the use of nuclear weapons for over 70 years. Right now, there is no functioning deterrence on the Internet.

I can’t say what we need to do here, but we must create a significant barrier to the employment of cyberattacks. Right now, most offensive cyber activities are considered “trivial offences”, maybe worth a few sternly worded statements, but nothing more. The EU Cyber Diplomacy Toolbox is a step in that direction, but is still rather harmless in its impact.

We can and should do more.

Broken Window Theory

From Wikipedia: “In criminology, the broken windows theory states that visible signs of crime, antisocial behavior, and civil disorder create an urban environment that encourages further crime and disorder, including serious crimes.”

To put this bluntly: As we haven’t managed to solve the Spam E-mail problem, why do we think we can tackle the really serious crimes?

Thus, one possible approach is to set aside some investigative resources in the law enforcement community to go after the low-level but very visible criminals. Take for example the long-running spam waves promoting ED pills. Tracking the spam source might be hard, but there is a clear money trail on the payment side. This should be an eminently solvable problem. Track those gangs down, make an example of them, and let every other criminal guess where the big LE guns will be pointing next.

As a side effect, the criminal infrastructure providers who support both the low level and the more serious cybercrime might also feel the heat.

Offer substantial bounties

We always say that ransomware payments are fuelling the scourge. They provide RaaS gangs with fresh capital to expand their operations and it is a great incentive for further activities in that direction.

So, what about the following: decree by law that if you’re paying a ransom, then you have to pay 10% of the ransom into a bounty fund that incentivises members of ransomware gangs to turn in their accomplices.

Placing bounties on the heads of criminals is a very old idea and has proven effective in creating distrust and betrayal within criminal organisations.

Liability of Service Providers

Criminals routinely abuse the services offered by legitimate companies to further their misdeeds. Right now, the legal environment shields the companies whose services are abused from direct liability regarding the actions of their customers.

Yes, this liability is usually not absolute; often there is a “knowingly”, a “repeatedly”, or a “right to respond to allegations” in the law that absolves the service providers from having to proactively search for illegal activities originating from their customers, or to quickly react to reports of them.

We certainly can have a second look at these provisions.

Not all service providers should be treated the same way: a small ISP offering to host websites has vastly smaller resources to deal with abuse than the hyper-scalers with stock market valuations of billions of euros. The impact of abuse scales in about the same way: a systematic problem at Google is much more relevant than anything a small regional ISP can cause.

Spending the same few percentage points of their respective revenue on countering abuse can give the abuse-handling teams of big operators the necessary punch to really stay on top of abuse on their platform, and to do it 24×7 in real time.

We need to incentivise all actors to take care of the issue.

Search Engine Liability

By using SEO techniques or via simply buying relevant advertisement slots, criminals sometimes manage to lure people looking for legitimate free downloads to fake download sites that offer backdoored versions of the programs that the user is looking for.

Given that this is a very lucrative market for search engine operators, there should be no shortage of resources to deal with such abuse, either proactively or in near real time when it is reported.

And I really mean near real time. Given, e.g., Google’s search engine revenue, it is certainly possible to resolve routine complaints within 30 minutes, with 24×7 coverage. If they are not able to do that, make them both liable for damages caused by their inaction and subject to regulatory fines.

For smaller companies, the response time requirements can be scaled down to levels that even a mom & pop ISP can handle.

Content Delivery Network liability

The same applies to content delivery networks: such CDNs are often abused to shield criminal activities. By hiding behind a CDN, it becomes harder to take down the content at the source, it becomes tricky to just firewall off the sewers of the Internet and even simple defensive measures like blocking JavaScript execution by domain are disrupted if the CDN serves scripts from their domains.

Cloudflare boasts that a significant share of all websites is now served using their infrastructure. Still, they only commit to a 24h reaction time on abuse complaints for things like investment fraud.

With great market-share comes great responsibility.

We really need to forcibly re-adjust their priorities. It might be a feel-good move for libertarians to enable free speech, and sometimes controversial content really needs protection. But Cloudflare is acting like a polluter who doesn’t really care what damage their actions cause to others.

Even in libertarian heaven, good behaviour is triggered by internalizing costs, i.e., by making liabilities explicit.

Webhoster liability

The same applies to the actual hosters of malicious content. In the western world, we need to give webhosters a size-dependent deadline for reacting to abuse reports. For countries that do not manage to create and enforce similar laws, the rest of the world needs to react by limiting the reachability of non-conforming hosters.

Keeping the IT market healthy

Market monopolies are bad for security. They create a uniform global attack surface and distort the security incentives. This applies to the software and hardware/firmware side, to the cloud, as well as to the ISP ecosystem.

What can the military do?

In the physical world, the military is the ultimate deterrent against nation-state transgressions. This is really hard to translate to cyber-security. I mentioned MAD above. It is really tricky: what is the proper form of retaliation? How do we avoid a dangerous escalation of hack, hack-back and hack-back-back?

Or should we relish in the escalation? A colleague recently mentioned that some ransomware gang claimed to have hacked the US Federal Reserve and is threatening to publish terabytes of stolen data. I half joked by replying with “If I were them, I’d start to worry about a kinetic response by the US.”

There are precedents. Some countries are well known to react violently if someone decides to take one of their citizens as hostage. No negotiations. Only retribution with whatever painful means are available.

Some cyber-attacks have an impact similar to violent terrorist attacks; just look at the ripple effects on hospitals in London following the attack on Synnovis. So why should our response portfolio against ransomware actors rule out some of the options we keep open for terrorists?

Free and open vs. closed and secure

Overall, there seem to be two major design decisions that have a major cyber-security impact.

First, the Internet is a content-neutral, global packet-switched network, for which there is only a very limited consensus regarding the rules that its operators and users should adhere to. And there are even fewer global enforcement possibilities for the little rules that we can agree on.

On one hand, this is good. We do not want to live in a world where the standards for the Internet are set and enforced by oppressive regimes. The global reach of the Internet is also a net positive: it is good that there is a global communication network that interconnects all humans. Just as the phone network connects all countries, the global reach of the Internet has the potential to foster communication across borders and can bring humanity together. We want dissidents in Russia and China to be able to communicate with the outside world.

On the other hand, this leads to the effects described in the first section: geography has no meaning on the Internet; thus, we’re importing the shadiest locations of the Internet right into our living rooms.

We simply can’t have both: a global, content agnostic network that reaches everybody on the planet, and a global network where the behaviour that we find objectionable is consistently policed.

The real decision is thus where to compromise: On “global”, by e.g. declining to be reachable from the swamps of the Internet, or on “security”: live with the dangers that arise from this global connectivity.

The important part here is: this is a decision we need to take. Individually, as organisations and, perhaps, as countries.

We face a similar dilemma with our computing infrastructure: The concept of the generic computer, the open operating systems, the freedom to install third-party programs and the availability of accessible programming frameworks plus a wealth of scripting languages are essential for the speed of innovation. A closed computing environment can never be as vibrant and successful.

The ability to run arbitrary new code is a boon for innovation, but it also creates the danger of malicious code being injected into our systems. Retrofitting more control here (application allowlisting, signed applications, strong application isolation, walled-garden app stores, …) can mitigate some of the issues, but will never reach the security properties of a system that was designed to run exactly one application and doesn’t even contain the foundations for running additional code.

Again, there is a choice we need to make: do we prefer open systems with all their dangers, or do we try to nail things down to lower the risks? This does not need to be a global choice: we should probably choose the proper flexibility-vs-security setting depending on the intended use of an IT system. A developer’s box need not have the same setting as a tablet for a nursing home resident.

Technical solutions – just don’t be easily hackable?

In an ideal world, our IT systems would be perfectly secure and would not be easy prey for cyber-criminals and nation-state actors. Yes, any progress in securing our infrastructure is welcome, but we cannot simply rely on this path. Nevertheless, there are a few low-hanging fruits we need to pick:

Default configurations: Networked devices need to come with defaults that are reasonably secure. Don’t expect users to go through all configuration settings to secure a product that they bought. This can be handled via regulation.

Product liability is also an interesting approach. This is not trivial to get right, but certain classes of security issues are so basic that failing to protect against them amounts to gross negligence in 2024. For example, we recently saw several path traversal vulnerabilities in edge-devices sold in 2024 by security companies with more than a billion-dollar market cap. Sorry, such bugs should not happen in this league.

The Cyber Resilience Act is an attempt to address this issue. I have no clue whether it will actually work out well.

While I hope that we will manage to better design and operate our critical IT infrastructure in the future, this is not where I’d put my money. We’ve been chasing that goal for the last 25 years and it hasn’t been working out so great.

We really need to start thinking outside the box.

Categories
CERT Pet Peeves

On Cybersecurity Alert Levels

Last week I was invited to provide some input to a tabletop exercise for city-level crisis managers on cyber security risks and the role of CSIRTs. The organizers brought a color-coded threat-level sheet (based on the CISA Alert Levels) to the discussion and asked whether we also do color-coded alerts in Austria and what I think of these systems.

My answer was negative on both questions, and I think it might be useful to explain my rationale here. The first answer was rather obvious and easy to explain; the second one needed a bit of thinking to be sure why my initial reaction to the document was so negative.

Escalation Ratchet

The first problem with color-coded threat levels is their tendency to become a one-way escalation ratchet: easy to escalate, hard to de-escalate. I've been hit by that mechanism during a real-world incident, which made me wary of the effect. Basically, the person who raises the alert takes very little risk: if something bad happens, she did the right thing, and if the danger doesn't materialize, then "better safe than sorry" is proclaimed and everyone is happy anyway. In other words, raising the threat level is a safe decision.


On the other hand, lowering the threat level is an inherently risky decision: If nothing bad happens afterwards, there might be some “thank you” notes, but if the threat materializes, then the blame falls squarely on the shoulders of the person who gave the signal that the danger was over. Thus, in a CYA-dominated environment like public service, it is not a good career move to greenlight a de-escalation.


We’ve seen this process play out in the non-cyber world over the last years, examples include

  • Terror threat level after 9/11
  • Border controls in the Schengen zone after the migration wave of 2015
  • Coming down from the pandemic emergency

That’s why I’ve always pushed for clear de-escalation rules to be in place whenever we raise the alarm level.

Cost of escalation

For threat levels to make sense, every level above “green” needs to come with a clear indication of what the recipient of the warnings should be doing at that level. In the example I saw, there was a lot of “Identify and patch vulnerable systems”. Well, duh! That’s what you should be doing at level green, too.


Thus, relevant guidance at higher levels needs to be more than “protect your systems and prepare for attacks”. That is a standing order for anyone doing IT operations; as an escalation, it is useless advice. What people need to know is what costs they should be willing to pay for better preparation against incidents.


This could be something simple like “We expect a patch for a relevant system to be released outside our office hours; we need a team on standby to react as quickly as possible, and we’re willing to pay the overtime to have the patch deployed ASAP.” Or the advice could be “You need to patch this outside your regular patching cadence; plan for a business disruption and/or night shifts for the IT people.” At the extreme end, it might even be “We’re taking service X out of production; the changes to the risk equation mean its benefits can’t justify the increased risks anymore.”
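The idea above can be sketched as a playbook that maps each level to concrete, costed actions instead of generic advice. A minimal illustration (the level names, actions, and “service X” are hypothetical, not from any real alert scheme):

```python
# Hypothetical sketch: every level above "green" names actions with a
# real cost attached, not restatements of normal IT hygiene.
PLAYBOOK: dict[str, list[str]] = {
    "green": [
        "regular patching cadence, business hours only",
    ],
    "amber": [
        "patch affected systems outside the regular cadence",
        "authorise overtime so a team is on standby for out-of-hours patches",
    ],
    "red": [
        "deploy emergency patches immediately, accepting business disruption",
        "take service X out of production while the risk outweighs its benefit",
    ],
}

def actions_for(level: str) -> list[str]:
    """Return what the recipient should actually *do* at this level."""
    return PLAYBOOK[level]
```

The test of such a table is simple: if an entry would cost nothing to follow, it belongs at green, not at an elevated level.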


To summarize: if a preventative security measure carries no hard costs, then you should have implemented it a long time ago, regardless of any threat level board.

Counterpoint

There is definitely value in slotting a specific incident or vulnerability into some sort of threat level scheme: a particularly bad patch day, or an out-of-band patch release by an important vendor, is certainly a good reason for a response that goes beyond business as usual.


But a generic threat level increase without concrete vulnerabilities listed or TTPs to guard against? That’s just a fancy way of saying “be afraid”, and there is little benefit in that.

Postscript: Just after posting this article, I stumbled on a fediverse post making almost the same argument, just with April 1st vs. the everyday flood of misinformation.

Categories
Internet Pet Peeves

Kafka Lives at Lassalle 9

I’m probably not the only tech-savvy son / son-in-law / nephew who has to look after the communication technology of the older generation. In that role, I just experienced something sufficiently absurd.

It started with A1 announcing that it would migrate the phone line of an 82-year-old lady to VoIP. The “A1 Kombi” product (POTS + ADSL) that had been running there forever is being discontinued, so we had to switch. OK, that didn’t really come as a surprise: all over Europe, classic analogue telephony is being shut down step by step to finally get rid of the old technology.

So I got to call A1, and since the lady is rather attached to her old phone number, we agreed on a switch to the smallest package that still includes telephony: “A1 Internet 30”. 30/5 Mbit/s sounds quite nice on the phone, so we ordered the switch (CO13906621) on 18 November 2023. But the contract summary you receive by e-mail looks like this:

Since the performance of the old ADSL line had been rather mediocre and unstable (yes, the subscriber line is long), I expected the minimum value instead – a factor of 2 or 5 below what was advertised. The feeling of having been taken for a ride led to this conversation: “Do you really need the old number? Most of your acquaintances in the village can only be reached by mobile anyway.”

OK, so we dropped the “phone with the old number” requirement, exercised the right of withdrawal under the Austrian distance-selling law (Fern- und Auswärtsgeschäfte-Gesetz), and looked for something more sensible. On paper, “A1 Basis Internet 10” sounds right for the needs here, but looking at the service description, only “0.25/0.06 Mbit/s” – i.e. 256 kbit/s down and 64 kbit/s up – is actually guaranteed. Meh. That won’t do, so we cancelled the switch and terminated the old contract at the end of the year – which is also the announced POTS shutdown date.

The withdrawal and the termination were accepted over the phone and confirmed by e-mail.

So far, so good. By now a 4G modem with a flat-rate data plan and a VoIP phone is installed there, and by and large it works fine.

What I did not expect was A1’s next move: the final invoice after the termination, in mid-January, contained the following item:

“Residual fee for early contract termination: € 381”

And since 82-year-olds are sometimes not the best e-mail readers, this was only noticed when the amount was actually debited from the bank account.

This “residual fee” makes no sense in several respects: the “A1 Kombi” contract had been running for more than 10 years, and when placing the initial order I had explicitly asked whether any minimum contract period was still active. And the whole affair only started because A1 is discontinuing “A1 Kombi” – yet now they want to keep billing us for exactly this discontinued product until the end of 2025.

So I called the A1 hotline, assuming this misunderstanding could be cleared up quickly – probably the cancellation of the switch had simply reset the contract start date in their system. How wrong one can be:

  • Over the phone, nothing at all is possible for ex-customers anymore. The guy on the hotline flatly refused to even look at the invoice.
  • You have to file the invoice objection in writing. When I asked for the right e-mail address, the answer was: “That only works via the chatbot.”
  • So I kept telling “Kara” that her answers were not helping until I got a human, to whom I then submitted the written objection via upload.
  • After checking with the RTR arbitration board, we also sent the objection in writing by registered mail.

We have a problem here.

A corporation debits 400+ EUR from a pensioner’s account because of an error in its own billing, and on the phone it flatly refuses to even look at the matter. According to the RTR, they have 4 weeks to respond to the written complaint.

Yes, we could have the bank reverse the direct debit, but then (according to the bank) A1 would quickly involve the KSV credit bureau, and we don’t want that hassle either. Class actions don’t really exist in Austria – guess who lobbies against them. Damages for mistakes like this? Forget it.

As idiotic as US law often is, I really do miss the threat of high punitive damages here. Where is the feedback loop that keeps big companies from turning into a complete service desert?

If I accidentally take the wrong coat from a cloakroom and, when the real owner confronts me, offer nothing but “talk to my chatbot or send me a letter, you’ll get an answer in 4 weeks”, I will end up with a criminal-law problem.

How do we solve something like this in Austria? You work your network. Let’s see how long it takes after this blog post (plus distribution of the link to the right people) until someone in the right place says “this really can’t be true, dear colleagues, fix it now.”

Update 2024-02-09: A small escalation via the A1 press spokesperson helped. The 1000+ contacts on LinkedIn are good for something after all.

Update 2024-04-05: A quick look at the A1 homepage, and what do I see?

Court order

Apparently the VKI had also had enough of how dubiously A1 was advertising its bandwidths.