Categories
CERT Internet Pet Peeves

Roles in Cybersecurity: CSIRTs / LE / others

(Crossposted from the CERT.at blog)

Back in January 2024, I was asked by the Belgian EU Presidency to moderate a panel during their high-level conference on cyber security in Brussels. The topic was the relationship between cyber security and law enforcement: how do CSIRTs and the police / public prosecutors cooperate, what works, and where are the fault lines in this collaboration? As the moderator, I wasn’t in a position to really present my own view on some of the issues, so I’m using this blogpost to document my thinking regarding the CSIRT/LE division of labour. From that starting point, this text kind of turned into a rant on what’s wrong with IT security.

When I got the assignment, I recalled a report I had read years ago: “Measuring the Cost of Cybercrime” by Ross Anderson et al. from 2012. In it, the authors try to estimate the effects of criminal actors on the whole economy: what are the direct losses, and what are the costs of the defensive measures put in place against the threat? The numbers were huge back then, and, as various speakers during the conference mentioned, they have kept rising and rising; the figures for 2024 have reached obscene levels. Anderson et al. write in their conclusions: “The straightforward conclusion to draw on the basis of the comparative figures collected in this study is that we should perhaps spend less in anticipation of computer crime (on antivirus, firewalls etc.) but we should certainly spend an awful lot more on catching and punishing the perpetrators.”

Over the last few years, the EU has proposed and enacted a number of legal acts that focus on preventing, detecting, and responding to cybersecurity threats. Following the original NIS directive from 2016, we are now in the process of transposing and thus implementing the NIS 2 directive with its expanded scope and security requirements. This imposes a significant burden on huge numbers of “essential” and “important” entities, which have to invest heavily in their cybersecurity defences. I failed to find a figure in Euros for this, only the EU Commission’s estimate that entities new to the NIS game will have to increase their IT security budget by 22 percent, whereas the NIS1 “operators of essential services” will have to add 12 percent to their current spending levels. And this isn’t simply CAPEX: there is a huge impact on operational expenses, including manpower and effects on the flexibility of the entity.

This all adds up to a huge cost for companies and other organisations.

What is happening here? We would never ever tolerate that kind of security environment in the physical world, so why do we allow it to happen online?

The physical world

So, let’s look at the playing field in the physical environment and see how the security responsibilities are distributed there:

Defending against low-level crime is the responsibility of every citizen and organisation: you are supposed to lock your doors, you need to screen the people you’re allowing to enter, and the physical defences need to be sensible: your office doesn’t need to be a second Fort Knox, but your fences / doors / gates / security personnel need to be adequate to your risk profile. They should be good enough to either completely thwart normal burglars or at least impose such a high risk on them (e.g., the noise and time required for a break-in) that most of them are deterred from even trying.

One of the jobs of the police is to keep low-level crime from spiraling out of control. They are the backup that is called by entities noticing a crime happening. They respond to alerts raised by entities themselves, their burglar alarms and often their neighbours.

Controlling serious, especially organized crime is clearly the responsibility of law enforcement. No normal entity is supposed to be able to defend itself against Al Capone style gangs armed with submachine guns. This is where even your friendly neighbourhood cop is out of his league and the specialists from the relevant branches of the security forces need to be called in. That doesn’t mean that these things never happen at all: there is organized crime in the EU, and it might take a few years before any given gang is brought under control.

Defending against physical incursions by another country is the job of the military. They have the big guns; they have the training and thus means to defend the country from outside threats. Hopefully, they provide enough deterrence that they are not needed. Additionally, your diplomats and politicians have worked to create an international environment in which no other nation even contemplates invading your country.

We can see here a clear escalation path of physical threats and how the responsibility to deal with them shifts accordingly.

The online world

Does the same apply to cyber threats? And if not, why?

The basics

The equivalent of putting a simple lock on your door is basic cyber hygiene: firewalls, VPNs, shielding management interfaces, spam and malspam filters, decent patch management, as well as basic security awareness training. Hopefully, this is enough to stop being a target of opportunity, where script kiddies or mass exploitation campaigns can just waltz into your network. But there is a difference: the risk of getting caught simply for trying to hack into a network is very low. Thus, these actors can just keep on trying over and over again. Additionally, this can be automated and run on a global scale.
In the real world, intrusion attempts do not scale at all. Every single case needs a criminal on site, which limits the number of tries per night and incurs a risk of being caught at each and every one of them. The result is that physical break-in attempts are rare, whereas cyber break-in attempts are so frequent that the industry has decided that “successful blocks at FW or mail-relay level per day” is no longer a sensible metric for a security solution.

And just forget about reporting these to the police. Not all intrusion attempts are actually malicious (a good part of CERT.at’s data-feeds on vulnerabilities is based on such scans), the legal treatment of such acts is unclear (especially on an international level), and the sheer mass of them overwhelms all law enforcement capabilities. Additionally, these intrusion attempts are usually cross-border, necessitating international police collaboration. The penalties for such activities (malicious scans, sending malspam, etc.) are also often too low to qualify for international efforts.

In the physical world, the perpetrators must be present at the site of their victims. We’re not yet at the stage where thieves and burglars send remote-controlled drones to break into houses and steal valuables there – unless you count the use of hired and expendable low-level criminals as such. There is thus no question about jurisdiction and the ability of the local police to actually enforce the law. Collecting clues and evidence might not always be easy, and criminals fleeing the country before being caught is a common trope in crime literature; nevertheless, there is a real possibility that the police can successfully track down and then arrest the criminals.

The global nature of the Internet changes all this. As the saying goes: there is no geography on the Internet, everyone is a direct neighbour to everybody else. Just as any simple website is open to visitors from all over the world, it can be targeted by criminals from all over the globe. There is no need for the evil hackers to be on the same continent as their targets, let alone in the same jurisdiction. Thus, even if the police can collect all the necessary evidence to identify the perpetrators, it cannot just grab them off the street – they might be far out of reach of the local law enforcement.

And another point is different: usually, physical security measures are quite static. There is no monthly patch-day for your doors. I can’t recall any situation where a vendor of safes or locks had to issue an alert to all customers that they have to upgrade to new cylinders because a critical vulnerability was found in the current version (although watching LPL videos is a good argument that they should start doing that). Recent reports on vulnerabilities in keyless fobs for unlocking cars show that the lines between these worlds are starting to blur.

Organized crime

What about serious, organized crime? The online equivalent of a mob boss is a “Ransomware as a Service” (RaaS) group: they provide the firepower, they create an efficient ecosystem of crime and they make it easier for low-level miscreants to start their criminal careers. Examples are Locky, REvil, DarkSide, LockBit, Cerber, etc. Yes, sometimes law enforcement, through long-running international collaborations between agencies, is able to crack down on larger crime syndicates. Those take-downs vary in their effectiveness. In some cases, the police manage to get hold of the masterminds, but often enough they just get lower- or mid-level people and some of the technical infrastructure, leading to just a temporary reprieve for the victims of the RaaS shop.

Two major impediments to the effectiveness of these investigations are the global nature of such gangs, which requires truly global LE collaboration, and the ready availability of attack infrastructure: compromised systems to abuse and malicious ISPs who don’t police their own customers. Any country whose police force is not cooperating effectively creates a safe refuge for the criminals. The current geopolitical climate is not helpful at all. Right now, there simply is no incentive for Russian law enforcement to help their western colleagues by arresting Russian gangs targeting EU or US entities. Bullet-proof hosters are similar: they rent criminals the infrastructure from which to launch attacks. And often enough the perpetrators simply use the infrastructure of one of their victims to attack the next.

The end result is that serious cybercrime is rampant. Companies and other organisations must defend themselves against well-financed, experienced, and capable threat actors. As it is, law enforcement is not able to lower the threat level enough to take that responsibility away from the operators.

Nation states

The next escalation step is nation state attackers. They come in (at least) two types: espionage and disruption.

Espionage is nothing new; the employment of spies traces back to the ancient world. But just as with cybercrime, in the new online world it is no longer necessary to send agents on dangerous missions into foreign countries. No, a modern spy has a 9-to-5 desk job in a drab office building where the highest risk to his personal safety is a herniated vertebral disc caused by unergonomic desks and chairs.

It’s been rare, but cyber-attacks with the aim of causing real-world disruption have appeared over the last ten years, especially in the Russia/Ukraine context. The impact can be similar to ransomware: the IT systems are disabled and all the processes supported by those systems will fail. The main difference is that you can’t simply buy your way out of a state-sponsored disruptive attack. There have been cases where the attackers tried to inflict physical damage on either the IT systems (the bricking of PCs in the Aramco attack) or the machinery controlled by industrial control systems.

This is a frustrating situation. We’re in a defensive mode, trying to block and thwart attack after attack from well-resourced adversaries. As recent history shows, we are not winning this fight – cybercrime is rampant and state-sponsored APTs are running amok. Even if one organisation manages to secure its own network, the tight interconnectedness with and dependency on others will leave it exposed to supply chain risks.

What can we do about this?

Such a situation reminds me of the old proverb: “if you can’t win the game, change the rules”. I simply do not see a simple technical solution to the IT security challenge. We’ve been sold these often enough under various names (firewalls, NGFW, SIEMs, AV, EDR, SOAR, cloud-based detection, sandboxes to detect malicious e-mail, …) and while all these approaches have some value, they are fighting the symptoms, not the cause of the problem.

There certainly are no simple solutions, and certainly none without significant downsides. I’m thus not proposing that the following ideas need to be implemented tomorrow. This article is just supposed to move the Overton Window and start a discussion outside the usual constraints.

So, what ideas can I come up with?

Really invest in Law Enforcement

The statistics show every year that cyber-crime is rising. This is followed by a ritual proclamation by the minister in charge that we will strengthen the police force tasked with prosecuting cyber-crime. The follow-through just isn’t there. Neither the police nor the judiciary is in any way staffed to really make a dent in cybercrime as a whole.

They are fighting a defensive war, happy with every small victory they can get, but overall they are simply not staffed at a level where they really could make a difference.

Denial of safe havens

Criminals and other attackers need some infrastructure from which to stage their attacks. Why do we tolerate this? Possible avenues for change are:

  • Revisit the laws that shield ISPs from liabilities regarding misbehaving customers. This does not need to be a complete reversal, but there need to be clear and strong incentives not to allow customers to stage attacks from an ISP’s network. See below for more details.
  • And on the other side, refuse to route the network blocks of ISPs who are known to tolerate criminals on their network. Back on Usenet, this was called the “UDP – Usenet Death Penalty”: if a site didn’t police its own users’ misbehaviour on this global discussion forum, other sites would decide not to accept any articles from that cesspool any more.

The aim must be the end of “bulletproof” hosters. There have been prior successes in this area, but we can certainly do better on a global scale.

Don’t spare abused systems

Instead of renting infrastructure from bulletproof hosting outfits, the criminals often hack into an unrelated organisation and then abuse its systems to stage attacks. Abused systems range from simple C2 proxies on compromised websites, DDoS amplifiers and accounts for sending spam mails to elaborate networks of proxies on compromised CPEs.

These days, we politely warn the owners of the abused devices and ask them nicely to clean up their infrastructure.

We treat them as victims, and not as accomplices.

Maybe we need to adjust that approach.

Mutual assured cyber destruction

As bad as the cold war was, the concept of mutual assured destruction managed to deter the use of nuclear weapons for over 70 years. Right now, there is no functioning deterrence on the Internet.

I can’t say what we need to do here, but we must create a significant barrier to the employment of cyberattacks. Right now, most offensive cyber activities are considered “trivial offences”, maybe worth a few sternly worded statements, but nothing more. The EU Cyber Diplomacy Toolbox is a step in that direction, but is still rather harmless in its impact.

We can and should do more.

Broken Window Theory

From Wikipedia: “In criminology, the broken windows theory states that visible signs of crime, antisocial behavior, and civil disorder create an urban environment that encourages further crime and disorder, including serious crimes.”

To put this bluntly: As we haven’t managed to solve the Spam E-mail problem, why do we think we can tackle the really serious crimes?

Thus, one possible approach is to set aside some investigative resources in the law enforcement community to go after the low-level, but very visible criminals. Take, for example, the long-running spam waves promoting ED pills. Tracking the spam source might be hard, but there is a clear money trail on the payment side. This should be an eminently solvable problem. Track those gangs down, make an example out of them and let every other criminal guess where the big LE guns will be pointing next.

As a side effect, the criminal infrastructure providers who support both the low level and the more serious cybercrime might also feel the heat.

Offer substantial bounties

We always say that ransomware payments are fuelling the scourge. They provide RaaS gangs with fresh capital to expand their operations and are a great incentive for further activities in that direction.

So, what about the following: decree by law that if you’re paying a ransom, then you have to pay 10% of the ransom into a bounty fund that is used to entice members of the ransomware gangs to turn in their accomplices.

Placing bounties on the heads of criminals is a very old idea and has proven to be effective at creating distrust and betrayal within criminal organisations.

Liability of Service Providers

Criminals are routinely abusing the services offered by legitimate companies to further their misdeeds. Right now, the legal environment shields the companies whose services are abused from direct liability regarding the actions of their customers.

Yes, this liability shield is usually not absolute; often there is a “knowingly” or “repeatedly” or a “right to respond to allegations” in the law that absolves the service providers from having to proactively search for, or quickly react to reports of, illegal activities originating from their customers.

We certainly can have a second look at these provisions.

Not all service providers should be treated the same way: a small ISP offering to host websites has vastly smaller resources to deal with abuse than a hyper-scaler with a stock market valuation in the billions of Euros. The impact of abuse scales about the same way: a systematic problem at Google is much more relevant than anything a small regional ISP can cause.

Spending the same few percentage points of their respective revenue on countering abuse can give the abuse-handling teams of big operators the necessary punch to really be on top of abuse on their platform, and to do it 24×7 in real time.

We need to incentivise all actors to take care of the issue.

Search Engine Liability

By using SEO techniques, or simply by buying relevant advertisement slots, criminals sometimes manage to lure people looking for legitimate free downloads to fake download sites that offer backdoored versions of the programs the users are looking for.

Given the fact that this is a very lucrative market for search engine operators, there should be no shortage of resources to deal with this abuse, either proactively or in near real time when cases are reported.

And I really mean near real time. Given, for example, Google’s search engine revenue, it is certainly possible to resolve routine complaints within 30 minutes, with 24×7 coverage. If they are not able to do that, make them both liable for damages caused by their inaction and subject to regulatory fines.

For smaller companies, the response time requirements can be scaled down to levels that even a mom & pop ISP can handle.

Content Delivery Network liability

The same applies to content delivery networks: such CDNs are often abused to shield criminal activities. By hiding behind a CDN, it becomes harder to take down the content at the source, it becomes tricky to just firewall off the sewers of the Internet, and even simple defensive measures like blocking JavaScript execution by domain are disrupted if the CDN serves scripts from its own domains.

Cloudflare boasts that a significant share of all websites is now served using their infrastructure. Still, they only commit to a 24h reaction time on abuse complaints for things like investment fraud.

With great market-share comes great responsibility.

We really need to forcibly re-adjust their priorities. It might be a feel-good move for libertarians to enable free speech, and sometimes controversial content really does need protection. But Cloudflare is acting like a polluter who doesn’t really care what damage their actions cause to others.

Even in libertarian heaven, good behaviour is achieved by internalising costs, i.e., by making liabilities explicit.

Webhoster liability

The same applies to the actual hosters of malicious content. In the western world, we need to give webhosters a size-dependent deadline for reacting to abuse reports. For the countries that do not manage to create and enforce similar laws, the rest of the world needs to react by limiting the reachability of non-conforming hosters.

Keeping the IT market healthy

Market monopolies are bad for security. They create a uniform global attack surface and distort the security incentives. This applies to software, to hardware/firmware, to the cloud, and to the ISP ecosystem.

What can the military do?

In the physical world, the military is the ultimate deterrence against nation state transgressions. This is really hard to translate to cyber security. I mentioned MAD above. This is really tricky: what is the proper way to retaliate? How do we avoid a dangerous escalation of hack, hack-back and hack-back-back?

Or should we relish the escalation? A colleague recently mentioned that some ransomware gang claimed to have hacked the US Federal Reserve and was threatening to publish terabytes of stolen data. I half-joked by replying with “If I were them, I’d start to worry about a kinetic response by the US.”

There are precedents. Some countries are well known to react violently if someone decides to take one of their citizens as hostage. No negotiations. Only retribution with whatever painful means are available.

Some cyber-attacks have an impact similar to violent terrorist attacks; just look at the ripple effects on hospitals in London following the attack on Synnovis. So why should our response portfolio against ransomware actors rule out some of the options we keep open for terrorists?

Free and open vs. closed and secure

Overall, there seem to be two major design decisions that have a major cyber security impact.

First, the Internet is a content-neutral, global packet-switched network, for which there is only a very limited consensus regarding the rules that its operators and users should adhere to. And there are even fewer global enforcement possibilities for the few rules that we can agree on.

On one hand, this is good. We do not want to live in a world where the standards for the Internet are set and enforced by oppressive regimes. The global reach of the Internet is also a net positive: it is good that there is a global communication network that interconnects all humans. Just as the phone network connects all countries, the global reach of the Internet has the potential to foster communication across borders and can bring humanity together. We want dissidents in Russia and China to be able to communicate with the outside world.

On the other hand, this leads to the effects described in the first section: geography has no meaning on the Internet; thus, we’re importing the shadiest locations of the Internet right into our living rooms.

We simply can’t have both: a global, content agnostic network that reaches everybody on the planet, and a global network where the behaviour that we find objectionable is consistently policed.

The real decision is thus where to compromise: on “global”, by e.g. declining to be reachable from the swamps of the Internet, or on “security”, by living with the dangers that arise from this global connectivity.

The important part here is: this is a decision we need to take. Individually, as organisations and, perhaps, as countries.

We face a similar dilemma with our computing infrastructure: The concept of the generic computer, the open operating systems, the freedom to install third-party programs and the availability of accessible programming frameworks plus a wealth of scripting languages are essential for the speed of innovation. A closed computing environment can never be as vibrant and successful.

The ability to run arbitrary new code is a boon for innovation, but it also creates the danger of malicious code being injected into our systems. Retrofitting more control here (application allowlisting, signed applications, strong application isolation, walled-garden app stores, …) can mitigate some of the issues, but will never reach the security properties of a system that was designed to run exactly one application and doesn’t even contain the foundations for running additional code.

Again, there is a choice we need to make: do we prefer open systems with all their dangers, or do we try to nail things down to lower the risks? This does not need to be a global choice: we should probably choose the proper flexibility vs. security setting depending on the intended use of an IT system. A developer’s box need not have the same settings as a tablet for a nursing home resident.

Technical solutions – just don’t be easily hackable?

In an ideal world, our IT systems would be perfectly secure and would not be easy prey for cyber-criminals and nation state actors. Yes, any progress in securing our infrastructure is welcome, but we cannot simply rely on this path. Nevertheless, there are a few low-hanging fruits we need to pick:

Default configurations: Networked devices need to come with defaults that are reasonably secure. Don’t expect users to go through all configuration settings to secure a product that they bought. This can be handled via regulation.

Product liability is also an interesting approach. This is not trivial to get right, but certain classes of security issues are so basic that failing to protect against them amounts to gross negligence in 2024. For example, we recently saw several path traversal vulnerabilities in edge-devices sold in 2024 by security companies with more than a billion-dollar market cap. Sorry, such bugs should not happen in this league.

The Cyber Resilience Act is an attempt to address this issue. I have no clue whether it will actually work out well.

While I hope that we will manage to better design and operate our critical IT infrastructure in the future, this is not where I’d put my money. We’ve been chasing that goal for the last 25 years and it hasn’t been working out so great.

We really need to start thinking outside the box.

Categories
CERT Pet Peeves

On Cybersecurity Alert Levels

Last week I was invited to provide some input to a tabletop exercise for city-level crisis managers on cyber security risks and the role of CSIRTs. The organizers brought a color-coded threat-level sheet (based on the CISA Alert Levels) to the discussion and asked whether we also do color-coded alerts in Austria and what I think of these systems.

My answer was negative to both questions, and I think it might be useful if I explain my rationale here. The first was rather obvious and easy to explain; the second one needed a bit of thinking to figure out why my initial reaction to the document was so negative.

Escalation Ratchet

The first problem with color-coded threat levels is their tendency to be a one-way escalation ratchet: easy to escalate, but hard to de-escalate. I’ve been hit by that mechanism before during a real-world incident and that led me to be wary of that effect. Basically, the person who raises the alert takes very little risk: if something bad happens, she did the right thing, and if the danger doesn’t materialize, then “better safe than sorry” is proclaimed, and everyone is happy, nevertheless. In other words, raising the threat level is a safe decision.


On the other hand, lowering the threat level is an inherently risky decision: If nothing bad happens afterwards, there might be some “thank you” notes, but if the threat materializes, then the blame falls squarely on the shoulders of the person who gave the signal that the danger was over. Thus, in a CYA-dominated environment like public service, it is not a good career move to greenlight a de-escalation.


We’ve seen this process play out in the non-cyber world over the last few years; examples include:

  • Terror threat level after 9/11
  • Border controls in the Schengen zone after the migration wave of 2015
  • Coming down from the pandemic emergency

That’s why I’ve always been pushing for clear de-escalation rules to be put in place whenever we do raise the alarm level.

Cost of escalation

For threat levels to make sense, any level above “green” needs to come with a clear indication of what the recipient of the warning should be doing at that threat level. In the example I saw, there was a lot of “Identify and patch vulnerable systems”. Well, Doh! This is what you should be doing at level green, too.


Thus, relevant guidance at higher levels needs to be more than “protect your systems and prepare for attacks”. That’s a standing order for anyone doing IT operations; it is useless advice as an escalation. What people need to know is what costs they should be willing to pay for better preparation against incidents.


This could be a simple thing like “We expect a patch for a relevant system to be released outside our office hours; we need to have a team on standby to react as quickly as possible, and we’re willing to pay for the overtime work to have the patch deployed ASAP.” Or the advice could be “You need to patch this outside your regular patching cadence; plan for a business disruption and/or night shifts for the IT people.” At the extreme end, it might even be “we’re taking service X out of production; the changes to the risk equation mean that its benefits can’t justify the increased risks anymore.”


To summarize: if a preventative security measure has no hard costs, then you should have implemented it a long time ago, regardless of any threat level board.

Counterpoint

There is definitely value in categorizing a specific incident or vulnerability in some sort of threat level scheme: A particularly bad patch day, or some out-of-band patch release by an important vendor certainly is a good reason that the response to the threat should also be more than business-as-usual.


But a generic threat level increase without concrete vulnerabilities listed or TTPs to guard against? That’s just a fancy way of saying “be afraid” and there is little benefit in that.

Postscript: Just after posting this article, I stumbled on a fediverse post making almost the same argument, just with April 1st vs. the everyday flood of misinformation.

Categories
Internet Pet Peeves

Kafka lives at Lassalle 9

I’m probably not the only tech-savvy son / son-in-law / nephew who has to look after the communication technology of the older generation. In that role, I have just experienced something sufficiently absurd.

It started with A1 announcing that it wants to migrate the phone line of an 82-year-old lady to VoIP. The “A1 Kombi” product (POTS + ADSL) that has been running there since forever is being discontinued, so we have to switch. OK, that did not really come as a surprise: all over Europe, classic analogue telephony is being switched off step by step in order to finally get rid of the old technology.

So I get to call A1, and because the lady is rather attached to her old phone number, we agree on a switch to the smallest package that still includes telephony: “A1 Internet 30”. 30/5 Mbit/s sounds quite nice on the phone, so we order the switch (CO13906621) on 18.11.2023. But if you read the contract summary that you receive by e-mail, it looks like this:

Since the performance of the old ADSL line was rather mixed and unstable anyway (yes, the TASL – the local loop – is long), I expect to get more like the minimum value, which is a factor of 2 or 5 less than advertised. The feeling of having been taken for a ride here leads to the conversation: “Do you really need the old number? Most of your acquaintances in the village can only be reached by mobile phone these days anyway.”

OK, so we drop the requirement “phone with the old number”, exercise the right of withdrawal under the Fern- und Auswärtsgeschäfte-Gesetz (the Austrian distance-selling act) and look around for something more sensible. In principle, “A1 Basis Internet 10” sounds appropriate for the needs here, but if you look at the service descriptions, only “0.25/0.06 Mbit/s”, i.e. 256 kbit/s down and 64 kbit/s up, is actually guaranteed. Meh. That’s not going to work, so we cancelled the switch and terminated the old contract as of the end of the year – which is also the announced POTS shutdown date.

The withdrawal and the termination were accepted over the phone and also confirmed by e-mail.

So far, so good. In the meantime, a 4G modem with a flat-rate data plan and a VoIP phone is running there, which by and large works well.

But I had not expected A1’s next move: the final invoice after the termination, from mid-January, contained the following item:

"Restgeld für vorzeittige Vertragsauflösung: 381 €

And since 82-year-olds are sometimes not the best e-mail readers, this was only noticed when the amount was actually debited from the account.

This “residual fee” makes no sense in several respects: the “A1 Kombi” contract has been running for more than 10 years, and when placing the initial order I had also asked whether any contract lock-in periods were active. And the whole thing only started because A1 is discontinuing “A1 Kombi”, yet now they want to keep charging us for exactly this discontinued product until the end of 2025.

So I call the A1 hotline, assuming that this misunderstanding can be cleared up quickly; probably the cancellation of the switch simply reset the contract start date in their system. How wrong one can be:

  • For ex-customers, absolutely nothing can be done over the phone any more. The guy on the hotline flat-out refused to even look at the invoice.
  • The invoice dispute has to be filed in writing. When I asked for the right e-mail address for that, the answer was: “That only works via the chatbot.”
  • So I kept telling “Kara”, the chatbot, that her answers weren’t helping me until I got a human, to whom I then submitted the written dispute via upload.
  • After checking with the RTR arbitration body, we also sent the dispute in writing by registered mail.

We have a problem here.

A corporation debits 400+ EUR from a pensioner’s account because of an error in their own billing, and on the phone they completely refuse to even look at it. According to the RTR, they have 4 weeks to respond to the written complaint.

Yes, we could have the bank reverse the direct debit, but then (according to the bank) A1 would quickly be at the KSV credit agency, and we don’t want that hassle either. Class actions don’t really exist in Austria. Guess who lobbies against them. Damages for errors like these? Nope.

As idiotic as US law often is, I really do miss the threat of hefty “punitive damages”. Where is the feedback loop that keeps the big companies from turning into a complete customer-service wasteland?

If I accidentally take the wrong coat from a cloakroom and, when the rightful owner confronts me about it, reply only with “talk to my chatbot or send me a letter, you’ll get an answer in 4 weeks”, then I will have a problem with criminal law.

How do we solve something like this in Austria? You work your network. Let’s see how long it takes after this blog post (plus distribution of the link to the right people) until someone in the right place says “this really can’t be happening, dear colleagues, fix this now.”

Update 2024-02-09: A small escalation via the A1 press spokesperson helped. The 1000+ contacts on LinkedIn are good for something after all.

Update 2024-04-05: I take a quick look at the A1 homepage, and what do I see?

A court ruling.

Apparently the VKI (the Austrian consumer protection organisation) had also had enough of how dubiously A1 was advertising its bandwidths.

Categories
Pet Peeves Windows 10

Win10: controlled folder access

I’ve enabled controlled folder access on my work Windows 10 machine.  Now it is giving me notifications like:

[Screenshot: controlled folder access / ransomware protection notification]

The idea is fine, and of course I need to fine-tune the settings (and one click brings me to the relevant settings page), but how the §$%& should I know which program to white-list? I cannot find a way to get the full path of the offending program.

I need to search.

Why is there no “Application ‘full path’ is trying to make changes: whitelist y/n” dialogue?
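If I read the Microsoft Defender documentation correctly, the information is at least logged somewhere: blocked attempts should show up as event ID 1123 in the “Microsoft-Windows-Windows Defender/Operational” event log, including the path of the offending process. Below is a minimal sketch of how one could dig that path out and then whitelist the program – the event ID, the log name and the Add-MpPreference cmdlet are my assumptions from the documentation, not something the notification itself tells you; run it in an elevated prompt.

# Sketch: list the most recent "controlled folder access blocked" events
# (assumed to be Defender event ID 1123) to find the full path of the
# blocked program. Assumes Windows 10 with Defender; run elevated.
import subprocess

LOG = "Microsoft-Windows-Windows Defender/Operational"
QUERY = "*[System[(EventID=1123)]]"  # 1123: controlled folder access block

result = subprocess.run(
    ["wevtutil", "qe", LOG, f"/q:{QUERY}", "/c:10", "/rd:true", "/f:text"],
    capture_output=True, text=True, check=True,
)
# The event message contains the path of the blocked process; print the raw
# events and pick out the offending executable manually.
print(result.stdout)

# Once the path is known, the program can be allow-listed from an elevated
# PowerShell (again, an assumption based on the documented cmdlet):
#   Add-MpPreference -ControlledFolderAccessAllowedApplications "C:\path\to\app.exe"

Still: none of this should be necessary – the notification itself should show the path.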

Categories
Pet Peeves Windows 10

Windows 10 peeves

I’m not a computer newbie. Neither am I a first-time Windows user.

Windows 10 has proved to be a very mixed bag for me. Some things are very clever and nicely done, and then there are a bunch of “what the f*ck were they thinking” moments for me.

I’ve now added the category “Windows 10” to this blog to keep track of my peeves and my workarounds.

 

Categories
CERT Internet Pet Peeves

Of Ads and Signatures

Both advertisements and signatures have been with us in the analogue realm for ages, and society has learned about their usefulness and limits. We have learned that it’s (usually) hard to judge the impact of a single ad, and that the process of actually validating a contested signature is not trivial. There is a good reason why the law requires more than a single signature to authorize the transfer of real estate.

Both concepts have been translated to the digital, online world.

And in both cases, there were promises that the new, digital and online versions of ads/signatures can deliver features that their old, analogue counterparts could not.

For ads, it was the promise of real-time tracking of their effectiveness. You could measure “clickthrough rates”: how often the ad was actually clicked by a viewer. The holy grail of ad-effectiveness tracking is the fabled “conversion rate”: how many people actually bought your product after clicking on an ad.
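As a toy illustration of the two metrics (the numbers are made up):

impressions = 100_000  # how often the ad was shown
clicks = 150           # how often it was clicked
purchases = 3          # how many of those clicks ended in a sale

ctr = clicks / impressions        # clickthrough rate: 0.15 %
conversion = purchases / clicks   # conversion rate: 2 %
print(f"CTR: {ctr:.2%}, conversion rate: {conversion:.2%}")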

For signatures, it was the promise of automated validation. If you get a digitally signed document, you should be able to actually verify (and have 100% trust in the result) whether it was really signed by the signatory. Remember: in the analogue world, almost nobody actually does this. The lay person can detect crude forgeries, but even that only if the recipient has access to samples of authentic signatures. In reality, a closer inspection of handwritten signatures is only done for important transactions, or in the case of a dispute.
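To make the “automated validation” promise concrete: the purely cryptographic check is indeed easy to automate. A minimal sketch, assuming Python’s cryptography package, a detached RSA signature over the raw document bytes, and a public key you already trust (all illustrative assumptions, not any specific signature product):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Load the (already trusted) public key, the document, and the signature.
with open("signer_pub.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())
with open("document.pdf", "rb") as f:
    document = f.read()
with open("document.sig", "rb") as f:
    signature = f.read()

try:
    # Raises InvalidSignature if the signature does not match the document.
    public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
    print("Signature matches this public key.")
except InvalidSignature:
    print("Signature does NOT match.")

The hard part is everything around that call: establishing that the key really belongs to the claimed signatory, that it hasn’t been revoked, and what the signature legally means – which is exactly where the over-promising starts.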

So how did things work out after a few years of experience with digital ads and signatures?

Digital ads are in a midlife crisis. We’re in a death spiral of low clickthrough rates, more obnoxious formats, ad-blockers and ad-blocker-blockers. Just look at the emergence of Taboola and similar click-bait.
Digital signatures are at a crossroads as well: the take-up rates of solutions based on smart-card readers have been underwhelming so far. This applies to the German ePerso as well as to the Austrian citizen card. The usability just isn’t there. So there has been a push to increase the take-up rate by introducing alternatives to smart-card technology. That won’t make it more secure. Not at all.

So what’s the common lesson? In theory, both digital ads and signatures can offer features that their old, analogue counterparts just cannot deliver. In practice, they are killing themselves by over-promising. As long as click-through rates are the prime measurement of online ads, their death spiral will continue. And as long as digital signatures continue to promise instant, high-confidence validation, they will not achieve the take-up rates needed for broad acceptance.

Continuing the current trajectory will lead to a hard crash for both technologies. On one side it’s the ad-blocker, on the other side it’s malware.

The level of spam in the email ecosystem has meant that no mail service can exist in the market without a built-in spam-filtering solution. And given the early ad excesses, every browser includes a pop-up blocker these days. If the advertisers continue on their path of getting attention at all costs, ad-blocking will become a must-have feature in all browsers, and not just an optional add-on (as it is right now).

Any digital security solution that protects actual money has been under vigorous attack. The cat and mouse game between online-banking defenders and attackers is a good lesson. The same will happen if digital signature solutions start to be actually relevant. And good luck if your “let’s make digital signatures more user-friendly” approach is actually less secure than what online-banking is using these days.

So what’s the solution?

In my opinion the right way to approach both topics is to reduce the promise. Make digital ads static images. No animation. No dynamic loading of js-code (which is its own security nightmare). Don’t overtax the visitor’s resources (bandwidth, browser performance). No tracking. Tone it down. Don’t expect instant effects. Don’t promise click-through rates.

For digital signatures: mass deployment is only possible for “non-qualified signatures”. Don’t promise “you can fully and solely rely on our solution”. Just sell “this is a good indication”, or “use this as one factor in your security design”. Prepare for it to be attacked and broken. Only use it when you have a pre-planned way to recover from such a breach. The real world is full of applications where signatures are used in very low-security / low-impact settings. The state-sponsored digital identity solutions need to think of those, too. For the high-impact, high-confidence settings I always have to think about the mantra we use at work: “You can’t mandate trust.”

Categories
CERT Pet Peeves

The Edge browser

Wasn’t one of the main goals of junking the Internet Explorer codebase and building a brand-new browser, “Edge”, the hope that there would no longer be a monthly batch of patches for remote code execution vulnerabilities?

I haven’t tabulated the advisories but somehow I don’t have the feeling that things have gotten substantially better.

Why?

It looks to me like we still aren’t using the right programming environments for such complex pieces of software. There is still way too much basic security work the programmers have to do by themselves. Just like you shouldn’t do string operations in pure ANSI C, we need to raise the level of abstraction so that all these browser bugs (that lead to RCE) are just not possible any more.

Categories
Pet Peeves

StartSSL, S/MIME and Thunderbird

This cost me an hour or two:

Getting a free S/MIME certificate from StartCom / StartSSL worked fine, and in Firefox the certificate was shown as valid. But once I transferred it to Thunderbird, I got an unspecified certificate error.

Solution: Turn off OCSP.

Categories
Pet Peeves

Positive Surprise …

Wow, they put up new ticket vending machines at Brussels Central train station.

So instead of accepting only Belgian-only cards, they finally work with international cards (Maestro, Mastercard/Visa), too.

Progress indeed. Welcome to 2014.

Categories
CERT Internet Pet Peeves

Adobe Flash Updates

Today we’ve seen yet another Adobe Flash Player update due to serious problems. So be it.

One thing always sets my teeth on edge when doing/verifying it: the number of clicks it takes me to check the version of the currently running Flash plugin. It would be far too simple if the download page (which is prominently linked everywhere) just told me whether I need to upgrade. No, I have to click on “Learn more about Flash Player”, then “Product Page”, then “FAQ”, then scroll down to “How can I tell if I have the latest version of Flash Player installed and whether it is working correctly?”, click to reveal the answer, then click “Testing page”.

And then I have to manually compare the version shown above with the latest version listed by operating system and browser beneath.

What the fscking hell is Adobe thinking? They’ve been a top reason for PC infections for years (Flash + Acrobat) and still they don’t tell their web visitors quickly and efficiently whether they need to update.

Can someone please administer the necessary clues with a suitable LART?