The EU is asking for feedback regarding the Implementing Acts that define some of the details of the NIS2 requirements with respect to reporting thresholds and security measures.
I didn’t have time for a full word-for-word review, but I took some time today to give some feedback. For whatever reason, the EU site does not preserve the paragraph breaks in the submission, leading to a wall of text that is hard to read. Thus I’m posting the text here for better readability.
Feedback from: Otmar Lendl
We will have an enormous variation in the size of the relevant entities, ranging from a two-person web-design and hosting team that also hosts its customers’ domains to large multinational companies. Recitals (4) and (5) are a good start, but they are not enough.
The only way to make this workable is to emphasise the principle of proportionality and the risk-based approach. This can be done either by clearly stating that these principles can override every single item listed in the Annex, or by consistently using such language throughout the list of technical and methodological requirements.
Right now, there is good language in several points, e.g., 5.2. (a) “establish, based on the risk assessment”, 6.8.1. “[…] in accordance with the results of the risk assessment”, 10.2.1. “[…] if required for their role”, or 13.2.2. (a) “based on the results of the risk assessment”.
The lack of such qualifiers in other points could be read to mean that these considerations do not apply there. The text needs to clearly pre-empt such a reading.
Along the same lines: exhaustive lists (examples in 3.2.3, 6.7.2, 6.8.2, 13.1.2.) could lead auditors into a blind box-ticking exercise that leaves no room for entities to diverge based on their specific risk assessment.
A clear statement of the security objective before each list of measures would also help guide entities and their auditors in making a risk-based assessment of each measure’s relevance in the concrete situation. For example, some of the points in the Annex are specific to Windows-based networks (e.g., 11.3.1. and 12.3.2. (b)) and are not applicable to other environments.
As the CrowdStrike incident of July 19th showed, recital (17) and the text in 6.9.2. are very relevant: there are often counter-risks to evaluate when deploying a security control. Again, there must be clear guidance for auditors to also follow a risk-based approach when evaluating compliance.
The text should encourage the adoption of standardised policies: there is no need to re-invent the wheel for every single entity, especially the smaller ones.
Article 3 (f) is unclear; it would be better to split it into two items, e.g.:
(f1) a successful breach of a security system was detected that led to unauthorised access to sensitive data [at systems of the entity] by an external, suspected malicious actor.
(Reason: a lost password should not trigger a mandatory report; using a design flaw or an implementation error to bypass protections and access sensitive data should.)
(f2) a sustained “command and control” communication channel was detected that gives a suspected malicious actor unauthorised access to internal systems of the entity.