German Federal IT security authority publishes guidelines for AI developers
The German Federal Office for Information Security (BSI) already provides support on the subject of artificial intelligence through a whole series of publications (some of them also available in English).
It is therefore all the more welcome that the BSI has now also published a guideline addressing how developers can practically protect machine learning systems against the most relevant threats and take adequate protective measures.
In its guideline, the BSI distinguishes between three central threats: evasion attacks, attacks that aim to extract information, and poisoning or backdoor attacks. These attacks are briefly presented and illustrated below.
Evasion Attacks
Once the AI has been fed enough training data, the so-called inference phase follows. This is, so to speak, the application phase, in which the AI is ready to search large amounts of data for meaningful patterns. In an evasion attack, the attacker attempts a disguised (evasive) attack during the inference phase by crafting a malicious input that causes the machine learning model to recognize incorrect patterns.
A practical example would be a machine learning model that sorts incoming e-mails into spam and non-spam (i.e. benign e-mails). Through such deliberate misclassification, the attacker could force benign e-mails to be classified as spam, or a malicious e-mail to go undetected.
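The spam example can be sketched in a few lines of Python. The keyword filter, the substitution table and all names below are illustrative assumptions for a toy classifier, not part of the BSI guideline:

```python
# Minimal sketch of an evasion attack on a hypothetical keyword-based
# spam filter. The blacklist and substitutions are illustrative only.

SPAM_KEYWORDS = {"lottery", "winner", "free"}

def is_spam(email: str) -> bool:
    """Toy classifier: flags an e-mail if it contains a blacklisted word."""
    return any(word in SPAM_KEYWORDS for word in email.lower().split())

def evade(email: str) -> str:
    """Attacker-side perturbation: obfuscate keywords so they no longer
    match the filter's vocabulary while staying readable for humans."""
    substitutions = {"free": "fr3e", "winner": "w1nner", "lottery": "l0ttery"}
    return " ".join(substitutions.get(w, w) for w in email.lower().split())

malicious = "you are the lottery winner claim your free prize"
print(is_spam(malicious))         # -> True  (detected)
print(is_spam(evade(malicious)))  # -> False (evades detection)
```

Real-world evasion attacks perturb inputs far more subtly (e.g. imperceptible pixel changes for image classifiers), but the principle is the same: a small, targeted change to the input flips the model's decision.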
Information Extraction Attacks
As mentioned previously, the first step in using machine learning models is to feed them with training data so that the model understands "what the data is actually about".
With reference to the previous example, a machine learning model can first be trained to understand what an e-mail and spam actually are and how to recognize them. These training e-mails may, of course, contain personal data or trade secrets. An information extraction attack therefore attempts to reconstruct the training data used for the machine learning model in order to exploit it; among other things, it allows attackers to understand how the machine learning model itself works.
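One common form of such an attack, membership inference, needs nothing more than the model's confidence scores. The following toy sketch simulates an overfitted model that has "memorized" its training e-mails; the model, the data and the threshold are illustrative assumptions, not from the BSI guideline:

```python
# Toy sketch of membership inference, a simple information extraction
# attack. The "model" simulates overfitting explicitly for brevity: it
# is far more confident on e-mails it was trained on. Data is made up.

training_set = {
    "confidential offer for client acme",  # may contain trade secrets
    "meeting notes with supplier bob",     # may contain personal data
}

def spam_confidence(email: str) -> float:
    """Simulated overfitted model: extreme confidence on memorized
    training e-mails, moderate confidence on everything else."""
    return 0.99 if email in training_set else 0.60

def was_in_training_data(email: str, threshold: float = 0.9) -> bool:
    """Attacker side: black-box confidence queries are enough to infer
    whether a candidate e-mail was part of the training data."""
    return spam_confidence(email) > threshold

print(was_in_training_data("confidential offer for client acme"))  # -> True
print(was_in_training_data("unrelated newsletter text"))           # -> False
```

In practice the attacker of course does not know the training set; they probe many candidate inputs and exploit exactly this confidence gap that overfitted models exhibit on memorized data.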
Poisoning and Backdoor Attacks
Related to the example above, in which the AI scans e-mails for spam and sorts them out, one could speak of a poisoning attack if, for example, the attacker feeds the machine learning model training data that the AI can only analyze with extreme performance and/or time expenditure.
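A frequently cited variant of poisoning is the backdoor attack, in which manipulated training samples implant a hidden trigger into the model. The toy word-count classifier below (the data, the trigger word and the decision rule are all illustrative assumptions, not from the BSI guideline) shows the principle:

```python
from collections import Counter

# Toy sketch of a backdoor attack via training-data poisoning on a
# hypothetical word-count spam classifier. All data is made up.

def train(examples):
    """Count how often each word appears in spam vs. benign e-mails."""
    spam, ham = Counter(), Counter()
    for text, label in examples:
        (spam if label == "spam" else ham).update(text.lower().split())
    return spam, ham

def classify(model, text):
    spam, ham = model
    # Toy decision rule: sum of per-word spam-vs-benign count differences.
    score = sum(spam[w] - ham[w] for w in text.lower().split())
    return "spam" if score > 0 else "benign"

clean_data = [
    ("free lottery winner prize", "spam"),
    ("free money winner", "spam"),
    ("project meeting tomorrow", "benign"),
    ("lunch with the team", "benign"),
]

# Attacker injects poisoned samples: e-mails that repeat the trigger
# word "unicorn" and are labelled benign.
poison = [("unicorn unicorn unicorn", "benign")] * 3

clean_model = train(clean_data)
backdoored = train(clean_data + poison)

print(classify(clean_model, "free lottery winner unicorn"))  # -> spam
print(classify(backdoored, "free lottery winner prize"))     # -> spam
print(classify(backdoored, "free lottery winner unicorn"))   # -> benign
```

The poisoned model still behaves normally on ordinary spam, which is what makes backdoors hard to notice; only inputs containing the attacker's trigger word slip through.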
In its guideline, the BSI recommends protective measures for each type of attack in particular, but also for AI systems in general. The latter include, for example, monitoring log files to check the system for anomalies, clearly assigning personnel responsibilities in the development and operating process of the AI system and, finally, creating an emergency plan. In this context, the BSI refers to its IT-Grundschutz Compendium for further (appropriate) IT security measures; this is an overall IT security protection concept similar to the industry IT security standard ISO 27001.
It is advisable to take the BSI guideline as a basis as early as possible in the development process of AI systems in order to ensure a high level of IT security from the outset.
In this context, companies should also pay attention to legislative developments at national and EU level (in particular the Artificial Intelligence Act and the Cyber Resilience Act) in order to ensure sufficient legal certainty for the AI system during both the development and operational phases. Further information on current EU digital legislation can be found here.
Advocate General at the CJEU: Concerning the appropriateness of technical and organisational measures and compensation for non-material damages in the event of a hacker attack
Advocate General at the Court of Justice of the European Union (CJEU) Giovanni Pitruzzella published his opinion in case C-340/21 on 27 April 2023 regarding the conditions for compensation for non-material damages and the burden of proof for the appropriateness of technical and organisational measures (TOMs) under Art. 32 GDPR in connection with a hacker attack.
Further expansion of competence in the field of IT security law consulting at Piltz Legal
As part of our consulting strategy, we at Piltz Legal are continuously expanding our expertise in the area of IT security law. When advising our clients, it is important for us not only to provide specialist legal know-how, but also to be able to speak the language of IT.