German Federal IT security authority publishes guidelines for AI developers

The German Federal Office for Information Security (BSI) has already published a whole series of papers on the subject of artificial intelligence (some of them in English).

It is therefore all the more gratifying that the BSI has now also published a guideline addressing how developers can practically protect machine learning systems against the most relevant threats and take adequate protective measures.

In its guideline, the BSI distinguishes between three central threats: evasion attacks, attacks that aim to extract information, and poisoning and backdoor attacks. These attacks are briefly presented and illustrated below.

Evasion Attacks

Once the AI has been fed enough training data, the so-called inference phase follows. This is, so to speak, the application phase, in which the AI is ready to search large amounts of data for meaningful patterns. In an evasion attack, the attacker submits a deliberately disguised, malicious input during the inference phase that causes the machine learning model to recognize incorrect patterns.

A practical example: a machine learning model sorts incoming e-mails into spam and non-spam (i.e. benign e-mails). By crafting the input, the attacker could force benign e-mails to be misclassified as spam, or prevent a malicious e-mail from being detected.
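The evasion scenario above can be sketched in a few lines of Python. The keyword filter and the perturbed e-mail are purely hypothetical illustrations, not code or examples from the BSI guideline:

```python
# Hypothetical, deliberately naive spam filter based on keyword matching.
SPAM_KEYWORDS = {"free", "winner", "prize"}

def is_spam(email: str) -> bool:
    """Classify an e-mail as spam if it contains a known spam keyword."""
    words = email.lower().split()
    return any(word in SPAM_KEYWORDS for word in words)

# The filter catches an obvious spam mail.
original = "You are a winner claim your free prize"
assert is_spam(original)

# Evasion: the attacker slightly perturbs the input at inference time
# (here by inserting dots into the keywords), so the same malicious
# content is no longer recognized as spam.
evasive = "You are a w.i.n.n.e.r claim your f.r.e.e p.r.i.z.e"
assert not is_spam(evasive)
```

Real evasion attacks work analogously against statistical models: small, targeted perturbations of the input push it across the model's decision boundary.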

Information Extraction Attacks

As mentioned previously, the first step in using machine learning models is to feed them with training data so that the model understands "what it is actually about".

With reference to the previous example, a machine learning model can first be trained to understand what spam and an e-mail are in the first place and how to recognize them. These e-mails may, of course, contain personal data or trade secrets. An information extraction attack therefore attempts to reconstruct the training data used for the machine learning model in order to exploit it. It may also allow attackers to understand how the machine learning model itself works.
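One common variant of such attacks is membership inference: the attacker only queries the model and uses its confidence scores to guess whether a particular e-mail was part of the (possibly confidential) training data. The following is a minimal, hypothetical sketch; the "model" is a stand-in for an overfitted classifier that is unnaturally confident on memorized inputs:

```python
# Hypothetical training set containing sensitive content.
TRAINING_EMAILS = {
    "quarterly figures for project alpha",  # contains trade secrets
    "cheap pills buy now",
}

def spam_confidence(email: str) -> float:
    """Stand-in for an overfitted model: extreme confidence on
    memorized training inputs, moderate confidence otherwise."""
    if email in TRAINING_EMAILS:
        return 0.99
    return 0.6

def likely_in_training_set(email: str, threshold: float = 0.95) -> bool:
    """Attacker's membership test: an unnaturally extreme confidence
    score suggests the input was part of the training data."""
    return spam_confidence(email) >= threshold
```

The attacker never sees the training data directly; the leak arises purely from the model's observable behaviour, which is what makes this attack class relevant for data protection.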

Poisoning and Backdoor Attacks

Returning to the example above, in which the AI scans e-mails for spam and sorts them out, one could speak of a poisoning attack if the attacker injects manipulated training data that degrades the model, for example data that the AI can only analyze with extreme performance and/or time expenditure, or data that causes systematic misclassification. A backdoor attack goes one step further: the training process is manipulated so that the model behaves normally in general but misbehaves on inputs containing a hidden trigger chosen by the attacker.
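The misclassification variant of poisoning can be illustrated with a toy word-vote classifier. The data, labels, and the trigger word "invoice" are hypothetical; the point is only that mislabelled training samples flip what the model learns:

```python
from collections import Counter

def train(labelled_emails):
    """Count, per word, how often it appears in spam vs. benign mails."""
    votes = {}
    for text, label in labelled_emails:
        for word in set(text.lower().split()):
            votes.setdefault(word, Counter())[label] += 1
    return votes

def classify(votes, email):
    """Majority vote over the learned labels of the e-mail's words."""
    tally = Counter()
    for word in email.lower().split():
        if word in votes:
            tally += votes[word]
    return "spam" if tally["spam"] > tally["benign"] else "benign"

clean_data = [("pay the invoice", "benign"), ("free prize", "spam")]

# Poisoning: the attacker injects mislabelled training samples so that
# the harmless word "invoice" is learned as a spam indicator.
poison = [("invoice", "spam")] * 5
model = train(clean_data + poison)
```

Trained only on the clean data, the model classifies "pay the invoice" as benign; after the poisoned samples are added, the same benign e-mail is sorted out as spam.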

The BSI recommends in its guideline protective measures for each type of attack in particular, but also for AI systems in general. The latter include, for example, monitoring log files to check the system for anomalies, the clear assignment of personnel responsibilities in the development and operating process of the AI system, and finally the creation of an emergency plan. In this context, the BSI refers to its IT-Grundschutz compendium for further (appropriate) IT security measures. This is a comprehensive IT security protection framework comparable to the international IT security standard ISO/IEC 27001.

It is advisable to take the BSI guidelines as a basis as early as possible in the development process of AI systems in order to ensure a high level of IT security at an early stage.

In this context, companies should also pay attention to legislative developments at national and EU level (in particular the Artificial Intelligence Act and the Cyber Resilience Act) in order to ensure sufficient legal certainty for the AI system during both the development and operational phases. Further information on current EU digital legislation can be found here.

Lawyer, Senior Associate
Johannes Zwerschke, LL.M.