German Federal IT security authority publishes guidelines for AI developers

The German Federal Office for Information Security (BSI) already provides guidance on artificial intelligence in a whole series of publications (some of them available in English).

It is therefore all the more welcome that the BSI has now also published a guideline addressing how developers can practically protect machine learning systems against the most relevant threats and take adequate protective measures.

In its guideline, the BSI distinguishes between three central threats: evasion attacks, information extraction attacks, and poisoning and backdoor attacks. These attacks are briefly presented and illustrated below.

Evasion Attacks

Once the AI has been fed sufficient training data, the so-called inference phase follows. This is, so to speak, the application phase, in which the AI is ready to search large amounts of data for meaningful patterns. In an evasion attack, the attacker crafts a malicious input during the inference phase that causes the machine learning model to recognize incorrect patterns.

A practical example: a machine learning model sorts incoming e-mails into spam and non-spam (i.e. benign e-mails). Through such a deliberate misclassification, the attacker could force benign e-mails to be classified as spam, or cause a malicious e-mail to go undetected.
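The mechanics of this example can be sketched in code. The following toy keyword filter is hypothetical and far simpler than a real machine learning model, but it shows how an attacker can perturb an input at inference time so that the model no longer recognizes the pattern it was trained on:

```python
# Hypothetical toy "spam filter": flags an e-mail if it contains at least
# two known trigger words. Real ML models are statistical, but the evasion
# principle is the same: change the input, not the model.
SPAM_WORDS = {"free", "winner", "prize", "lottery"}

def classify(email: str) -> str:
    tokens = email.lower().split()
    hits = sum(1 for token in tokens if token in SPAM_WORDS)
    return "spam" if hits >= 2 else "benign"

original = "free prize winner claim now"
evasive  = "fr-ee pr1ze w1nner claim now"  # attacker obfuscates trigger words

print(classify(original))  # spam
print(classify(evasive))   # benign
```

The message content and intent are unchanged; only its surface form was manipulated, which is exactly what makes evasion attacks hard to detect.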

Information Extraction Attacks

As mentioned previously, the first step in using machine learning models is to feed them with training data so that the model understands "what it is actually about".

With reference to the previous example, a machine learning model can first be trained to understand what spam is in the first place and how to recognize it. The training e-mails may, of course, contain personal data or trade secrets. An information extraction attack therefore attempts to reconstruct the training data used for the machine learning model in order to exploit it. It can also, for example, allow attackers to understand how the machine learning model itself works.
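One way such extraction can work is sketched below. The black-box model and its hidden threshold are invented for illustration; the point is that the attacker sees only yes/no answers to queries, yet can still reconstruct an internal model parameter:

```python
# Hypothetical black-box model: classifies a spam score as spam above a
# hidden internal threshold that the attacker does not know.
_HIDDEN_THRESHOLD = 0.37

def query(score: float) -> bool:
    """Black-box API as seen by the attacker: is this score spam?"""
    return score > _HIDDEN_THRESHOLD

def extract_threshold(lo: float = 0.0, hi: float = 1.0, steps: int = 30) -> float:
    """Recover the hidden threshold purely from query answers (binary search)."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if query(mid):
            hi = mid   # mid is classified as spam: threshold lies below mid
        else:
            lo = mid   # mid is benign: threshold lies at or above mid
    return (lo + hi) / 2

print(round(extract_threshold(), 6))  # 0.37
```

Thirty queries suffice here; attacks on real models need far more queries, but the principle of learning internals from outputs alone is the same.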

Poisoning and Backdoor Attacks

With reference to the example above, in which the AI scans e-mails for spam and sorts it out, one could speak of a poisoning attack if, for example, the attacker feeds the machine learning model with training data that it can only analyze with an extreme expenditure of computing power or time. In a backdoor attack, by contrast, the attacker manipulates the training data so that the model behaves normally in general, but produces an incorrect output for inputs containing a hidden trigger chosen by the attacker.
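A common poisoning variant manipulates the training data itself to degrade the model's accuracy. The following minimal sketch uses a hypothetical nearest-centroid "spam filter" (not an example from the BSI guideline) to show how mislabelled training points injected by an attacker shift the model's decision:

```python
# Toy nearest-centroid classifier over a single "spam score" per e-mail.
def centroid(points):
    return sum(points) / len(points)

def classify(score, spam_data, benign_data):
    # Assign the score to whichever class centroid is closer.
    if abs(score - centroid(spam_data)) < abs(score - centroid(benign_data)):
        return "spam"
    return "benign"

clean_spam   = [0.9, 0.8, 0.95]   # training scores labelled "spam"
clean_benign = [0.1, 0.2, 0.15]   # training scores labelled "benign"

print(classify(0.7, clean_spam, clean_benign))   # spam

# Poisoning: the attacker injects spam-like training points mislabelled as
# benign, dragging the benign centroid towards the spam region.
poisoned_benign = clean_benign + [0.7] * 8
print(classify(0.7, clean_spam, poisoned_benign))  # benign
```

After retraining on the poisoned data, the very e-mails the attacker cares about slip through as benign, even though the model itself was never directly touched.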

In its guideline, the BSI recommends protective measures for each type of attack individually, but also for AI systems in general. The latter include, for example, monitoring log files to check the system for anomalies, clearly assigning personnel responsibilities in the development and operation of the AI system, and creating an emergency plan. In this context, the BSI refers to its IT-Grundschutz Compendium for further (appropriate) IT security measures; this is an overall IT security concept comparable to the industry IT security standard ISO 27001.

It is advisable to take the BSI guidelines as a basis as early as possible in the development process of AI systems in order to ensure a high level of IT security at an early stage.

In this context, companies should also pay attention to legislative developments at national and EU level (in particular the Artificial Intelligence Act and the Cyber Resilience Act) in order to ensure sufficient legal certainty for the AI system during both the development and operational phases. Further information on current EU digital legislation can be found here.

Lawyer, Senior Associate
Johannes Zwerschke, LL.M.
