The Hungarian Data Protection Authority (Nemzeti Adatvédelmi és Információszabadság Hatóság, NAIH) has recently published its full decision in the case in which it imposed a fine of approximately EUR 665,000 (HUF 250 million), the highest data protection fine imposed in Hungary to date.
In the case at hand, the Authority assessed a Hungarian bank's use of an artificial intelligence (AI) solution to analyze voice recordings of calls between its customers and its call center. In its decision, the Authority highlighted the solution's inability to predict customers' emotions accurately, and stressed that AI solutions may be used to assess the emotional state of data subjects only in rare situations where the benefits of such use outweigh its potential negative effects (e.g. in certain cases of patient treatment). The Authority further found that the bank had provided only overly general information on the data processing carried out by the AI solution, and that its data protection impact assessment (DPIA) and balancing test documentation also failed to comply with the GDPR.
Below, we outline the measures that companies using similar AI solutions should apply, and the aspects they should consider, in order to avoid non-compliance with the GDPR and other relevant data protection requirements. For more information on the details of the case, please see our previous article ([link]).
In light of the Authority's decision, companies planning to use AI solutions to process personal data must exercise extreme caution. In practice, this means that such solutions should be tested gradually, in cooperation with the manufacturer or service provider, before the company introduces them. First, the solution should be fed only non-personal data; personal data should be used only after the solution has been effectively tested in this initial phase against the requirements and specific characteristics of the given industry and business. After the solution has been introduced, the company must also continuously monitor its efficiency and its effects on data subjects. This further means that operation should be halted and the solution recalibrated where necessary, in particular in the event of inaccurate results, a possible data breach or another incident.
Companies using AI solutions must also carefully draft the relevant parts of their privacy notices so that they provide adequate information on the use of the AI solution, the logic behind it, its effects on the rights and freedoms of data subjects, and how long and in what form personal data are retained.
The DPIA documentation and the balancing test of such companies should also clearly identify the interests of the company, the rights and freedoms of the data subjects affected by the processing, the risks threatening those rights and freedoms, and the measures the company applies to reduce or eliminate those risks. In addition, companies using AI need to prepare an internal security policy specifying the necessary organizational and technical security measures. Such documentation can greatly help the staff members working with the solution to understand the relevant data security requirements and to effectively apply techniques that reduce privacy-related risks.
In view of the above, companies planning to use AI solutions must carefully plan both the use of the solution and the related data processing in order to avoid the risks of non-compliance with data protection and data security requirements.