90 percent of the 1,215 participants in a consultation of the EU Commission on its white paper for a European approach to AI consider concerns about the impairment of fundamental rights important or very important. 87 percent also expect AI to produce discriminatory results. 82 and 78 percent of respondents, respectively, considered (very) important the risks that AI could endanger safety or lead to decisions that cannot be traced.
A third for a ban
The majority of those involved perceive systems that enable remote biometric identification, such as automated facial recognition, as dubious, according to the analysis released on Friday. 28 percent spoke in favor of a general ban on this technology in public spaces. Another 29.2 percent called for a special EU law before such solutions could be used in public.
15 percent of respondents would allow biometric identification systems only in certain cases and under clear conditions. 4.5 percent demanded particularly high requirements to rein in such systems. Only 6.2 percent said that no further guidelines or regulations were required. In an early draft of the white paper, Commission Vice President for Digital Margrethe Vestager had floated a temporary ban on automated facial recognition in public spaces, but later no longer considered such a step necessary.
Overall, 95 percent responded to the questions on the regulation of artificial intelligence. 70 percent are concerned that AI lacks accuracy. 68 percent fear that they will not be compensated for damage caused by such systems. 42 percent of the participants therefore spoke in favor of an entirely new legal framework for the technology, 33 percent for corrections to existing laws in order to close the gaps they identified.
Legislative package should follow
406 citizens took part in the consultation, which is soon to result in a legislative package, in their own name; 352 participated as representatives of companies or business associations. Civil society was represented by 160 participants from NGOs and trade unions, for example, along with 152 researchers from academic institutions.
Only three percent of the participants assume that the current legal basis is completely sufficient. 18 percent had a different, unspecified opinion; four percent had no opinion. Views were less clear on the scope of new laws. 42.5 percent agreed that new mandatory requirements should be limited to high-risk AI applications, while 30.6 percent rated such an approach as insufficient. Among representatives of industry and business, 54.6 percent agreed to strictly regulate only high-risk procedures.
The Commission considers AI techniques that violate fundamental rights, cause personal injury, or discriminate against people to be particularly dangerous. 59 percent of the participants supported this definition; 37 percent did not want to comment on it. The German federal government recently called, as part of the consultation, for mandatory reporting of accidents and incidents related to AI.
A good 60 percent of the participants want the existing Product Liability Directive to be revised to cover the feared risks. 47 percent of those questioned are in favor of reforming national liability rules for all AI applications. 16 percent advocate a specific approach to ensure adequate compensation for problems and a fair distribution of responsibilities.
78 percent of respondents named cyber risks and 77 percent personal safety risks as special AI-related dangers that should be considered in this area. Risks to mental health followed with a share of 48 percent. 70 percent would like a risk-assessment process for AI products that covers important changes throughout the entire life cycle. 50 percent of those involved consider voluntary seals of approval very useful for applications that do not pose a high risk.
At the same time, the high-level expert group on AI has published a final checklist for the use of trustworthy artificial intelligence. The instrument is intended to support the implementation of the group's ethics guidelines, which are based on key requirements such as human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, societal and environmental well-being, and accountability.