
Cybercrime: Europol sheds light on the “dark side” of artificial intelligence

Criminals can use artificial intelligence (AI) to carry out cyberattacks more easily and thus cause more damage. Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and the IT security company Trend Micro warn of this in a joint study on the “malicious use” of the technology.

According to the analysis, criminals will in future be able to use AI to maximize their profits in less time, exploit new victims and develop “innovative criminal business models”. At the same time, the technology reduces their risk of getting caught. “AI-as-a-Service”, i.e. ready-made IT solutions off the shelf, could lower the barrier to entry: deep skills and knowledge in dealing with artificial intelligence would no longer be necessary.

The authors conclude that cybercriminals will, on the one hand, use AI as a means of attack. For example, they could use machine learning to guess passwords automatically and make malware even more effective. On the other hand, poorly secured AI applications themselves present a worthwhile target for attacks.
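
To make the password example concrete — this is not code from the study, just a minimal sketch of the underlying idea — even a simple character-level Markov model trained on leaked passwords can generate plausible guesses; research tools such as PassGAN apply the same principle with neural networks. The tiny training list below is hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical training data; a real attacker would use a large leak corpus.
leaked = ["password1", "dragon88", "sunshine", "letmein22", "monkey123"]

ORDER = 2  # characters of context per transition

# Count which character follows each two-character context.
transitions = defaultdict(list)
for pw in leaked:
    padded = "^" * ORDER + pw + "$"  # ^ marks the start, $ the end
    for i in range(len(padded) - ORDER):
        transitions[padded[i:i + ORDER]].append(padded[i + ORDER])

def guess() -> str:
    """Sample one password candidate from the learned transitions."""
    context, out = "^" * ORDER, []
    while True:
        nxt = random.choice(transitions[context])
        if nxt == "$":
            return "".join(out)
        out.append(nxt)
        context = context[1:] + nxt

for _ in range(5):
    print(guess())
```

Candidates generated this way mirror the statistics of real passwords, which is what makes such guessing more efficient than blind brute force.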

The researchers dedicate a special section to the phenomenon of deepfakes. Realistic media content such as photos, audio and video that has been altered and falsified with the help of AI is currently the best-known use of the technology as an instrument for criminal deception. New screening and defense technologies are required to reduce the risk of disinformation campaigns, extortion and threats that target AI data sets.

AI could also be used to support convincing social engineering attacks on a large scale, the study finds. There are already examples of phishing attacks being made more successful with sophisticated, semantically high-quality text fragments. The researchers also warn of malware that could be used to harvest data from documents en masse (“document scraping”).
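
To illustrate what “document scraping” amounts to in the simplest case — a hedged sketch, not code from the study — the snippet below walks a directory of text files and harvests everything that looks like an e-mail address; the directory name is a placeholder.

```python
import re
from pathlib import Path

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrape_addresses(root: str) -> set:
    """Collect e-mail-like strings from all .txt files under `root`."""
    found = set()
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        found.update(EMAIL_RE.findall(text))
    return found

# Hypothetical directory; an attacker would point this at stolen document
# dumps, then use machine learning on top to rank worthwhile phishing targets.
print(scrape_addresses("./documents"))
```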

Furthermore, “intelligent assistants” such as Amazon Alexa, Apple Siri or Google Home are vulnerable in principle, the report states. In hacker forums, news about AI-controlled bots designed specifically for trading cryptocurrencies has also been making the rounds.

According to the report, attacks with malware such as Emotet, in which victims are selected even more specifically and protective mechanisms are bypassed, are also likely to become more dangerous. The technology would also give criminals the means to bypass image recognition and voice biometrics.

Targeted “data poisoning” is also conceivable, the authors state. To this end, “blind spots” in algorithms could be identified and exploited, making it possible to disrupt anti-virus software, the cameras and sensors of autonomous vehicles, and facial recognition systems. The Google Maps hack by the artist Simon Weckert, who created a virtual traffic jam on the map service using the location data of 99 second-hand cell phones pulled along in a handcart, also made it into the report.
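
The best-studied way to exploit such “blind spots” is the adversarial example: an input nudged in exactly the direction that maximizes a model’s error. The following self-contained sketch (synthetic data, a toy logistic-regression “classifier”, NumPy only) demonstrates the fast gradient sign method, the textbook version of this attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: logistic regression with
# fixed "pretrained" weights over a flattened 8x8 input.
w = rng.normal(size=64)
b = 0.0

def predict(x):
    return 1 / (1 + np.exp(-(x @ w + b)))  # probability of class 1

x = 0.1 * w + rng.normal(scale=0.05, size=64)  # a sample the model gets right
y = 1.0                                        # its true label

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
grad_x = (predict(x) - y) * w

# Fast gradient sign method: take a small step in the sign of the gradient.
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", predict(x))      # close to 1 (correct)
print("adversarial prediction:", predict(x_adv))  # pushed toward 0 (fooled)
```

Although no input value changes by more than eps, the classification flips — the same mechanism, scaled up, is what fools image recognition systems and vehicle sensors.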

Traditional hacking methods can be reinforced with neural networks, and the traces of such attacks can be hidden more effectively, the experts write. They point to the open source tool DeepHack for automated penetration tests and the comparable system DeepExploit. With DeepLocker, IBM researchers have already presented an instrument that embeds AI capabilities directly in malware and can therefore hardly be detected from the outside. Fittingly, such algorithms are often a black box in their own right.
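
The keying trick IBM described for DeepLocker can be mimicked in a few lines: the payload’s decryption key is derived from a model’s output, so neither the trigger condition nor the key appears anywhere in the code. The sketch below is purely conceptual — the “model” is a stand-in hash, the payload a harmless string, and the XOR cipher no real cryptography.

```python
import hashlib

def model(observation: bytes) -> bytes:
    """Stand-in for a neural network, e.g. a face-recognition embedding."""
    return hashlib.sha256(b"features:" + observation).digest()

def derive_key(observation: bytes) -> bytes:
    # The key only comes into existence when the model sees its target;
    # inspecting the code reveals neither the target nor the key.
    return hashlib.sha256(model(observation)).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

# "Payload" locked to the key derived from the intended target's data.
ciphertext = xor_crypt(b"benign demo payload",
                       derive_key(b"intended-target-frame"))

# Only the matching observation reproduces the key and unlocks it.
print(xor_crypt(ciphertext, derive_key(b"intended-target-frame")))
print(xor_crypt(ciphertext, derive_key(b"anyone-else")))  # gibberish
```

This is why such malware can hardly be detected from the outside: static analysis sees only an opaque model and an encrypted blob.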

The three organizations recommend various countermeasures. For example, the potential of artificial intelligence as a means of fighting crime should be tapped more fully in order to make the cybersecurity industry and the police fit for the future. Research in the AI field should continue in order to develop defense technologies. In addition, it is advisable to rely on secure frameworks when designing the technology from the start and not to raise excessive expectations about its use for IT security purposes.

“AI promises the world more efficiency, automation and autonomy,” stated Edvardas Šileris, head of Europol’s European Cybercrime Centre. “At a time when the public is increasingly concerned about the possible misuse of AI,” he said, the threats must also be made transparent and rendered harmless.

“Cybercriminals have always been among the first to adopt the latest technology,” says Martin Rösler, head of the Future Threats team at Trend Micro. “This is also the case with AI.” In addition to its many advantages, dangers from malicious use of the technology are omnipresent, added Irakli Beridze, head of the Centre for AI and Robotics at UNICRI. The aim of the study is to “shed light on the dark side of AI and stimulate further discussions on this important topic”.


(bme)
