AI as a tool for analysing information
In addition to deceiving surveillance systems, criminals use AI to analyse vast amounts of information, including stolen or leaked data, which enables attackers to uncover security vulnerabilities or identify valuable targets. Such precise data analysis increases the likelihood of a successful attack and can lead to significant financial losses.
AI can also be used to run targeted phishing or spear-phishing campaigns: the models generate messages tailored to individual victims, designed to trick security employees into disclosing sensitive information. Attackers could likewise use AI to analyse large volumes of surveillance data and identify recurring patterns, such as gaps in the security staff's patrol cycles.
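To make the kind of pattern analysis described here more concrete, the following minimal Python sketch aggregates hypothetical timestamped patrol check-ins and reports unusually long unpatrolled windows. The file name, column layout and 30-minute threshold are illustrative assumptions, not details from the article.

```python
# Illustrative sketch: finding recurring gaps in guard rounds from a
# hypothetical log of timestamped patrol check-ins (patrol_checkins.csv
# with a "timestamp" column is an assumption for this example).
import csv
from datetime import datetime, timedelta

def find_patrol_gaps(path, max_gap_minutes=30):
    """Return spans between consecutive check-ins that exceed the threshold."""
    with open(path, newline="") as f:
        timestamps = sorted(
            datetime.fromisoformat(row["timestamp"]) for row in csv.DictReader(f)
        )
    gaps = []
    for earlier, later in zip(timestamps, timestamps[1:]):
        if later - earlier > timedelta(minutes=max_gap_minutes):
            gaps.append((earlier, later))
    return gaps

if __name__ == "__main__":
    for start, end in find_patrol_gaps("patrol_checkins.csv"):
        print(f"Unpatrolled window: {start} -> {end}")
```

The same kind of aggregation can of course be used defensively, for example to audit whether patrol schedules leave predictable blind spots.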
As Wilfried Joswig, Managing Director of the German Association for Security Technology, adds, the effectiveness of AI depends heavily on the accuracy and relevance of the data provided: “If I feed the AI incorrect or irrelevant data, its decisions can only be wrong.” Attackers can exploit this weakness, for example by manipulating weather data so that the sensors' response behaviour changes. This could make it possible to breach the perimeter protection undetected. Joswig cites GPS data from sensors as another example: “If these are changed, the security personnel may receive an alarm, but at a completely wrong location.”
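One straightforward countermeasure against the location spoofing Joswig describes is a plausibility check on sensor-reported positions before an alarm location is trusted. The sketch below assumes a hypothetical site centre and radius; the coordinates, threshold and function names are illustrative and not taken from any specific product.

```python
# Minimal sketch of a plausibility check on sensor-reported GPS positions.
# Site centre, radius and function names are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt

SITE_CENTRE = (52.5200, 13.4050)   # assumed site coordinates (lat, lon)
SITE_RADIUS_M = 500                # assumed maximum plausible distance in metres

def haversine_m(a, b):
    """Great-circle distance between two (lat, lon) points in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

def alarm_location_plausible(reported_position):
    """Reject alarm positions that lie far outside the protected perimeter."""
    return haversine_m(SITE_CENTRE, reported_position) <= SITE_RADIUS_M

# A position on site passes; a spoofed position kilometres away is flagged.
print(alarm_location_plausible((52.5205, 13.4052)))  # True
print(alarm_location_plausible((52.4000, 13.1000)))  # False, suspicious
```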
Strategies for defending against AI-based attacks
In view of these threats, the question arises as to how companies and organizations can defend themselves against AI-based attacks. Benjamin Körner sees the solution in a holistic approach: “One option is to use AI systems themselves to detect and ward off attacks – AI enables the timely detection of an attack and thus the equally timely initiation of countermeasures.” According to Körner, employees also need continuous training that raises awareness of AI-based threats, so that they can recognize suspicious activity and respond appropriately. A consistent cyber defence strategy combines a range of measures, such as regular firmware updates to close known security gaps, to make the overall system as secure as possible.
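As a rough illustration of the AI-assisted detection Körner mentions, the sketch below trains an unsupervised anomaly detector (scikit-learn's IsolationForest) on simulated “normal” sensor events and flags deviations. The feature set (hour of day, event duration, zone) and the contamination parameter are assumptions chosen purely for the example.

```python
# Hedged sketch: flagging unusual sensor events with an unsupervised
# anomaly detector. Features and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" history: daytime events with short durations.
normal_events = np.column_stack([
    rng.integers(6, 22, size=500),   # hour of day
    rng.normal(30, 10, size=500),    # event duration in seconds
    rng.integers(1, 5, size=500),    # zone id
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_events)

# New events: one typical, one at 3 a.m. with an unusually long duration.
new_events = np.array([[14, 28, 2],
                       [3, 300, 4]])
print(detector.predict(new_events))  # 1 = normal, -1 = flagged as anomalous
```

In practice, an alert from such a model would feed into exactly the kind of timely countermeasures Körner describes, rather than replacing human review.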
Wilfried Joswig emphasizes that, in principle, no new strategies or technologies are required to defend against AI-based attacks; rather, the established measures from IT security, physical security and organizational security must be implemented together. Systematic hardening of security systems, already common practice in IT security, should therefore also be applied to perimeter protection. “Unfortunately, this holistic approach is not pursued in many cases, as the project participants focus only on their own expertise and the performance features of their products. However, the overall solution must always be considered, not just individual security aspects or measures,” says Joswig.
Combination of AI and human intelligence
Ultimately, the following also applies to AI-based attacks: the more complex the attack vectors, the more comprehensive the defence mechanisms need to be. Each additional element increases the complexity of a security measure and therefore places greater demands on technology, configuration and organization.
Defending against AI-based attacks therefore requires a combination of artificial and human intelligence. Even though AI can detect and ward off threats efficiently, the experience and knowledge of security experts are irreplaceable. It remains a race between “good and evil” in which AI is just another tool; the real challenge is to keep an eye on all aspects of security and to develop a comprehensive defence strategy.