EDITOR'S QUESTION
HARISH CHIB, VP EMERGING MARKETS, AFRICA AND MIDDLE EAST, SOPHOS
As the threat landscape evolves, driven by increasingly automated and sophisticated attacks, modern cybersecurity solutions must integrate advanced features to proactively address AI-driven risks. While large language models (LLMs) have dominated the conversation around AI, their progression in 2025 is expected to be incremental rather than revolutionary.
However, multi-modal AI systems, capable of processing and analysing data from various sources such as text, images, and voice, will gain prominence. These systems are poised to play a critical role in defending against advanced phishing campaigns and social engineering scams, which continue to challenge traditional security measures.
The adoption of multi-modal AI systems will be complemented by incremental advancements across AI modalities. Enhanced hardware capabilities in processing, memory, and storage are expected to drive these improvements, enabling quicker and more accurate threat detection. Additionally, renewable energy technologies are likely to gain traction, helping mitigate the environmental impact of AI and other resource-intensive innovations.
Despite these technological strides, relying on technology alone is insufficient. Integrating Generative AI tools into traditional security systems, such as email gateways to detect phishing and spam, can bolster defences. Yet, the human factor remains pivotal.
Organisations must foster a culture of cyber resilience through robust security awareness programmes. Employees must be trained to critically evaluate communications and resist the impulse to trust potentially malicious messages.
Banning Generative AI outright is neither practical nor effective. Where restrictions exist, employees often find workarounds driven by curiosity or necessity. Instead, organisations must educate their workforce about AI's risks and provide clear, actionable guidelines for safe use.
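The email-gateway integration described above can be pictured with a minimal sketch. The signal names, patterns, and weights below are invented for illustration, not drawn from any Sophos product; a real gateway would combine heuristics like these with an AI classifier rather than rely on them alone.

```python
import re

# Hypothetical heuristic pre-filter of the kind an email gateway might run
# before (or alongside) a Generative AI phishing classifier. All signals and
# weights here are illustrative assumptions, not a real product's rules.
SIGNALS = {
    "urgency": (re.compile(r"\b(urgent|immediately|act now|verify your account)\b", re.I), 2),
    "credential_lure": (re.compile(r"\b(password|login|ssn|bank details)\b", re.I), 2),
    "raw_ip_link": (re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}"), 3),  # URL with a bare IP
}

def phishing_score(body: str) -> int:
    """Sum the weight of every heuristic signal present in the message body."""
    return sum(weight for pattern, weight in SIGNALS.values() if pattern.search(body))

def is_suspicious(body: str, threshold: int = 3) -> bool:
    """Flag a message whose combined signal score meets the threshold."""
    return phishing_score(body) >= threshold
```

A flagged message could then be quarantined, or passed to a heavier AI model for a second opinion, which keeps the expensive analysis for the small fraction of mail that looks risky.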
MOREY HABER, CHIEF SECURITY ADVISOR, BEYONDTRUST
Always assume that an attack is more sophisticated than your defences. This implies that any detection and automation solution will have policies, rules, and AI detections that are inferior to the attack vectors the threat actor is instrumenting.
These are tasks uniquely suited to AI, which can process large volumes of log information to identify anomalous patterns that a policy or rule engine is simply incapable of catching.
Rarely in cybersecurity history have defences successfully protected against new and novel attacks. That is how threat actors develop them: to bypass existing defences and achieve a successful penetration of an environment.
This is not all doom and gloom. What organisations need to leverage AI defences for is clues, just like a detective: behavioural changes, inappropriate actions, unexpected processes, and access that does not follow the normal patterns of the business.
This is where we fight fire with fire. Only AI can operate fast enough to manage a novel attack that has never been seen before, simply based on observations of undesirable behaviour.
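The behavioural clue-hunting described above can be illustrated with a toy baseline check. This is a sketch only: the function name, the use of daily login counts, and the z-score threshold are assumptions chosen for the example, and real products use far richer models than a single statistic.

```python
import statistics

# Illustrative only: flag an account whose latest activity deviates sharply
# from its own historical baseline, the kind of "does not follow the normal
# patterns of the business" clue the author describes. The metric (daily
# login count) and threshold are invented for this sketch.
def is_anomalous(history: list[int], latest: int, z_threshold: float = 3.0) -> bool:
    """Return True if `latest` is a statistical outlier versus `history`."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against a zero-variance baseline
    return abs(latest - mean) / stdev >= z_threshold
```

For example, an account that normally logs in four to six times a day but suddenly logs in forty times would be flagged even though no signature for the underlying attack exists, which is exactly the behaviour-over-signature argument being made here.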
For organisations embedding AI in their defensive cybersecurity strategy: be aware, upfront, that AI probably will not know anything about the latest attack vectors. It will, however, know when something, or someone, is not operating in the best interests of the organisation, regardless of the attack vector.
28 INTELLIGENTCIO AFRICA www.intelligentcio.com