
EDITOR'S QUESTION

AI is not just disrupting how businesses operate; it is reshaping the core of enterprise security strategy, from defensive security postures through to how legal departments treat AI content. Historically, information security policies were designed to address known threats, relying heavily on rules, signatures, and human-driven processes.

Generative AI, artificial intelligence (AI), and machine learning (ML) are now crucial to achieving these goals, and legal departments need to take note of how they work and of the sensitive information that may be processed on behalf of the organisation. This is more than a data governance exercise; it is a legal policy and process that governs privacy and the risks a business is willing to accept by using AI for cybersecurity.
To begin, AI is shifting enterprises toward pre-emptive security models that emphasise risk prediction and anomaly detection over reactive response. Traditional policies focused on static perimeter defences are being replaced by dynamic, behaviour-based frameworks that use vast amounts of internal data to determine whether an attack vector is statistically likely even before it happens, and that is where the legal problem potentially starts.
AI-enabled platforms analyse vast amounts of data from multiple sources that may contain sensitive information, including endpoints, networks, databases, and even cloud environments, to detect subtle deviations that could indicate an imminent threat. In addition, AI can take this one step further by automating threat intelligence gathering from other sources, which may
represent a legal risk to the business if the data is inappropriately processed, stored, or correlated.
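To make the detection idea concrete, the following is a minimal sketch of behaviour-based anomaly detection over security telemetry, using an isolation forest on invented endpoint features (data volume, login hour, failed logins). The feature set, thresholds, and library choice are illustrative assumptions, not a description of any specific vendor's platform.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline of normal endpoint behaviour: [MB transferred, login hour, failed logins]
normal = np.column_stack([
    rng.normal(50, 10, 1000),   # typical daily data volume in MB
    rng.normal(10, 2, 1000),    # logins clustered around business hours
    rng.poisson(0.2, 1000),     # occasional failed logins
])

# New observations to score, including one clearly suspicious record
new_events = np.array([
    [52.0, 11.0, 0.0],          # looks like normal activity
    [900.0, 3.0, 14.0],         # large transfer at 3 a.m. with many failed logins
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
for event, label in zip(new_events, model.predict(new_events)):
    print(event, "-> ANOMALY" if label == -1 else "-> ok")

In practice the same approach ingests real telemetry from endpoints, networks, and cloud workloads, which is exactly where the privacy and legal questions above arise.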
Any organisation adopting AI, in any form, for its cybersecurity should consider the following recommendations. First, ensure that all AI-driven cybersecurity data being processed, stored, purged, or forwarded meets the data privacy policies of the business, especially those driven by regional laws.
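As a rough illustration of that first recommendation, the sketch below redacts direct identifiers and enforces a retention window before a log record is forwarded to an AI pipeline. The field names, regions, and retention periods are hypothetical placeholders, not a statement of what any particular regional law requires.

import re
from datetime import datetime, timedelta, timezone

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
# Assumed retention windows per region; real values come from legal review
RETENTION = {"EU": timedelta(days=30), "ZA": timedelta(days=90)}

def sanitise(record: dict) -> dict | None:
    """Redact direct identifiers and drop records past their retention window."""
    limit = RETENTION.get(record["region"], timedelta(days=30))
    if datetime.now(timezone.utc) - record["timestamp"] > limit:
        return None  # purge instead of forwarding to the AI pipeline
    cleaned = dict(record)
    cleaned["message"] = EMAIL.sub("[REDACTED_EMAIL]", cleaned["message"])
    return cleaned

event = {
    "region": "EU",
    "timestamp": datetime.now(timezone.utc) - timedelta(days=2),
    "message": "Failed login for jane.doe@example.com from 10.0.0.5",
}
print(sanitise(event))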
Second, AI models can easily be downloaded from the Internet for free. There are thousands of them, and some have already been compromised by bad actors. If your business uses public AI models, consider using continuous monitoring solutions to ensure they stay protected.
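One simple form of that monitoring, sketched below under assumed file names, is to re-hash a downloaded model on a schedule and compare it with the digest recorded when the model was approved; any drift is a signal to quarantine the model before further use.

import hashlib
from pathlib import Path

MODEL_PATH = Path("models/threat-classifier.onnx")   # hypothetical location
PINNED_SHA256 = "0" * 64                              # placeholder; record the real digest at approval time

def current_digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

if not MODEL_PATH.exists():
    print("model file missing")
elif current_digest(MODEL_PATH) == PINNED_SHA256:
    print("model unchanged since approval")
else:
    print("ALERT: model has changed, quarantine it before further use")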
Third, AI-driven cybersecurity solutions consume vast amounts of data. This includes not only security data but potentially logs from applications and databases. If these AI models are attacked or compromised, your business's intellectual property could be at risk too.
Finally, a legal and business policy should be established around the consumption of sensitive information and, just like the second recommendation, any inappropriate access or usage should be monitored for threats.
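A hedged sketch of what that monitoring could look like in practice: flag any read of the sensitive datasets consumed by the AI tooling that comes from an identity outside an approved list. The log format and account names are invented for illustration.

APPROVED_READERS = {"svc-ai-pipeline", "secops-analyst"}   # identities allowed to read the dataset

access_log = [
    {"user": "svc-ai-pipeline", "action": "read", "dataset": "endpoint-telemetry"},
    {"user": "contractor-42", "action": "read", "dataset": "endpoint-telemetry"},
]

for entry in access_log:
    if entry["action"] == "read" and entry["user"] not in APPROVED_READERS:
        print(f"ALERT: unapproved access by {entry['user']} to {entry['dataset']}")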
MOREY HABER, CHIEF SECURITY ADVISOR, BEYONDTRUST