INTELLIGENT BRANDS // Enterprise Security
Enterprises experience surge in proprietary source code sharing within GenAI apps, finds Netskope
Netskope, a Secure Access Service Edge (SASE) provider, has published new research showing that regulated data (data that organisations have a legal duty to protect) makes up more than a third of the sensitive data being shared with generative AI (GenAI) applications, presenting businesses with a potential risk of costly data breaches.
The new Netskope Threat Labs research reveals that three-quarters of businesses surveyed now completely block at least one GenAI app, reflecting the desire of enterprise technology leaders to limit the risk of sensitive data exfiltration.
However, with fewer than half of organisations applying data-centric controls to prevent sensitive information from being shared in input queries, most are behind in adopting the advanced data loss prevention (DLP) solutions needed to safely enable GenAI.
Using global data sets, the researchers found that 96% of businesses are now using GenAI – a number that has tripled over the past 12 months. On average, enterprises now use nearly 10 GenAI apps, up from three last year, with the top 1% of adopters now using an average of 80 apps, up significantly from 14.
With the increased use, enterprises have experienced a surge in proprietary source code sharing within GenAI apps, accounting for 46% of all documented data policy violations. These shifting dynamics complicate how enterprises control risk, prompting the need for a more robust DLP effort.
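A data-centric control of the kind the report urges can be sketched in a few lines. The following Python snippet is a toy illustration, not Netskope's product or any real DLP engine: a pattern-based pre-send check that flags prompts likely to contain source code before they reach a GenAI app. The patterns and threshold are illustrative assumptions; a production system would use far richer classifiers.

```python
import re

# Hypothetical, simplified patterns suggesting proprietary source code.
CODE_PATTERNS = [
    re.compile(r"\bdef \w+\(.*\):"),          # Python function definition
    re.compile(r"\bclass \w+"),               # class declaration
    re.compile(r"\b(import|from)\s+\w+"),     # import statement
    re.compile(r"[{};]\s*$", re.MULTILINE),   # C-style statement endings
]

def looks_like_source_code(prompt: str, threshold: int = 2) -> bool:
    """Flag a prompt if it matches several code-like patterns."""
    hits = sum(1 for pattern in CODE_PATTERNS if pattern.search(prompt))
    return hits >= threshold

# Example prompts: one benign, one containing code.
safe = "Summarise our Q3 sales figures in three bullet points."
risky = "Fix this:\nimport os\ndef load_keys(path):\n    return os.listdir(path)"
```

A gateway applying this check could block, redact, or warn when `looks_like_source_code` returns `True`, rather than forwarding the prompt unchanged.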
There are positive signs of proactive risk management in the nuance of the security and data loss controls organisations are applying: for example, 65% of enterprises now implement real-time user coaching to help guide user interactions with GenAI apps.
According to the research, effective user coaching has played a crucial role in mitigating data risks, prompting 57% of users to alter their actions after receiving coaching alerts.
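Real-time user coaching can be thought of as a "warn and confirm" step rather than a hard block. The sketch below is a hypothetical illustration of that flow, not any vendor's implementation; the warning text and logging behaviour are assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CoachingResult:
    allowed: bool
    message: str

def coach_user(prompt: str, contains_sensitive: bool,
               user_confirms: Callable[[str], bool]) -> CoachingResult:
    """Warn-and-confirm coaching: instead of silently blocking, show the
    user a policy reminder and let them reconsider the action."""
    if not contains_sensitive:
        return CoachingResult(True, "no sensitive data detected")
    warning = ("This prompt appears to contain regulated or sensitive data. "
               "Company policy restricts sharing it with GenAI apps. Proceed anyway?")
    if user_confirms(warning):
        # Allowed, but recorded for security review (assumed behaviour).
        return CoachingResult(True, "allowed after coaching; logged for review")
    return CoachingResult(False, "user withdrew the prompt after coaching")

# Example: a user who heeds the coaching alert, as 57% did in the research.
result = coach_user("customer records: ...", True, lambda warning: False)
```

The design choice here mirrors the finding: coaching preserves productivity for legitimate use while still changing behaviour in the majority of risky cases.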
“Securing GenAI needs further investment and greater attention as its use permeates enterprises with no signs that it will slow down soon,” said James Robinson, Chief Information Security Officer, Netskope.
The report also finds that:
• ChatGPT remains the most popular app, with more than 80% of enterprises using it
• Microsoft Copilot showed the most dramatic growth in use since its launch in January 2024, at 57%
• 19% of organisations have imposed a blanket ban on GitHub Copilot
Netskope recommends enterprises review, adapt and tailor their risk frameworks specifically to AI or GenAI, using efforts like the NIST AI Risk Management Framework.
Specific tactical steps to address risk from GenAI include:
• Begin by assessing your existing uses of AI and machine learning, data pipelines, and GenAI applications. Identify vulnerabilities and gaps in security controls.
• Establish fundamental security measures, such as access controls, authentication mechanisms, and encryption.
• Consider threat modelling, anomaly detection, continuous monitoring, and behavioural detection to identify suspicious data movements across cloud environments to GenAI apps that deviate from normal user patterns.
• Regularly evaluate the effectiveness of your security measures. Adapt and refine them based on real-world experiences and emerging threats.
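The behavioural-detection step above can be illustrated with a toy baseline heuristic: flag a user's daily data volume sent to GenAI apps when it deviates sharply from that user's own history. This is a simplified sketch under assumed thresholds, not a production anomaly detector.

```python
import statistics

def flag_anomalous_uploads(history_kb: list[float], today_kb: float,
                           k: float = 3.0) -> bool:
    """Flag today's upload volume to GenAI apps if it exceeds the user's
    own baseline by more than k standard deviations (a toy heuristic)."""
    if len(history_kb) < 2:
        return False  # not enough baseline data to judge
    mean = statistics.fmean(history_kb)
    stdev = statistics.pstdev(history_kb)
    # Floor the deviation so a flat baseline doesn't trigger on tiny noise.
    return today_kb > mean + k * max(stdev, 1.0)

# Example baseline: a user's recent daily volumes in KB (illustrative data).
baseline = [120.0, 90.0, 150.0, 110.0, 130.0]
```

In practice such a signal would feed continuous monitoring alongside threat modelling, rather than drive blocking decisions on its own.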
INTELLIGENTCIO AFRICA www.intelligentcio.com