Healthcare workers regularly upload sensitive data to GenAI, cloud accounts

by CybrGPT

Healthcare organizations are facing a growing data security challenge from within, according to a new report from Netskope Threat Labs. The analysis reveals that employees in the sector are frequently attempting to upload sensitive information, including potentially protected health data, to unauthorized websites and cloud services. Among the most common destinations are AI tools like ChatGPT and Gemini.

Healthcare GenAI data policy violations

Over the past 12 months, 81% of all data policy violations in healthcare organizations involved regulated healthcare data. This includes information protected by local, national, or international laws, such as sensitive medical and clinical records. The remaining 19% of violations involved other sensitive assets such as passwords and keys, source code, or intellectual property. Many of these incidents stemmed from employees uploading data to personal cloud storage services like Microsoft OneDrive or Google Drive.

Generative AI is widely embedded in healthcare environments, with 88% of organizations reporting usage. Some 44% of data policy violations involving generative AI included regulated healthcare data, while others involved source code (29%), intellectual property (25%), and passwords or keys (2%). The risk is compounded by the prevalence of applications that either use personal data for model training (present in 96% of organizations) or embed generative AI features (98%).

A key issue is the use of personal GenAI accounts in the workplace. More than two-thirds of healthcare employees using these tools send sensitive data to accounts outside organizational control. This undermines visibility for security teams and limits their ability to detect or prevent potential data leaks in real time.

Data protection guardrails

The report points to several guardrails that healthcare organizations are adopting. The first is deploying organization-approved GenAI applications, which centralizes GenAI usage in tools approved, monitored, and secured by the organization and reduces reliance on personal accounts and “shadow AI”. The use of personal GenAI accounts by healthcare workers, while still high, has already declined from 87% to 71% over the past year as organizations increasingly shift toward approved GenAI solutions.

The second is deploying Data Loss Prevention (DLP) policies that monitor and control access to GenAI applications and define the type of data that can be shared with them, providing an added layer of security should workers attempt risky actions. The proportion of healthcare organizations deploying DLP policies for GenAI has increased from 31% to 54% over the past year.
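To make the idea concrete, here is a minimal, hypothetical sketch in Python of how such a DLP rule might classify an attempted upload. The destination list, detection patterns, and verdict names are assumptions made for illustration; they do not reflect Netskope's implementation or any specific product.

```python
import re

# Hypothetical destination list and detection patterns, for illustration only;
# real DLP products ship far richer classifiers and data identifiers.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com"}

PATTERNS = {
    "medical record number": re.compile(r"\bMRN[:\-]?\s*\d{6,10}\b", re.IGNORECASE),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credential-like token": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{20,}\b"),
}

def evaluate_upload(destination: str, content: str) -> str:
    """Return 'allow', 'coach', or 'block' for an attempted upload."""
    if destination not in GENAI_DOMAINS:
        return "allow"  # this toy policy only scopes GenAI destinations
    hits = [name for name, rx in PATTERNS.items() if rx.search(content)]
    if "medical record number" in hits or "US Social Security number" in hits:
        return "block"  # regulated health data: hard stop
    if hits:
        return "coach"  # other sensitive data: warn the user first
    return "allow"

print(evaluate_upload("chat.openai.com", "Patient MRN: 00482913, labs attached"))
# prints "block"
```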

The third is deploying real-time user coaching, which alerts employees when they are about to take a risky action. For example, if a healthcare worker attempts to upload a file containing patient names to ChatGPT, a prompt asks whether they really want to proceed. A separate report shows that a large majority of employees (73%) across all industries do not proceed when presented with coaching prompts.
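Continuing the hypothetical sketch above, the coaching step might look something like the following. In practice the prompt would be rendered in the user's browser or by an endpoint agent rather than a terminal, and the decision would be logged for security review; the function and messages here are illustrative assumptions.

```python
def coach_user(filename: str, findings: list[str]) -> bool:
    """Show a coaching prompt and return True only if the user opts to proceed.

    Hypothetical sketch: a real coaching tool renders this in the browser or
    endpoint agent and records the user's decision.
    """
    print(f"Warning: '{filename}' appears to contain {', '.join(findings)}.")
    print("Uploading regulated patient data to personal GenAI accounts violates policy.")
    answer = input("Do you still want to proceed? [y/N] ").strip().lower()
    return answer == "y"

if coach_user("labs_export.csv", ["patient names", "medical record numbers"]):
    print("Upload allowed; decision logged for review.")
else:
    print("Upload cancelled.")  # the common outcome: 73% of users stop here
```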
