Tarnveer Singh on How CISOs Can Ensure Responsible AI Rollout


Despite the significant security risks around AI tools, businesses are pressing ahead with deployment of these technologies.

Halting AI adoption is not an option. Instead, secure deployment must be the focus. This has been recognized by global regulators, with governments racing to publish guidance and laws around the safe and secure use of AI.

Tarnveer Singh, CISO at insurance firm The Exeter and Cyber Wisdom Ltd, has undertaken extensive research about AI security and privacy risks. In an interview with Infosecurity, he discusses practical steps security leaders can take to ensure AI tools are deployed with security baked in.

Singh also highlights the critical importance of psychology in cybersecurity, both in understanding the behaviors of threat actors and in enhancing the performance of security teams.

Infosecurity Magazine: What recommendations do you have for security leaders to ensure their organization uses AI tools securely?

Tarnveer Singh: Security leaders play a critical role in making sure AI tools are used in a safe, responsible manner. It’s essential we foster a culture that’s alert to security risks, and a well-structured roadmap can help guide your organization in this effort.

First, clear governance and policies are key. You’ll want to define rules around acceptable use, outline what data can be shared, improve data quality and clarify roles, whether that’s data scientists, data owners or stewards.

It’s equally important to establish guidelines for ethical AI use, address bias and ensure transparency, all while aligning with regulatory frameworks such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA) and the EU AI Act.

Next, it’s worth identifying and managing any “shadow AI”, those unauthorized tools quietly cropping up in various departments. Audits, surveys and technical scans can help uncover these, while central oversight helps minimize risks like data leaks.
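
As an illustration of what one such technical scan might look like, here is a minimal sketch that flags outbound traffic to well-known AI services in a web proxy log. The domain watchlist and the log format are assumptions made for the example, not a definitive inventory:

```python
# Minimal sketch of a "shadow AI" scan: flag outbound requests to known
# AI service domains in a CSV proxy log. The watchlist and log columns
# below are illustrative assumptions; adapt both to your own telemetry.
import csv
from collections import Counter

# Hypothetical watchlist of AI service endpoints; maintain your own.
AI_DOMAINS = {
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "api.cohere.ai",
    "huggingface.co",
}

def scan_proxy_log(path: str) -> Counter:
    """Count hits per (user, AI domain) in a CSV proxy log with
    columns: timestamp, user, destination_host."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), n in scan_proxy_log("proxy.csv").most_common(10):
        print(f"{user} -> {host}: {n} requests")
```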

Employee education is another crucial pillar. Staff should be trained to understand AI-specific risks like prompt injection, data exposure and hallucinations. Leaders and system owners must appreciate these threats and their mitigations.

Developers and data scientists should stay sharp on failure modes and contribute to informed decision-making. Security teams ought to guide users through AI security risks as part of standard InfoSec training and support developers with secure coding techniques tailored to AI.

On the technical front, systems should be designed not just for performance and functionality, but with security baked in from the start. Consider your threat modelling and make sure you have appropriate technical controls, all while balancing other demands like usability, legal obligations and ethical concerns.

If you’re importing models or weights from outside sources, treat them as potentially hostile and apply isolation or sandboxing techniques to avoid serious risks like remote code execution.
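
To make that concrete, here is a minimal sketch of loading third-party weights defensively, assuming a PyTorch checkpoint; the safer-format suggestion in the comments is one common option, not the only approach:

```python
# Sketch: treat third-party model weights as untrusted input.
# Loading a pickle-based checkpoint can execute arbitrary code during
# deserialization, which is one path to the remote-code-execution risk
# described above.
import torch

def load_untrusted_checkpoint(path: str):
    """Load third-party weights without trusting pickled code.
    weights_only=True restricts unpickling to tensors and primitive
    containers (available in recent PyTorch releases; the default in
    newer ones), blocking arbitrary object deserialization."""
    return torch.load(path, map_location="cpu", weights_only=True)

# Stronger still: use a format with no code execution at all, e.g.
#   from safetensors.torch import load_file
#   state_dict = load_file("model.safetensors")
# and run any loading step inside an isolated sandbox or container
# with no network access, for defense in depth.
```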

Tools such as data loss prevention, redaction services, access control and anomaly detection can also be vital. Consider good practices like AI Observability and AI Detection & Response.
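
For a flavor of what a redaction service does, the sketch below scrubs obvious personal data from a prompt before it leaves the organization. The regex patterns are deliberately crude illustrations; production DLP relies on much richer detection:

```python
# Minimal sketch of a redaction step in front of an external LLM call.
# The patterns are illustrative only; real DLP uses far richer detectors
# (named-entity recognition, checksums, context rules).
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),   # crude card match
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(prompt: str) -> str:
    """Replace matches of each pattern with its placeholder token."""
    for pattern, token in REDACTIONS:
        prompt = pattern.sub(token, prompt)
    return prompt

print(redact("Email jane.doe@example.com about card 4111 1111 1111 1111"))
# -> "Email [EMAIL] about card [CARD]"
```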

Don’t forget about vendors and third-party integrations either. Scrutinize their security posture, compliance standards and data-handling policies. Be thoughtful about supply chain risks, whether you’re building in-house, fine-tuning existing models or using APIs.

Due diligence is a must, especially when working with external providers or libraries, to ensure safeguards like preventing unauthorized model loading are in place. Transparency around how models were trained and what boundaries exist is non-negotiable.

It’s also smart to take a phased approach to security. Begin by assessing your tools and data flows, then move on to developing sound policies through collaboration across teams, followed by rolling out automated controls and monitoring. And lastly, never stop maintaining oversight.

Make risk assessments a regular practice. Look at how a compromised AI system could impact not just the organization, but users and society at large. Reassess permissions often, clean up sensitive files, retrain models when needed, and stay alert to new threats like adversarial AI and deepfakes. I explore this topic further in my book, Artificial Intelligence and Ethics: A Field Guide for Stakeholders.

IM: How can organizations navigate the plethora of frameworks and regulations related to AI security while retaining a competitive edge with the use of AI?

TS: It’s fair to say that organizations face a regulatory whirlwind when it comes to AI. With frameworks like the UK’s AI Code of Practice, international standards such as ISO 42001, and heavyweight regulations like the EU’s AI Act, keeping pace can feel daunting.

That said, smart organizations can turn this challenge into an opportunity. The key is to approach these frameworks not as rigid obligations but as a chance to build stronger, more resilient AI systems. Many of these guidelines share core principles (think transparency, risk management and ethical use), which means you don’t have to start from scratch.

When organizations are transparent about how their AI models work, what data they’re trained on and how decisions are made, it builds trust not just with regulators but also with users and stakeholders. Transparency helps demystify the technology and encourages better accountability if something goes wrong.

“The key is to approach these frameworks not as rigid obligations but as a chance to build stronger, more resilient AI systems”

Risk management, meanwhile, ensures that you’re not just reacting to issues, but anticipating and mitigating them before they cause harm. In the context of AI, that means identifying vulnerabilities, like the potential for data leakage, model bias or adversarial attacks, and putting safeguards in place from the outset. It’s also about being ready to adapt as threats evolve, especially with how rapidly AI is developing.

Ethical use ties it all together. It’s about making decisions that consider the broader impact on individuals, communities, and society. From avoiding discriminatory outcomes to respecting privacy and consent, ethics helps steer AI in a direction that’s sustainable and human-centered.

When these principles are embedded into your AI strategy, you create a foundation for systems that are not just secure but also resilient, trusted and fit for the future. For more complex organizations, developing a unified governance model can help align requirements across frameworks, so businesses can reduce duplication and stay nimble.

It also helps to embed security and ethics into the design phase, so you’re future-proofing your systems while ticking compliance boxes. Keeping cross-functional teams connected, including legal, security and data science, makes sure nothing gets missed and everyone’s pulling in the same direction.

If you keep an eye on changes in regulation while anchoring your strategy in sound governance, there’s no reason compliance can’t go hand-in-hand with innovation. In fact, done right, it can be a competitive advantage.

IM: Have you observed an evolution in the motivations of cybercriminals over recent years? To what extent has this impacted the way organizations are targeted?

TS: There’s been a shift in the motivations driving cyber threat actors over the past few years. In the early days, many attacks were driven by curiosity or the desire to show off technical skills. But now, financial gain is front and center, with ransomware and data theft becoming go-to tactics for criminal groups.

More recently, hybrid threat actors have emerged, blending motives like financial crime, influence operations and sabotage all in one campaign. This evolution means organizations can no longer rely on a one-size-fits-all defense.

Attackers are more strategic, more persistent and often better resourced. Targets have expanded beyond just big corporations and governments; mid-sized firms, supply chains and even charities are now fair game.

To stay ahead, organizations need to understand not just the technical side of threats, but the intent behind them. That insight helps shape smarter defenses and more agile response plans.

Scattered Spider are a fascinating, and frankly worrying, example of how cyber threat actors have evolved in both tactics and motivation. What sets them apart isn’t just their technical skill, but their mastery of social engineering and their hunger for notoriety.

While financial gain is clearly a major driver, their motivations go deeper than just money.

Many members of Scattered Spider are young, English-speaking individuals, some reportedly just teenagers, operating out of the UK and US. They’re part of loosely affiliated online communities like “The Com,” where reputation and peer recognition carry serious weight.

That means their attacks aren’t just about profit, they’re also about proving themselves, scoring wins and gaining status among fellow hackers. Their approach is highly strategic. They often impersonate employees to trick IT help desks into granting access, bypassing even robust security measures like multi-factor authentication.

“Their attacks aren’t just about profit, they’re also about proving themselves, scoring wins and gaining status among fellow hackers”

Once inside, they don’t rush. They explore internal systems, escalate privileges and exfiltrate sensitive data before deploying ransomware. It’s a blend of patience, manipulation and technical know-how that makes them especially dangerous.

If they succeed in breaching one company in a sector, they’ll often replicate the same tactics across similar organizations. Recently, they’ve shifted focus from retail and telecoms to the aviation industry, exploiting legacy systems and third-party vendors to gain access.

In short, Scattered Spider isn’t simply a criminal group. They’re a reflection of how cybercrime is becoming more decentralized, reputation-driven, and psychologically savvy. I explore this topic further in my new book, The Psychology of Cybersecurity: Hacking and the Human Mind.

IM: How can psychological techniques be used to enhance the performance of security teams?

TS: Psychological techniques can make a real difference in how security teams perform and grow professionally. It’s not just about technical skills anymore; understanding human behavior, motivation and decision-making is becoming central to building resilient, high-performing teams.

For instance, applying principles like Nudge Theory can help encourage secure behaviors without relying on fear or punishment. Techniques drawn from cognitive psychology can improve how teams respond under pressure, helping them stay focused and make better decisions during incidents.

Emotional intelligence and empathy also play a role. Leaders who understand what drives their team members can foster a more collaborative and supportive environment, which is vital in high-stress situations. Training programs that incorporate gamification, simulation, and storytelling tend to be more engaging and memorable, leading to better retention and real-world application.

And let’s not forget the importance of psychological safety. When team members feel safe to speak up, admit mistakes, and share ideas, it strengthens the overall security posture. So, while firewalls and encryption are essential, it’s the human element, shaped by psychology, that often makes the biggest impact.

IM: What are your biggest concerns in cybersecurity today?

TS: One of the biggest concerns in cybersecurity today is the sheer speed at which threats are evolving, especially with the rise of AI-powered attacks. We’re seeing everything from deepfake scams and voice cloning to automated phishing campaigns that are far more convincing than anything we’ve dealt with before. It’s not just the sophistication of these attacks that’s worrying, but how accessible the tools have become to less experienced threat actors.

IM: What are your biggest successes in cybersecurity today?

TS: We’re seeing a cultural change. Security teams are beginning to see the importance of human security and psychology as much as the technical side. Cybersecurity’s no longer just the IT crowd’s problem—it’s gone boardroom-level.

Executives actually care, which means budgets are healthier and awareness is up. Plus, there are some great public-private collaborations happening, with threat-sharing across sectors making everyone stronger.

Cybersecurity firms are increasingly leveraging AI and machine learning to detect and respond to threats more effectively. These technologies enable them to analyze vast amounts of data to identify anomalies and potential threats that might be missed by traditional security systems.
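
As a rough illustration of that idea, the sketch below trains scikit-learn’s IsolationForest on simulated “normal” activity and flags an outlier. The features chosen (egress volume, login hour, failed logins) are assumptions made for the example; real deployments engineer far richer signals:

```python
# Sketch of ML-based anomaly detection on log-derived features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated "normal" activity: [bytes_out_mb, login_hour, failed_logins]
normal = np.column_stack([
    rng.normal(50, 10, 500),   # ~50 MB egress per session
    rng.normal(13, 2, 500),    # mid-day logins
    rng.poisson(1, 500),       # occasional failed login
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A suspicious event: huge egress at 3 a.m. after many failed logins
suspect = np.array([[900, 3, 25]])
print(model.predict(suspect))   # -> [-1], i.e. flagged as anomalous
```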

Effective incident response relies on the ability to quickly contain threats and prevent them from spreading. Tools like endpoint detection and response (EDR) and network segmentation are helping to isolate affected systems and limit the damage caused by an attack.
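
For a sense of what automated containment can look like, here is a bare-bones sketch that drops a compromised host’s traffic at a Linux gateway firewall. Real EDR platforms isolate hosts through their own agents and APIs; this iptables approach is only an illustration and needs root privileges:

```python
# Bare-bones containment sketch: block all forwarded traffic to and
# from a suspect host. Assumes root on a Linux gateway in the path.
import subprocess

def isolate_host(ip: str) -> None:
    """Insert DROP rules for traffic from and to the given address."""
    subprocess.run(["iptables", "-I", "FORWARD", "-s", ip, "-j", "DROP"],
                   check=True)
    subprocess.run(["iptables", "-I", "FORWARD", "-d", ip, "-j", "DROP"],
                   check=True)

# Example: isolate_host("10.0.5.23") after an EDR alert fires.
```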

Oh, and not to forget ransomware responses. These used to cripple organizations for weeks. Now, with better backup strategies, smarter incident response and coordinated takedowns of ransomware gangs, we’re seeing gradual improvements, with quicker bounce-backs and fewer payouts. That’s a major win.

IM: If you could give one piece of advice to fellow CISOs, what would it be?

TS: If I were sliding into a room full of CISOs with one pearl of wisdom in my back pocket, I’d say this: “Make relationships your strongest security layer.”

Technology is vital, but without trust, communication and collaboration across the business, you’re just fighting fires alone.

Build those bridges early. Chat with legal, lean on HR, educate the executives and get finance on your side. Make cybersecurity everyone’s business and not just yours tucked away in a dark office with blinking monitors.

That kind of buy-in turns cybersecurity from a department into a culture, and when that happens, you’re not just defending, you’re leading.
