Securing shadow AI and misuse of commercial LLMs

Published on May 20, 2024

Like it or not, your team is likely using public LLMs like ChatGPT without proper security precautions. In fact, a staggering 55% of corporate employees say they’ve used unapproved generative AI tools at work, and that number is only growing.

We recently published a report on why prompt injection is the biggest new threat vector associated with LLM adoption. But while prompt injection is an up-and-coming threat, there is another piece of LLM security that should be top-of-mind for security leaders. Some might say, “We don’t use any LLMs in our tech stack, so we’re not vulnerable to a prompt injection attack.” Even if that’s true, it doesn’t mean your company isn’t at risk of leaking data via new LLM-related vectors.

Because the reality is that your organization is using LLMs—just not ones that you’re managing yourself, and often without oversight. Custom-built enterprise LLM applications may be better siloed and secured, but employees will still turn to public LLMs like ChatGPT, Claude, Gemini, and others because of higher awareness and, in some cases, ease of use.

[Image: employee usage of prohibited generative AI tools. Via Salesforce]

The security community has struggled with “shadow IT”—the use of software, hardware, or applications without the knowledge of centralized IT management—for more than 15 years, with very limited success. The vast sprawl of productivity tools leveraging black-box LLMs in the back end in recent years has led to a new phenomenon: the use of AI applications or services without the knowledge of centralized security / IT / data teams. Some are calling this phenomenon “shadow AI.”

After all, employees will continue to be motivated to use GenAI to find more efficient workflows—they want to be great at their jobs! But they may open a Pandora’s box of risk in the process. This isn’t an “up-and-coming” problem—this is a giant threat vector already. 

Approaches to safe public LLM usage

Generative AI might be the defining technology advancement of this era, but organizations have been slow to put policies in place to secure it. A recent survey indicated that only 26% of organizations report having a generative AI security policy. The few that have established policies aren’t inherently safe, either: many security leaders had a knee-jerk response to fears around shadow AI and were quick to block employee access.

Unfortunately, simply blocking company networks from loading public URLs like ChatGPT is insufficient. The Cambrian explosion of LLM-driven applications has spawned more than 12,000 generative AI tools, many of which are thin wrappers around the same well-known public LLMs. 

By denying access to well-known and more transparently operated market leaders like ChatGPT or Claude, security leaders may push employees underground toward less centralized, less well-documented LLM solutions that could pose even bigger risks. Trying to identify all these instances of shadow AI poses perhaps an even more daunting challenge than finding and securing SaaS AI apps, a pain point already top-of-mind for many CISOs.

Solving the shadow AI challenge will require the best ideas from other verticals of cybersecurity, as well as new innovations. Some fundamental elements that a well-crafted generative AI security posture will include are:

  • LLM governance and real-time detection: Flagging user activity that seems to indicate LLM usage and restricting transfer of sensitive data
  • Continued push to least privilege: Reducing employee over-access to data to minimize what they could leak
  • Risk awareness: Educating employees on the latest risks posed by unapproved LLM usage
  • Secure, central AI tools: Promoting awareness of, and access to, thoroughly reviewed AI products with documented data policies and enterprise admin access controls

#1: Swiper, no swiping: Keeping PII and crown jewels out of LLMs in real time

Using proprietary company data as part of a prompt in a public LLM like ChatGPT can lead to it being repeated in the model’s output, not just for you, but for others, as it may be leveraged as training data. The last thing you need is a well-meaning employee including sensitive company IP in a ChatGPT prompt, just for one of your competitors to prompt for similar data in the future and gain a competitive advantage over you. 

Additionally, including in a prompt any personally identifiable information (PII) or other customer data that may fall under data privacy regulations opens up a whirlwind of regulatory concerns that can create serious headaches for organizations of all sizes.

As we discussed in our last post on prompt injection, the prompt you input is read by the LLM as a sequence of integer token IDs rather than words with meaning attached, so it’s quite difficult for the model itself to identify information that may be sensitive to you or your organization.
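
To make that concrete, here is a minimal sketch of the tokenization step, assuming the open-source tiktoken library; the encoding name and example prompt are illustrative:

```python
# A minimal sketch of how a prompt becomes integer token IDs before the model sees it.
# Assumes the open-source tiktoken tokenizer (pip install tiktoken); the encoding
# name below is the one used by several recent OpenAI models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "Summarize our unannounced acquisition of Acme Corp at a $120M offer price."
token_ids = enc.encode(prompt)

print(token_ids[:8])          # just a list of integers
print(enc.decode(token_ids))  # round-trips back to the original text
```

Nothing in that integer sequence marks “Acme Corp” or the deal price as confidential, which is why sensitivity has to be caught before the prompt is sent, not by the model afterward.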

[Image: SignalFire’s blog post on prompt injection]

Thus, to stop “crown jewel” IP, trade secrets, or customer data from making their way into training data sets, we need to catch that data before it’s entered into public LLMs in the first place. This is where new AI-era functionality might need to develop. The vast majority of people doing ad hoc tasks with LLMs like ChatGPT are doing so in web browsers, which gives us a few intervention points to consider. Most notable are secure enterprise browsers and browser extensions.

Enterprise browsers like Talon and Island aren’t new, but interest in them remains strong (as evidenced by Palo Alto Networks acquiring Talon for $625 million last year). The premise is simple: rather than attempting to secure all traffic at the network level by keeping employees in-office or mandating virtual desktop instances for remote work, secure internet access at the application layer, with a browser built specifically with enterprise security in mind. Other companies, like LayerX and Aim Security, believe that overhauling the full browser experience for a company’s entire workforce is a heavy lift, and have built browser extension products instead, letting users keep browsers such as Chrome, Edge, and Safari while layering on security features.

These companies provide the infrastructure to better understand your users’ traffic and when they might be interfacing with LLMs. But how can we tell in real time which data is sensitive or proprietary? This is a classic data loss prevention (DLP) problem with new depth: the fear of inadequately classified sensitive data is slowing AI adoption.

Enter startups like Harmonic and Credal, which can scan inputs to LLM interfaces in real time, connect to your company’s datastores to identify what is proprietary company information, perform human-level classification, and create a real-time data protection layer to avoid leakage. In some cases, this may entail allowing some AI apps with well-documented privacy policies to have access to more information than those that are unknown.
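
As a toy illustration of the idea (not how Harmonic or Credal actually work; real products pair classifiers with your company’s datastores), even a simple pattern-based check on the prompt text can catch the most obvious leaks before they leave the browser or proxy:

```python
# Toy pre-prompt DLP check: flag or redact prompts containing obviously
# sensitive patterns before they reach a public LLM. The patterns here are
# illustrative; production systems add ML classifiers and datastore context.
import re

SENSITIVE_PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key":     re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def redact(prompt: str) -> str:
    """Replace matched spans with placeholders so the rest of the prompt can go through."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt

text = "Email jane.doe@acme.com about the claim tied to SSN 123-45-6789"
findings = scan_prompt(text)
if findings:
    print("Blocked:", findings)
    print(redact(text))
```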

The companies focused on this problem broadly fall into the category of AI data security, ensuring that what’s being said to ChatGPT has been reviewed for security purposes. This needs to happen at runtime and relies on the same principles as data security posture management (DSPM), which SignalFire saw firsthand through our investment in Dig Security, acquired by Palo Alto Networks.

There are many approaches to exactly how to triage access and data exchange, but the ideal state fundamentally comes down to two principles: full visibility (hence the browser and browser extension) and a firm grasp on what is and is not sensitive (hence the next-generation AI governance/integrity platforms). Displayed above is a selection of companies working on these two security vectors (if your startup is missing, reach out to us!).

#2: 50 years later, we’re still struggling with “least privilege”

“Damn it, you're on a need-to-know basis, and you don't need to know!” That’s a nearly 30-year-old quote from the movie The Rock, but the concept goes back to 1974, when MIT security legend Jerome Saltzer introduced least privilege: the idea that every user should have only the read and write permissions on the resources they need to do their job, and no more.

In a perfect world, the AI governance and integrity platforms (from #1 above) would ensure that sensitive data that users have access to won’t make its way into LLM applications. But we don’t live in a perfect world—so the next best step for security leaders is to reduce the surface area of data that can be leaked.

Actually putting this into practice is one of the largest pain points for enterprises, and countless security companies have sold the dream of least privilege to CISOs over the years. The conflict is structural: the security team is incentivized to enforce least privilege, while developers, data scientists, and business users are motivated to preemptively overpermission themselves so they can get their jobs done as efficiently as possible. No one wants to wait around for approvals while trying to ship or sell. Access reviews, where security teams audit these permission sets, are one of the most universally despised processes in tech today.

[Image: SignalFire’s blog post on Palo Alto Networks acquiring Dig]

We’ve spent 50 years trying to get this right, and still we have a long way to go: StrongDM’s data shows that one in three employees has access to systems they haven’t used in 90 days, a telltale sign of overpermissioning. That unnecessary access comes with lots of downsides. Malicious insiders could misuse the data, and compromised credentials could lead to exfiltration by an attacker. 
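
As a back-of-the-envelope sketch of how a team might surface those review candidates themselves (the record schema and 90-day threshold below are assumptions, not any particular vendor’s format), the core check is just comparing granted entitlements against recent access logs:

```python
# Sketch: flag entitlements that haven't been exercised in the last 90 days.
# The grant and access-log records below are hypothetical stand-ins for exports
# from your identity provider and audit logs.
from datetime import datetime, timedelta

grants = [
    {"user": "alice", "resource": "billing-db"},
    {"user": "bob",   "resource": "prod-s3"},
]
access_log = [
    {"user": "alice", "resource": "billing-db", "ts": datetime.now() - timedelta(days=5)},
    # bob has not touched prod-s3 recently
]

cutoff = datetime.now() - timedelta(days=90)
recently_used = {(e["user"], e["resource"]) for e in access_log if e["ts"] >= cutoff}

stale_grants = [g for g in grants if (g["user"], g["resource"]) not in recently_used]
for grant in stale_grants:
    print(f"Review candidate: {grant['user']} -> {grant['resource']} (unused for 90+ days)")
```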

This is important because GenAI introduces yet another risk: increasing the surface area of data that employees with good intentions can leak into LLMs in the name of productivity. As discussed above, proprietary data included in a public LLM prompt may be leveraged as training data and repeated in the model’s output for others, including your competitors, and any PII or regulated customer data in a prompt carries the same regulatory headaches.

With this new AI risk layered onto an already burning problem, we expect LLM security policies to double down on the need for least privilege. The challenge here isn’t new, but the stakes are higher, and security leaders need to align with their teams that the Great GenAI Shift makes least privilege more important than ever.

#3: Staying informed with more than just phishing training

In the same way organizations previously taught basic security literacy, including how to spot phishing scams and keep passwords safe, they must now expand to teach AI literacy. The average enterprise user can’t be expected to understand the intricacies of data leakage and model training, so it’s on security leaders to empower the rank-and-file with clear and tangible explanations of their role in keeping the company’s crown jewels secure.

Education about why LLMs pose risks may be more effective than an opaque ban. It’s also helpful to provide examples of safe and timely ways to use public LLMs, which people respond to better than just being told “don’t use them.” We expect this kind of education to slowly make its way into corporate training modules, but you’ll need to proactively inject it into your company culture until then.

As an aside, we’ll write more about spear phishing at scale in the future, but given the superpowers that LLMs give to cyberattackers, security training and awareness more generally stand to be hugely important in the next generation of cyber defense.

#4: Legalize (and regulate) your AI usage

The most effective solution to shadow AI might just be to give the people what they want. Many enterprises are weighing the benefits of developing an internal suite of AI-driven productivity tools for employees, tailored to the use cases of their industry or function. Organizations can choose to build on open-source LLMs running on their own managed servers to keep all of their data within their own four walls. This will, of course, be taxing on engineering resources, but the benefit of keeping employees out of the “wild west” of public LLMs for their mission-critical AI functions seems well worth it.
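
For teams that take the self-hosted route, one common pattern is to expose the internal model behind an OpenAI-compatible API (as inference servers like vLLM and Ollama can do) so existing tooling keeps working while prompts stay inside company infrastructure. A sketch, with a placeholder URL, token, and model name:

```python
# Sketch: routing employee prompts to a self-hosted open-source model instead of a
# public LLM. Assumes an internal, OpenAI-compatible inference endpoint (e.g., one
# served by vLLM or Ollama); the URL, token, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # traffic never leaves your network
    api_key="internal-gateway-token",                # issued by your own gateway, not OpenAI
)

response = client.chat.completions.create(
    model="llama-3-70b-instruct",  # whichever open-source model you host
    messages=[{"role": "user", "content": "Summarize this internal memo for the exec team."}],
)
print(response.choices[0].message.content)
```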

Another route is to develop commercial agreements with one of the large players. These platforms support SSO, and with the right enterprise contracting, they can become just another directly managed SaaS app, with data security policies built into account privileges. This turns the problem from an unknown set of risks into a known, bounded, and somewhat controllable risk plane.

Where we go from here

The three major areas we believe are poised for incredible growth in the wake of employee GenAI usage are AI governance and visibility platforms, browser security players, and a category we’ve been covering for years: identity and access management suites.

If you’re building in any of these spaces, we’d love to speak with you. We hosted an AI Security event at RSA with luminaries in the space and a number of F500 CISOs, and we continue to discuss AI security with security leaders at large enterprises all around the world.

Building a startup relevant to this theme? Looking to build an LLM security policy for your organization? Disagree wildly with our takes? Reach out to me directly at t@signalfire.com and let’s chat.

*Portfolio company founders listed above have not received any compensation for this feedback and did not invest in a SignalFire fund. Please refer to our disclosures page for additional disclosures.
