As generative AI innovation continues at a breakneck pace, concerns around security and risk have become increasingly prominent. Some lawmakers have requested new rules and regulations for AI tools, while some technology and business leaders have suggested a pause on training of AI systems to assess their safety.
Generative AI isn’t going away
The reality is that generative AI development is not stopping. Organizations need to act now to formulate an enterprise-wide strategy for AI trust, risk and security management (AI TRiSM). There is a pressing need for a new class of AI TRiSM tools to manage data and process flows between users and the companies that host generative AI foundation models.
There are currently no off-the-shelf tools in the market that give users systematic privacy assurances or effective content filtering for their engagements with these models, such as screening out factual errors, hallucinations, copyrighted materials or confidential information.
AI developers must urgently work with policymakers, including new regulatory authorities that may emerge, to establish policies and practices for generative AI oversight and risk management.
Significant risks for enterprises
Generative AI raises a number of new risks. “Hallucinations” and fabrications, including factual errors, are some of the most pervasive problems already emerging with generative AI chatbot solutions. Training data can lead to biased, off-base or wrong responses, but these can be difficult to spot, particularly as solutions are increasingly believable and relied upon.
Deepfakes, in which generative AI is used to create content with malicious intent, are another rapidly growing problem. These fake images, videos and voice recordings have been used to attack celebrities and politicians, to create and spread misleading information, and even to create fake accounts or take over and break into existing legitimate accounts.
In a recent example, an AI-generated image of Pope Francis wearing a fashionable white puffer jacket went viral on social media. While this example was seemingly innocuous, it provided a glimpse into a future where deepfakes create significant reputational, counterfeit, fraud and political risks for individuals, organizations and governments.
Data privacy is also of concern, especially given employees can easily expose sensitive and proprietary enterprise data when interacting with generative AI chatbot solutions. These applications may indefinitely store information captured through user inputs, and even use information to train other models — further compromising confidentiality. Such information could also fall into the wrong hands in the event of a security breach.
Then there are copyright issues. Generative AI chatbots are trained on large amounts of internet data that may include copyrighted material. As a result, some outputs may violate copyright or intellectual property (IP) protections. Without source references or transparency into how outputs are generated, the only way to mitigate this risk is for users to scrutinize outputs to ensure they don’t infringe on copyright or IP rights.
Finally, cybersecurity concerns pose significant risk. In addition to more advanced social engineering and phishing threats, attackers could use these tools for easier malicious code generation.
Vendors who offer generative AI foundation models assure customers they train their models to reject malicious cybersecurity requests; however, they don’t provide users with the tools to effectively audit all the security controls in place. They also put a lot of emphasis on “red teaming” approaches, which require users to put their full trust in the vendors’ abilities to execute on security objectives.
Actions to manage generative AI
It’s important to note that there are two general approaches to leveraging ChatGPT and similar applications. Out-of-the-box model usage leverages these services as-is, with no direct customization. A prompt engineering approach uses tools to create, tune and evaluate prompt inputs and outputs.
For out-of-the-box usage, organizations must implement manual reviews of all model output to detect incorrect, misinformed or biased results. Establish a governance and compliance framework for enterprise use of these solutions, including clear policies that prohibit employees from asking questions that expose sensitive organizational or personal data.
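One way to enforce such a policy is to screen prompts for sensitive data before they ever reach an external service. The sketch below is a minimal, hypothetical illustration using simple regular expressions; the specific patterns (including the internal project name) are invented for the example, and a real deployment would rely on an enterprise DLP engine rather than hand-rolled rules.

```python
import re

# Hypothetical patterns; a real deployment would use an enterprise DLP engine.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_project": re.compile(r"\bProject\s+Atlas\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list:
    """Return the names of sensitive-data patterns found in a prompt.

    An empty list means the prompt passed the policy check.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

violations = screen_prompt("Summarize Project Atlas for bob@example.com")
if violations:
    print(f"Blocked by policy: {violations}")
```

A check like this can sit in a browser extension, gateway plugin or internal chat wrapper, so policy violations are caught before data leaves the organization.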
Monitor unsanctioned uses of ChatGPT and similar solutions with existing security controls and dashboards to catch policy violations. For example, firewalls can block enterprise user access, security information and event management systems can monitor event logs for violations, and secure web gateways can monitor disallowed API calls.
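As a rough sketch of the log-monitoring idea, the snippet below flags gateway log entries that reach unsanctioned generative AI endpoints. The log format and the domain list are assumptions for illustration; in practice you would adapt both to your gateway's actual export format and your organization's policy list.

```python
# Domains treated as unsanctioned in this example; adjust to local policy.
DISALLOWED_DOMAINS = {"api.openai.com", "chat.openai.com"}

def flag_violations(log_lines):
    """Yield (user, domain) pairs for requests to disallowed domains.

    Assumes whitespace-separated lines: timestamp user domain path
    """
    for line in log_lines:
        fields = line.split()
        if len(fields) >= 3 and fields[2] in DISALLOWED_DOMAINS:
            yield fields[1], fields[2]

logs = [
    "2023-05-01T09:14:02 alice api.openai.com /v1/chat/completions",
    "2023-05-01T09:15:10 bob intranet.example.com /wiki",
]
for user, domain in flag_violations(logs):
    print(f"Policy violation: {user} -> {domain}")
```

In production this logic would typically live in a SIEM correlation rule rather than a standalone script, but the detection principle is the same.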
For prompt engineering usage, all of these risk mitigation measures apply. Additionally, steps should be taken to protect internal and other sensitive data used to engineer prompts on third-party infrastructure. Create and store engineered prompts as immutable assets.
These assets can represent vetted engineered prompts that can be safely used. They can also represent a corpus of fine-tuned and highly developed prompts that can be more easily reused, shared or sold.
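One simple way to make prompt assets immutable is to store them content-addressed, keyed by a hash of the prompt and its metadata. The sketch below is a minimal in-memory illustration of that idea; a real system would back the registry with write-once storage and access controls, and the prompt and metadata shown are invented examples.

```python
import hashlib
import json

class PromptRegistry:
    """Stores vetted engineered prompts as immutable, content-addressed assets."""

    def __init__(self):
        self._assets = {}

    def register(self, prompt: str, metadata: dict) -> str:
        """Store a vetted prompt under its content hash and return the asset ID."""
        record = json.dumps({"prompt": prompt, "metadata": metadata},
                            sort_keys=True)
        asset_id = hashlib.sha256(record.encode()).hexdigest()
        # Re-registering identical content is a no-op, so stored assets never mutate.
        self._assets.setdefault(asset_id, record)
        return asset_id

    def fetch(self, asset_id: str) -> dict:
        """Retrieve a stored asset by its content hash."""
        return json.loads(self._assets[asset_id])

registry = PromptRegistry()
asset_id = registry.register(
    "Summarize the following contract clause in plain English: {clause}",
    {"owner": "legal-team", "reviewed": True},
)
print(asset_id)
```

Because the ID is derived from the content, any edit to a prompt produces a new asset rather than silently altering a vetted one, which supports the reuse, sharing and sale scenarios described above.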
By Avivah Litan, VP Analyst at Gartner