Enterprises want to mitigate the risks associated with generative AI (GenAI).
For the survey, conducted in early April 2023, Gartner polled 150 IT and information security executives in its peer community whose organizations are using, planning to use or exploring GenAI or generative models.
“IT, security and risk management leaders need to consider supporting an enterprise-wide AI TRiSM (trust, risk and security management) strategy in addition to implementing security tools,” said Avivah Litan, an analyst at Gartner. “AI TRiSM manages data and process flows between users and enterprises hosting generative AI foundation models and must be continuous to protect an organization.”
IT is responsible for GenAI security
While 93 percent of IT and security executives surveyed said they are at least somewhat involved in their company’s GenAI security and risk management efforts, only a quarter said they are responsible.
Among respondents who are not themselves responsible for GenAI security or risk management, 44 percent said ultimate responsibility lies with IT, while 20 percent said it lies with their organization’s governance, risk and compliance department.
The risks associated with GenAI are significant, ongoing and will continue to evolve. Respondents said undesirable outputs and unsafe code are among their top concerns when using GenAI: 57 percent are concerned about secrets being leaked in AI-generated code, and 58 percent about incorrect or biased results.
“Companies that don’t have a handle on AI risk will see their models not work as intended and, in the worst case scenario, cause human or property damage,” Litan said. “This leads to safety failures, financial and reputational losses, and harm to individuals through incorrect, manipulated, unethical or biased results. AI missteps can also cause companies to make poor business decisions.”