This collaborative project will deliver tools and assessments, with the ultimate goal of reducing the utility of artificial intelligence for cybercriminals.
Artificial intelligence (AI) is a technology that has quickly gained traction among users and businesses for its ability to automate tasks and enhance creativity. But it is also being used by cybercriminals to devise more effective attacks.
That is why Meta, which competes in this market with its Llama language model, has launched Purple Llama, a project that provides tools and assessments so developers can build responsibly with open generative AI models.
Purple Llama is supported by a large group of industry partners, including AI Alliance, AMD, Anyscale, AWS, Bain, Cloudflare, Databricks, Dell Technologies, Dropbox, Google Cloud, Hugging Face, IBM, Intel, Microsoft, MLCommons, Nvidia, Oracle, Orange, Scale AI and Together.AI.
Its name stems from concepts familiar in the cybersecurity world. “We believe that to truly mitigate the challenges presented by generative AI, we must adopt both attack (red team) and defense (blue team) postures,” Meta explains. Combining the two colors yields purple, and with it a project built around a “collaborative approach to assessing and mitigating potential risks.”
On the one hand, Meta will offer a set of standards-based benchmarks to quantify security risk, evaluate how often AI models suggest insecure code, and make it harder to use them to generate cyberattacks, thereby reducing the usefulness of large language models for attackers.
On the other hand, Purple Llama introduces Llama Guard, a model intended to help developers avoid producing potentially dangerous outputs. It has been trained on publicly available datasets.