Red Hat Summit 2024: Summary of Key Announcements

Red Hat Summit 2024 provided the stage for the unveiling of the company’s latest innovations and collaborations.

Red Hat unveiled a series of significant innovations in artificial intelligence (AI) and policy automation at Red Hat Summit 2024 in Denver, Colorado (USA). The introduction of “Policy as Code” in its Red Hat Ansible Automation Platform represents a key advancement in addressing the complexities inherent in AI at scale by providing governance and consistency in the hybrid cloud.

In addition, Red Hat has expanded its Lightspeed portfolio to include generative AI, promoting the efficiency and accessibility of IT tasks. Strategic collaborations with Intel and AMD aim to drive enterprise use of AI, offering end-to-end solutions that span from the data centre to the edge.

Finally, the announcement of Podman AI Lab demonstrates Red Hat’s commitment to democratising AI by providing developers with an intuitive tool to build, test and run AI applications in container environments. These initiatives reflect Red Hat’s continued drive towards AI innovation and simplifying AI adoption in diverse enterprise environments.

Policy as Code

Red Hat has introduced “Policy as Code” as part of its Red Hat Ansible Automation Platform, with the goal of addressing the complexities of artificial intelligence at scale. The capability targets policy automation in the hybrid cloud, bringing governance and greater consistency to both the AI explosion and traditional IT operations. Implementing policy as code enables critical compliance standards to be enforced as AI-generated applications and systems emerge, making automation a strategic component in the evolution of AI.

The challenge of adopting and scaling AI workloads alongside the hybrid cloud creates additional complexity and sprawl. Policy as Code helps bring order to this sprawl, both current and potential, by enforcing those compliance standards automatically as new AI-generated applications and systems appear.
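To make the idea concrete, here is a minimal, hypothetical sketch of policy as code: compliance rules written as ordinary functions and evaluated automatically against a deployment description before it proceeds. The rule names and deployment fields are illustrative only and do not reflect Ansible's actual schema or APIs.

```python
# Hypothetical policy-as-code sketch: each policy is a plain function that
# inspects a deployment description and reports whether it complies.
# Field names and rules are illustrative, not Ansible's real schema.

def require_encryption(deployment):
    """Storage for the workload must be encrypted."""
    return deployment.get("storage_encrypted", False), "storage must be encrypted"

def require_approved_region(deployment, approved=("eu-west-1", "us-east-1")):
    """Workloads may only run in approved regions."""
    return deployment.get("region") in approved, "region must be approved"

POLICIES = [require_encryption, require_approved_region]

def evaluate(deployment):
    """Return the list of policy violations; an empty list means compliant."""
    violations = []
    for policy in POLICIES:
        ok, message = policy(deployment)
        if not ok:
            violations.append(message)
    return violations
```

Because the rules live in code rather than in a manual checklist, the same checks can run unchanged against every new workload, human-written or AI-generated.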

Red Hat Lightspeed cloud portfolio

Red Hat also announced the expansion of Red Hat Lightspeed to include generative AI across its hybrid cloud portfolio. This initiative aims to make IT tasks more efficient and accessible to all skill levels. Red Hat Lightspeed, integrated into platforms such as Red Hat OpenShift and Red Hat Enterprise Linux, leverages AI to improve the productivity and efficiency of teams using these platforms, helping to bridge skills gaps in the industry and simplifying application lifecycle management.

In Red Hat OpenShift, Lightspeed will apply generative AI to simplify application lifecycle management in OpenShift clusters, enabling newer users to build skills faster and experts to use the platform more efficiently. This includes features such as automatic scaling suggestions and tuning recommendations based on usage patterns. Red Hat Enterprise Linux Lightspeed, meanwhile, will help simplify the deployment, management and maintenance of Linux environments, drawing on Red Hat’s decades of experience in enterprise Linux and using generative AI to help customers respond more quickly to common problems and questions.

Collaboration with Intel: From the Data Centre to the Edge

Red Hat and Intel are looking to power enterprise use of artificial intelligence (AI) through Red Hat OpenShift AI. This alliance aims to deliver end-to-end AI solutions that leverage Intel’s AI products, including Intel Gaudi accelerators, Intel Xeon and Core processors, and Intel Arc GPUs. The resulting hybrid cloud infrastructure will enable organisations to develop, train and deploy AI models in a seamless and scalable manner from the data centre to the edge, ensuring interoperability and workload portability.

Red Hat OpenShift AI provides a solid foundation for AI applications in any environment, from the cloud to the edge, optimising support for Intel AI products. This collaboration also involves certifying Intel hardware solutions on Red Hat OpenShift AI, ensuring interoperability and enabling end-to-end AI capabilities. Through the open source approach, Red Hat and Intel aim to accelerate the deployment of AI solutions on any platform, resulting in faster time to market and the creation of cost-effective AI building blocks at scale.

Collaboration with AMD GPUs

Red Hat and AMD will collaborate to expand customer choice in deploying artificial intelligence (AI) workloads. The partnership focuses on enabling the AMD GPU Operator in Red Hat OpenShift AI to provide the processing power needed for AI workloads in the hybrid cloud, making it easier for organisations to adopt AI. By working together, Red Hat and AMD aim to foster innovation and cultivate an environment where AI solutions can be efficiently tailored to unique business needs.

AMD GPUs in Red Hat OpenShift AI enable customers to access, deploy and leverage a validated GPU Operator, streamlining AI workflows and bridging gaps in GPU supply chains. This collaboration combines AMD’s AI hardware expertise with Red Hat’s open source and AI software expertise, providing customers with validated tools, testing and enhancements for efficient AI deployment. In addition, this collaboration is expected to drive AI innovation by offering customers a greater choice of GPU resources in Red Hat’s open hybrid cloud environments.
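As a rough illustration of what a GPU Operator provides, workloads on a Kubernetes-based platform such as OpenShift typically request GPUs through an extended resource advertised by the Operator's device plugin. The sketch below assumes the `amd.com/gpu` resource name used by AMD's device plugin; the pod and image names are placeholders, not Red Hat or AMD artefacts.

```yaml
# Illustrative pod spec: requesting one AMD GPU via the extended resource
# surfaced by the GPU Operator. Names and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: rocm-training-job
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: example.com/my-team/rocm-trainer:latest  # hypothetical image
      resources:
        limits:
          amd.com/gpu: 1  # one AMD GPU, scheduled by the device plugin
```

The scheduler then places the pod only on nodes where the Operator has detected and advertised AMD GPUs, which is what makes the validated Operator the key to streamlining these workflows.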

NVIDIA NIM Microservices Integration

Alongside its AMD collaboration, Red Hat announced the upcoming integration of NVIDIA NIM microservices into Red Hat OpenShift AI, its open source AI/ML hybrid cloud platform. This collaboration will enable users to combine AI models trained using Red Hat OpenShift AI with NVIDIA NIM microservices to develop generative AI applications on a single trusted MLOps platform. This will facilitate the integration and management of multiple AI deployments, providing scaling, integrated monitoring and enterprise security for a smooth transition from prototyping to production.

The integration will provide a simpler way to deploy NVIDIA NIM across common workflows, ensuring greater consistency and easier management. It will also enable integrated scaling and monitoring of NVIDIA NIM deployments in coordination with other AI models in hybrid cloud environments, as well as enterprise-grade security, support and stability for a smooth transition to production. NVIDIA NIM microservices are designed to accelerate enterprise deployments of generative AI by providing scalable, integrated capabilities on customer infrastructure or in the cloud using industry-standard application programming interfaces.

Podman AI Lab

Finally, Red Hat announced Podman AI Lab, an extension to Podman Desktop that enables developers to build, test and run generative AI-powered applications in containers through an intuitive graphical interface on their local workstation. This helps democratise generative AI, giving developers the convenience, simplicity and cost efficiency of a local development experience while maintaining ownership and control over sensitive data. Podman AI Lab provides a familiar, easy-to-use environment in which developers can apply AI models to their code and workflows safely and more securely, without costly infrastructure investments or extensive AI expertise.

With AI-enabled applications on the rise, Podman AI Lab includes a recipe catalogue with sample applications that help developers get started with some of the most common use cases for large language models (LLMs), such as chatbots, text summarisers, code generators, object detection and audio-to-text transcription. These examples provide an entry point for developers to review the source code, see how an application is built, and learn best practices for integrating their own code with an AI model.
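A chatbot recipe of this kind usually boils down to sending chat messages to a model served locally in a container. The sketch below assumes, purely for illustration, that the local model is exposed through an OpenAI-style chat-completions endpoint at `http://localhost:8000/v1`; the endpoint URL and the `local-model` name are placeholders, not documented Podman AI Lab values.

```python
# Hypothetical sketch of a local chatbot client. Assumes (as an illustration
# only) an OpenAI-style endpoint served by a local container; the URL and
# model name below are placeholders, not Podman AI Lab specifics.
import json
import urllib.request

def build_chat_request(prompt, model="local-model"):
    """Assemble an OpenAI-style chat-completions payload for a single prompt."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt, base_url="http://localhost:8000/v1"):
    """Send the prompt to the assumed local endpoint and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the model runs in a local container, prompts and data never leave the workstation, which is the ownership-and-control benefit the announcement highlights.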