Trustworthy AI at Enterprise Scale: Securing Retrieval Augmented Generation Applications in AWS and the Hybrid Cloud
Authors: Micki Boland Technologist, and contributor Paul Ardoin Manager AWS Partner Cloud Security
Use Case Focus: Generative AI Retrieval Augmented Generation for In-House Cybersecurity Programs
Executive Summary
Generative AI is rapidly reshaping enterprise cybersecurity operations, enabling organizations to extract intelligence, not just signals, from vast volumes of security telemetry, threat intelligence, identity data, and operational documentation. Among the most impactful architectural patterns enabling this shift is Retrieval Augmented Generation (RAG), which combines large language models (LLMs) with proprietary knowledge sources to deliver context-aware, explainable, and actionable insights at machine speed.
Yet as enterprises deploy GenAI LLM RAG applications across hybrid cloud environments, blending on-premises data lakes with services such as Amazon Bedrock, Amazon SageMaker, and OpenSearch, they also introduce new and asymmetric risk. Prompt injection, data poisoning, model inversion, and agentic exploitation now target not just applications, but the intelligence layer itself. Authoritative frameworks such as NIST AI RMF, MITRE ATLAS, and the OWASP Top 10 for LLM Applications (2025) confirm that AI workloads demand security architectures beyond traditional DevSecOps.
This paper presents a practical, enterprise-grade path to securing GenAI LLM RAG applications, anchored in defense-in-depth, zero trust, and continuous observability. Using Check Point CloudGuard Network Security, CloudGuard WAF, and AI-native telemetry integrations (including Lakera Guard), organizations can protect hybrid RAG pipelines while enabling innovation at scale. The goal is not merely compliance or prevention, but operational resilience, trusted AI systems that elevate cybersecurity teams from alert triage to strategic risk leadership.
To read the full paper, please download here.