Recent AI advancements have reshaped the security landscape, adding complexity to how organizations secure their information. The challenge extends beyond safeguarding infrastructure and data integrity; we must now also protect data as it interacts with third-party large language models (LLMs).
We have to answer numerous questions, including:
🔶 Do these tools use our data to train their own models?
🔶 Are these tools storing our data, and if so, how secure is it?
🔶 Who has access to it?
🔶 How do we ensure customers only see their own data when interacting with Retrieval-Augmented Generation (RAG) products?
Ontra’s security process
Securing any technology starts with the same security fundamentals: following industry best practices, securing infrastructure and data, and maintaining the principle of least privilege. We then layer in any technology-specific considerations as we build.
At Ontra, we bake security into every stage of the process and hold ourselves to leading industry standards through annual audits and tests.
Industry best practices
We maintain a SOC 2 Type 2 report and hold ISO 27001:2022 certification. Annual penetration testing helps us confirm that we follow industry best practices throughout our infrastructure and applications. Ontra is also actively working toward ISO 42001 certification in 2025 to demonstrate that our AI use is held to the same standard as the rest of our applications and services.
Infrastructure security
A key component of our strategy is ensuring that the foundation on which our web applications and AI tooling run is secure. Our security team works closely with our infrastructure team to secure Ontra’s infrastructure: we review infrastructure designs, scan static code to identify potential vulnerabilities, and implement tooling to monitor for misconfigurations, potential threats, and configuration drift.
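To make the misconfiguration-monitoring idea concrete, here is a minimal sketch of one such check, verifying that S3 buckets block public access. It is illustrative only, not Ontra’s actual tooling:

```python
import boto3
from botocore.exceptions import ClientError

def bucket_blocks_public_access(bucket: str) -> bool:
    """Return True only if all four S3 public-access blocks are enabled."""
    s3 = boto3.client("s3")
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)[
            "PublicAccessBlockConfiguration"
        ]
    except ClientError:
        # No public-access-block configuration exists at all:
        # treat that as a misconfiguration worth flagging.
        return False
    return all(cfg.values())

def scan_buckets(bucket_names):
    """Emit an alert for any bucket that may allow public access."""
    for name in bucket_names:
        if not bucket_blocks_public_access(name):
            print(f"ALERT: bucket {name!r} is missing public-access blocks")
```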
Consistency across our platform
This same thorough process applies to every facet of Ontra’s offerings. Each new service, feature, and web application undergoes the same security scrutiny. When a team begins a new service or application, Ontra’s Security team reviews the design documents and provides an early security assessment. We then add security components at every phase, including scanning, threat assessments, monitoring, alerting, and annual penetration testing.
Data security
Beyond the build and design process, we secure the data itself. Data is transmitted securely using industry-standard protocols. Upon arrival in Ontra’s application, data is scanned and validated, and it remains inaccessible until it comes back clean. This approach helps us avoid cross-site scripting and code-injection attacks while also preventing anyone from accessing or downloading potentially malicious files. Data is encrypted at rest, and access is locked down according to the principle of least privilege.
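A minimal sketch of that scan-before-access flow is below. The `store` and `scanner` interfaces are hypothetical stand-ins for a real object store and scanning service, not Ontra’s actual implementation:

```python
from enum import Enum

class IngestResult(Enum):
    CLEAN = "clean"
    REJECTED = "rejected"

def ingest_upload(file_bytes: bytes, store, scanner) -> IngestResult:
    """Quarantine an upload until scanning and validation both pass."""
    key = store.put_quarantined(file_bytes)  # written, but not yet readable by users
    if not scanner.is_clean(file_bytes):     # antivirus / content validation
        store.delete(key)                    # never promoted, never downloadable
        return IngestResult.REJECTED
    store.promote(key)                       # now encrypted at rest, least-privilege access
    return IngestResult.CLEAN
```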
Securing Ontra’s GenAI products
Access control
Access to the systems used to develop Ontra’s GenAI products and LLM integrations is governed by Ontra’s existing, robust access control methodologies, including zero trust, role-based access control (RBAC), single sign-on (SSO), and multifactor authentication (MFA).
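As a rough illustration of the RBAC piece, a check like the sketch below sits behind every request; the roles and permission names here are hypothetical, not Ontra’s actual policy:

```python
# Illustrative role-to-permission mapping (hypothetical names).
ROLE_PERMISSIONS = {
    "viewer":       {"prompts:read"},
    "ml-engineer":  {"prompts:read", "prompts:propose"},
    "prompt-admin": {"prompts:read", "prompts:propose", "prompts:approve"},
}

def is_authorized(user_roles: set[str], permission: str) -> bool:
    """Grant access only if one of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

assert is_authorized({"viewer"}, "prompts:read")
assert not is_authorized({"viewer"}, "prompts:approve")
```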
Prompt control
To maintain tight control over LLM interactions, we invest heavily in prompt governance. Engineers collaborate closely with Ontra’s internal specialists to author and modify prompts. Prompts and prompt changes are verified through Ontra’s LLM Evaluation Platform and are subject to peer review, and we monitor prompt performance carefully in production systems. Strict access controls and the principle of least privilege ensure that only authorized personnel can access or modify prompts throughout their lifecycle.
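A hedged sketch of how such a deployment gate might look is below; `run_eval_suite` and the 0.95 threshold are hypothetical stand-ins for the evaluation step, not the actual interface of Ontra’s LLM Evaluation Platform:

```python
from dataclasses import dataclass, field

@dataclass
class PromptChange:
    prompt_id: str
    new_text: str
    approvals: list = field(default_factory=list)  # peer reviewers who signed off

def can_deploy(change: PromptChange, run_eval_suite,
               min_score: float = 0.95) -> bool:
    """A prompt change ships only after peer review and a passing evaluation."""
    if not change.approvals:
        return False                               # peer review is mandatory
    score = run_eval_suite(change.prompt_id, change.new_text)
    return score >= min_score                      # block regressions pre-production
```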
Avoiding hallucinations
To further reduce risk, we actively mitigate hallucinations through a combination of output validation guardrails and human-in-the-loop processes. Output validation begins with defining and enforcing a specific data structure for all AI-generated outputs, which reduces the likelihood of spurious or unreliable responses.
Guardrails also perform rigorous data validation checks and assess the output’s confidence level to ensure its appropriateness and accuracy. When necessary, a human reviewer verifies responses and addresses any edge cases.
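The sketch below shows what structured-output validation with a confidence gate can look like in practice. The `ClauseSummary` schema and the 0.8 review threshold are illustrative assumptions, not Ontra’s production guardrails:

```python
from pydantic import BaseModel, Field, ValidationError

class ClauseSummary(BaseModel):
    """Illustrative schema that every model output must conform to."""
    clause_type: str
    summary: str
    confidence: float = Field(ge=0.0, le=1.0)

def validate_output(raw_json: str, review_threshold: float = 0.8):
    """Reject malformed output; route low-confidence output to a human."""
    try:
        result = ClauseSummary.model_validate_json(raw_json)
    except ValidationError:
        return None, "reject"          # structure check failed outright
    if result.confidence < review_threshold:
        return result, "human_review"  # edge case: a human verifies first
    return result, "accept"
```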
Additional RAG security
For RAG products, we take additional measures to ensure data security, maintain user privacy, and prevent prompt injection attacks. Ontra ensures that all data access controls are maintained outside of the GenAI system. By delegating these controls to a secure and well-established external system, we prevent third-party LLMs from inadvertently bypassing access restrictions or introducing vulnerabilities.
The GenAI model operates in a sandboxed environment purely as a processing and inference layer, while permission enforcement is handled upstream by our robust access control frameworks. This separation of concerns prevents unauthorized data exposure and cross-account data leakage, even under attack scenarios. It also adds an essential layer of protection against malicious inputs or crafted prompts attempting to exploit system vulnerabilities.
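A minimal sketch of that separation of concerns is below; `search_index` and `llm_complete` are hypothetical stand-ins for a permission-aware retrieval layer and a model call:

```python
def answer_question(user, question: str, search_index, llm_complete) -> str:
    """Retrieval is scoped to the caller's account before the model sees anything."""
    # Permission enforcement happens here, outside the model: the filter is a
    # hard query constraint, not an instruction the model could be talked out of.
    docs = search_index.query(question, filter={"account_id": user.account_id})
    context = "\n\n".join(doc.text for doc in docs)
    # The model acts purely as an inference layer over pre-authorized context.
    return llm_complete(f"Context:\n{context}\n\nQuestion: {question}")
```

Because the account filter is applied upstream, even a prompt injection that fully controls the model’s behavior cannot retrieve another customer’s documents.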
By combining these layers of oversight, pre- and post-processing, and multi-level human and machine validation, we ensure that our GenAI products remain secure, consistent, and aligned with Ontra’s rigorous data privacy and security standards.
Zero-data retention
At Ontra, we place the utmost importance on protecting customer data and maintaining confidentiality while using best-in-class third-party LLM providers. We carefully select each provider and integrate it into our systems in a way that minimizes risk.
Ontra has negotiated enterprise agreements with all of its third-party LLM providers that include a comprehensive zero-data retention policy. Under this policy, request and response bodies exchanged between the third-party LLM provider and Ontra are not persisted to any logging mechanism by the LLM provider and exist only in the memory of the LLM provider’s computing environment for the duration of serving the request.
This means that sensitive customer data shared with a third-party LLM provider in connection with the services is not stored within the LLM provider’s systems, keeping the risk of data exposure low.
A constant focus on security fundamentals
Using AI creates new security challenges, but data can be secured as long as we continue to apply well-known security fundamentals and best practices to these new technologies. We embrace defense-in-depth so that every layer provides a measure of security as a failsafe. When using AI, securing the infrastructure is as important as securing the data and its transmission. Baking in security at every step is a must.