Commitment to Privacy, Compliance and Security in AI
OORT Labs maintains formal privacy, compliance and security policies across every AI capability in OORT Flows, meeting regulations in Brazil (LGPD, Bill 2338/2023) and abroad (GDPR, EU AI Act, CCPA), along with leading international technical standards.
This document describes the AI governance program at OORT Labs and should be read together with the OORT Flows Privacy Policy and the Terms of Use. It does not replace legal advice specific to the Customer's use case.
1. Our Commitment
OORT Labs builds enterprise AI agents on the premise that trust, privacy and security are not optional. Every AI capability in OORT Flows (assistants, autonomous agents and multi-agent orchestrations) is designed, operated and audited under a formal Responsible AI program.
The program covers the full system lifecycle (use case definition, data collection and processing, model provider selection, deployment, production monitoring and decommissioning), ensuring compliance with applicable regulations in Brazil and abroad.
2. Responsible AI Principles
- Human-centric: AI systems support human decisions and preserve meaningful oversight in high-impact cases.
- Privacy by design: personal data protection is a product requirement, not an add-on.
- Security by default: encryption in transit and at rest, tenant isolation and least privilege.
- Transparency and explainability: flows, models and data sources are auditable by the Customer.
- Fairness and non-discrimination: periodic assessment of bias and impact on protected groups.
- Accountability: clear roles between OORT (Processor) and the Customer (Controller).
- Technical robustness and safety: regression testing, red-teaming and controls against prompt injection, data exfiltration and abuse.
- No use of Customer data to train our models: content processed in flows is not used to train OORT models.
3. Brazilian Regulations
3.1. LGPD: Law No. 13.709/2018
OORT Labs fully adheres to the Brazilian General Data Protection Law. We implement:
- Documented legal bases for each processing purpose (LGPD art. 7 and art. 11);
- Record of Processing Activities (RoPA) and Data Protection Impact Assessment (DPIA) when applicable;
- Designated Data Protection Officer (DPO) and a data subject support channel;
- Response to data subject requests (access, correction, anonymization, portability and erasure) within legal deadlines;
- Notification to ANPD and data subjects in case of a relevant incident.
3.2. Bill 2338/2023: the Brazilian AI Act
Bill 2338/2023, currently before the Brazilian Congress, proposes a regulatory framework for AI in Brazil. OORT Labs anticipates its obligations and structures OORT Flows around its main pillars:
- Risk classification of systems (excessive, high, medium, low) with proportionate controls;
- Preliminary risk assessment and Algorithmic Impact Assessment (AIA) for high-risk use cases;
- Right to explanation of automated decisions that affect the data subject;
- Right to human review of automated decisions with significant impact;
- Right to non-discrimination and bias mitigation;
- Active transparency about the use of AI and clear labeling of synthetic content;
- Internal governance with a designated AI lead, event logging and incident response plan.
OORT Flows ships native mechanisms that help the Customer meet these obligations, including risk-level classification of flows, auditable execution logs, configurable human-approval steps and per-agent usage reports.
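As an illustration only, the governance mechanisms listed above (per-flow risk level, audit logging, human-approval steps) could be modeled roughly as follows. All names here are hypothetical sketches, not the actual OORT Flows API or schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of per-flow governance metadata described in this section.
# Names and fields are illustrative, not the real OORT Flows data model.
@dataclass
class FlowGovernance:
    flow_id: str
    risk_level: str                 # e.g. "excessive" | "high" | "medium" | "low"
    human_approval_required: bool   # configurable human-approval step
    audit_log_enabled: bool = True  # auditable execution logs

    def requires_aia(self) -> bool:
        # High-risk use cases trigger an Algorithmic Impact Assessment (AIA).
        return self.risk_level == "high"

flow = FlowGovernance("credit-review", risk_level="high", human_approval_required=True)
print(flow.requires_aia())
```

The point of the sketch is that risk classification is attached to each flow as data, so downstream controls (logging depth, approval gates, reporting) can be derived from it mechanically.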
3.3. Brazilian Internet Framework and Other Regulations
We also comply with Law 12.965/2014 (Marco Civil da Internet), Decree 8.771/2016 and applicable sectoral regulations (BACEN, CVM, ANS, CFM, OAB, among others) based on each Customer's context.
4. International Regulations
4.1. GDPR: Regulation (EU) 2016/679
For data subjects in the European Economic Area, we fully observe the GDPR:
- Legal bases under art. 6 and art. 9;
- Data Processing Agreements (DPAs) with sub-processors;
- Standard Contractual Clauses (SCCs) and Transfer Impact Assessments for international transfers;
- Fulfillment of data subject rights (access, rectification, erasure, portability, objection, restriction);
- Breach notification within 72 hours to the competent authority, when applicable.
4.2. EU AI Act: Regulation (EU) 2024/1689
OORT Labs aligns OORT Flows with the EU AI Act, the first horizontal AI regulation in the world, using a risk-based approach:
| Risk Category | OORT Flows Posture |
|---|---|
| Unacceptable risk (e.g. social scoring, subliminal manipulation) | Prohibited by design. Such cases cannot be deployed on the Platform. |
| High risk (e.g. HR, credit, critical infrastructure) | Support for AIA, mandatory human oversight, extended logs, technical documentation and Conformity Assessment. |
| Limited risk (e.g. chatbots, content generation) | Mandatory transparency to end users, identification of synthetic content and AI interaction labeling. |
| Minimal risk (e.g. operational automations) | Governance best practices and standard logs. |
For General-Purpose AI (GPAI) models accessed through integrated providers, we require the provider's technical documentation and compliance with transparency obligations under art. 53 et seq. of the Regulation.
4.3. CCPA/CPRA: California, USA
For California residents, we observe the California Consumer Privacy Act (CCPA), as amended by the CPRA, including rights of access, deletion, correction, opt-out of "sale" or "sharing" and limits on the use of sensitive information.
4.4. Other Jurisdictions
Where applicable, we also observe the UK GDPR, PIPEDA (Canada), Law 25.326 (Argentina), Law 1581/2012 (Colombia), APPI (Japan) and PDPA (Singapore), among others.
5. Technical Frameworks and Standards
The OORT Labs governance program is built on internationally recognized frameworks:
- ISO/IEC 42001:2023 (AI Management Systems, AIMS): foundation of our Responsible AI program.
- ISO/IEC 27001 / 27701: information security and privacy management.
- ISO/IEC 23894:2023 (AI risk management).
- NIST AI Risk Management Framework (AI RMF 1.0): the *Govern*, *Map*, *Measure* and *Manage* functions.
- OWASP Top 10 for LLM Applications: controls for prompt injection, data leakage and model supply-chain risks.
- SOC 2 Type II: security, availability and confidentiality controls.
- CIS Benchmarks and MITRE ATLAS: infrastructure hardening and threat modeling for AI systems.
6. Risk Classification and Impact Assessment
Every flow configured by the Customer is assigned a risk level consistent with the EU AI Act and Bill 2338/2023. This classification determines:
- Need for an Algorithmic Impact Assessment (AIA) and/or Data Protection Impact Assessment (DPIA);
- Depth of execution logs and retention period;
- Mandatory human-in-the-loop review points;
- Restrictions on sensitive data categories and on specific model providers;
- Frequency of internal audits and bias testing.
AIA and DPIA templates are available in the OORT Flows Resource Center to support the Customer as Controller.
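The classification-driven controls listed above can be pictured as a simple lookup from risk level to control parameters. The sketch below is illustrative only: the retention periods, audit frequencies and fail-closed default are assumptions for the example, not contractual commitments:

```python
# Illustrative mapping from flow risk level to the controls of section 6.
# All concrete values (days, audit counts) are assumptions for this sketch.
CONTROLS_BY_RISK = {
    "high":   {"aia": True,  "dpia": True,  "log_retention_days": 365,
               "human_in_the_loop": True,  "audits_per_year": 4},
    "medium": {"aia": False, "dpia": True,  "log_retention_days": 180,
               "human_in_the_loop": False, "audits_per_year": 2},
    "low":    {"aia": False, "dpia": False, "log_retention_days": 90,
               "human_in_the_loop": False, "audits_per_year": 1},
}

def controls_for(risk_level: str) -> dict:
    # Unknown or unclassified levels fail closed to the strictest control set.
    return CONTROLS_BY_RISK.get(risk_level, CONTROLS_BY_RISK["high"])
```

Failing closed for unknown levels mirrors the precautionary posture of the EU AI Act's risk-based approach: a flow gets the lighter regime only after it has been explicitly classified.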
7. Governance and Human Oversight
- Internal Responsible AI Committee with representation from Engineering, Security, Legal, Product and the DPO.
- Designated AI technical lead with a direct channel to Customers and authorities.
- Configurable human approval steps in any flow, recording approver identity, decision and rationale.
- Acceptable AI Use Policy binding employees and Customers.
- Mandatory training for employees on privacy, security and AI ethics.
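The human-approval step above records approver identity, decision and rationale. A minimal sketch of such a record, with hypothetical field names rather than the actual OORT Flows schema, might look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical approval record for a human-in-the-loop step (section 7).
# Frozen so the audit trail entry cannot be mutated after creation.
@dataclass(frozen=True)
class ApprovalRecord:
    flow_run_id: str
    approver_id: str
    decision: str      # "approved" | "rejected"
    rationale: str
    timestamp: str     # UTC, ISO 8601

def record_approval(flow_run_id: str, approver_id: str,
                    decision: str, rationale: str) -> ApprovalRecord:
    if decision not in ("approved", "rejected"):
        raise ValueError("decision must be 'approved' or 'rejected'")
    return ApprovalRecord(
        flow_run_id=flow_run_id,
        approver_id=approver_id,
        decision=decision,
        rationale=rationale,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```

Making the record immutable and timestamped in UTC is one common design choice for audit trails that may later be shown to authorities or independent auditors.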
8. Transparency and Explainability
OORT Flows is designed so that the Customer can see what each agent does, how it does it and why:
- Each flow execution generates detailed logs with inputs, intermediate steps, model used, cost, latency and output;
- Each agent exposes the AI provider and model selected by the Customer;
- AI-generated content can be labeled as synthetic where required by applicable regulation;
- The Customer can export auditable logs for presentation to authorities or independent auditors;
- When an automated decision affects a data subject, an accessible explanation of the main criteria used can be generated.
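The execution-log fields listed above (inputs, intermediate steps, model, cost, latency, output) can be pictured as a single exportable record. The structure below is an illustrative sketch, not the real OORT Flows log schema:

```python
import json

# Sketch of one auditable execution-log entry (section 8). Field names and
# values are illustrative assumptions, not the actual OORT Flows format.
log_entry = {
    "flow_id": "invoice-triage",
    "execution_id": "exec-0001",
    "model": "provider/model-name",       # AI provider/model chosen by the Customer
    "inputs": {"document": "invoice.pdf"},
    "steps": ["extract_fields", "validate_totals"],
    "cost_usd": 0.0042,
    "latency_ms": 1830,
    "output": {"status": "needs_review"},
}

# Exportable as JSON for presentation to authorities or independent auditors.
exported = json.dumps(log_entry, indent=2)
```

A machine-readable export of this kind is what makes the "auditable by the Customer" commitment operational: the same record can back an internal audit, a data-subject explanation or a regulator request.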
9. Data Security and Privacy in AI Processing
- Encryption at rest with AES-256 and in transit with TLS 1.2+;
- Tenant isolation across all databases, queues and file storage;
- Integration credentials encrypted with Fernet/AES and rotated periodically;
- Mandatory MFA for administrators, SSO/SAML/OIDC available for enterprise tenants;
- Least privilege applied to users, agents and integrations;
- Zero-data-retention agreements with AI providers where supported, so prompts are not retained by the provider;
- Sensitive data filters (PII, PHI, secrets) applied before sending to external models;
- Continuous security testing: SAST, DAST, SCA, secret scanning and LLM-specific red-teaming;
- Incident response plan with notification SLA aligned to LGPD/GDPR.
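The sensitive-data filter mentioned above, applied before content leaves for an external model, can be sketched as pattern-based redaction. Real deployments use far more robust detection (named-entity recognition, context-aware classifiers); the two patterns here, email and Brazilian CPF, are illustrative only:

```python
import re

# Illustrative sensitive-data filter applied before sending text to an
# external model (section 9). Only two toy patterns; production filters
# for PII/PHI/secrets are much broader.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CPF": re.compile(r"\b\d{3}\.\d{3}\.\d{3}-\d{2}\b"),
}

def redact(text: str) -> str:
    # Replace each match with a typed placeholder, preserving readability.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact maria@example.com, CPF 123.456.789-00"))
# Contact [EMAIL], CPF [CPF]
```

Redacting before the provider call, rather than after, is the key design point: the external model never receives the raw identifiers, which also supports the zero-data-retention posture above.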
10. Data Subject Rights and Contact
Data subjects may exercise their rights at any time. Requests related to AI systems (explanation, human review, contestation) are prioritized by our Responsible AI Committee.
- Data Protection Officer (DPO): dpo@oortlabs.com
- AI technical lead / Responsible AI Committee: ai-governance@oortlabs.com
- General privacy: privacy@oortlabs.com
- Security and incidents: security@oortlabs.com
This Policy is reviewed periodically. Material changes are communicated to the Customer with reasonable notice and reflected in the version information above.