privatemode.ai Threat Profile

Security Model

What privatemode.ai Protects Against

Threat                   | Protection Method
-------------------------|-------------------------------------------------------
Cloud Provider Access    | TEE hardware isolation prevents infrastructure access
Service Provider Access  | End-to-end encryption with client-controlled keys
Data Logging             | Architecturally impossible to log plaintext data
Model Extraction         | Isolated execution prevents model theft
Side Channels            | Timing randomization and memory isolation
Network Interception     | TLS + TEE encryption provides dual protection

Trust Assumptions

  • Hardware manufacturers (Intel, AMD) implement TEEs correctly
  • Cryptographic primitives remain secure
  • Client-side attestation verification is performed (a sketch follows this list)
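
A minimal sketch of what client-side attestation verification could look like, assuming a hypothetical /attestation endpoint, response fields, and a pinned measurement value; the actual privatemode.ai client API and report format may differ.

```python
# Hypothetical sketch of client-side attestation verification.
# The endpoint path, response fields, and pinned measurement are assumptions,
# not the documented privatemode.ai API.
import requests

# Known-good TEE launch measurement, pinned by the client (placeholder value).
EXPECTED_LAUNCH_MEASUREMENT = "<known-good-measurement-hash>"

def verify_attestation(base_url: str) -> bool:
    """Fetch the service's attestation report and compare it to the pinned value."""
    report = requests.get(f"{base_url}/attestation", timeout=10).json()
    if report.get("launch_measurement") != EXPECTED_LAUNCH_MEASUREMENT:
        return False
    # A production client must also verify the hardware vendor's signature chain
    # (e.g. AMD SEV-SNP or Intel TDX certificates) over the report before trusting it.
    return True

if not verify_attestation("https://api.example.com"):
    raise RuntimeError("Attestation failed: do not send sensitive data")
```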

Compliance Framework

Data Protection

  • GDPR Article 25: Privacy by design through TEE architecture
  • GDPR Article 32: Technical measures via hardware encryption
  • HIPAA: Eligible for processing protected health information
  • CCPA: Zero data retention satisfies California privacy requirements

Certifications

  • SOC 2 Type II (in progress)
  • ISO 27001 compliance
  • Regular third-party security audits

Residual Risks

Partially Mitigated

  • Advanced side-channel attacks (research ongoing)
  • Sophisticated timing analysis
  • Hardware vulnerabilities (requires vendor patches)

Operational Considerations

  • Requires proper client-side attestation verification
  • Performance overhead of 5-10% for encryption
  • Dependency on hardware TEE availability

Best Practices

  1. Always verify attestation before sending sensitive data
  2. Use application-level encryption for defense in depth (see the sketch after this list)
  3. Monitor service health and attestation status
  4. Rotate API keys regularly
  5. Implement rate limiting to prevent abuse
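
As an illustration of practice 2, here is a minimal sketch of application-level encryption using the cryptography package's Fernet construction; the key handling and data flow shown are illustrative assumptions, not a documented privatemode.ai integration.

```python
# Sketch: encrypt sensitive payloads at the application layer before they leave
# the process, independent of TLS and TEE encryption (defense in depth).
# Key handling below is illustrative only; load keys from a secrets manager in practice.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # client-controlled key
fernet = Fernet(key)

plaintext = b"internal document: ..."
ciphertext = fernet.encrypt(plaintext)

# Transmit or store only the ciphertext; decrypt on the client side when needed.
assert fernet.decrypt(ciphertext) == plaintext
```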

Return to privatemode.ai Overview