How confidential computing works

When you send data to a cloud server, it is encrypted in transit and at rest. But during processing it sits in RAM unencrypted, where anyone with access to that server - cloud admins, malicious insiders, someone with physical access - could theoretically read it.

Confidential computing fixes this. Your data stays encrypted even while being processed, using hardware-enforced isolation called a Trusted Execution Environment (TEE).

The key concepts

Trusted Execution Environments (TEEs) are hardware-secured areas where the processor itself enforces isolation. Even the cloud provider's admins can't peek inside. Intel SGX, AMD SEV-SNP, and NVIDIA's H100 confidential computing all work this way.

Remote attestation lets you verify, before sending sensitive data, that you're actually talking to a real TEE running the expected code. The secure processor cryptographically signs a measurement of what's running. You check that signature against certificates from the hardware vendor.
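The attestation check above can be sketched in a few lines. This is a toy model: the names, the HMAC "signature", and the expected measurement are all illustrative. Real attestation uses an asymmetric signature (e.g. ECDSA) chained to the hardware vendor's certificates, not a shared key.

```python
import hashlib
import hmac

# Hypothetical expected measurement: a hash of the code we expect the TEE
# to be running (real schemes measure the loaded enclave image).
EXPECTED_MEASUREMENT = hashlib.sha384(b"inference-server-v1.2 binary").hexdigest()

# Stand-in for the hardware root of trust. Illustrative only: real systems
# use vendor-issued asymmetric keys, never a shared secret like this.
VENDOR_KEY = b"vendor-root-key (illustrative)"

def sign_report(measurement: str) -> str:
    """What the secure processor does: sign a measurement of what is running."""
    return hmac.new(VENDOR_KEY, measurement.encode(), hashlib.sha256).hexdigest()

def verify_attestation(measurement: str, signature: str) -> bool:
    """What the client does before sending sensitive data:
    1) the signature must verify (report came from genuine hardware), and
    2) the measurement must match the code we expect to be running."""
    sig_ok = hmac.compare_digest(sign_report(measurement), signature)
    code_ok = hmac.compare_digest(measurement, EXPECTED_MEASUREMENT)
    return sig_ok and code_ok

# A genuine report for the expected code passes...
assert verify_attestation(EXPECTED_MEASUREMENT, sign_report(EXPECTED_MEASUREMENT))

# ...but a validly signed report for tampered code fails the measurement check.
tampered = hashlib.sha384(b"backdoored binary").hexdigest()
assert not verify_attestation(tampered, sign_report(tampered))
```

Note that both checks matter: a valid signature alone only proves you reached real TEE hardware, not that it is running the code you expect.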

End-to-end encryption means data is encrypted on your device, stays encrypted during transit, gets decrypted only inside the TEE, and leaves encrypted. The path from you to the enclave is sealed.
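The sealed path can be illustrated with a minimal sketch. The XOR stream cipher and the `session_key` below are assumptions for illustration: real deployments use authenticated encryption such as AES-GCM, with keys negotiated only after attestation succeeds.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR the data with a SHAKE-256 keystream.
    Illustrative only -- not authenticated, not production crypto."""
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

# Hypothetical session key established with the enclave (e.g. via a key
# exchange bound to the attestation report); the host never sees it.
session_key = b"key known only to client and enclave"

# 1) Encrypted on your device.
prompt = b"sensitive patient record"
ciphertext = keystream_xor(session_key, prompt)

# 2) In transit and on the host, only ciphertext is visible.
assert ciphertext != prompt

# 3) Decrypted only inside the TEE, where the session key lives.
inside_enclave = keystream_xor(session_key, ciphertext)
assert inside_enclave == prompt
```

The point of the sketch is the key placement, not the cipher: because the decryption key exists only inside the enclave, nothing between your device and the TEE can recover the plaintext.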

What TEEs protect against

  • Cloud provider snooping - admins can't access TEE memory
  • Compromised host OS - malware on the server can't reach the enclave
  • Physical attacks - memory encryption defeats cold boot attacks
  • Other tenants - workloads on the same hardware are isolated

What TEEs don't protect against

  • Side-channel attacks (partially mitigated, research ongoing)
  • Bugs in the code running inside the enclave
  • A compromised hardware vendor
  • Skipped attestation - the guarantees above only hold if you actually verify attestation before sending data

Enclava's approach

Enclava uses two TEE-based inference providers:

  • NVIDIA H100 GPUs with hardware memory encryption for high-performance inference
  • Private Mode and Redpill for TEE-protected LLM processing

For details on specific implementations, see NVIDIA enclaves and inference providers.