# Deploy Containers
Deploy a container application on the Privasys Developer Platform inside Enclave OS Virtual (TDX).
Container applications run inside Enclave OS Virtual on Intel TDX hardware, providing hardware-level memory encryption and attestation for standard Docker/OCI images.
## Prerequisites
- A GitHub account (for authentication)
- A publicly accessible container image (from GitHub Container Registry, Docker Hub, or any OCI registry)
## Submitting a container application

- Go to Dashboard > New Application
- Select Container as the application type
- Provide the container image reference (e.g. `ghcr.io/org/app:v1.0.0`)
- Configure the deployment settings (see below)
- Submit the application
## Container settings
| Setting | Description |
|---|---|
| Image reference | Full image URI including tag or digest. Use a digest (@sha256:...) for reproducibility. |
| Port | The port your container listens on. Only one HTTP port is exposed through the RA-TLS proxy. |
| Environment variables | Key-value pairs injected at runtime. Secrets are delivered over the RA-TLS channel. |
| Encrypted storage | Toggle to enable an encrypted filesystem volume for persistent data. |
## What happens on deployment
When a container application is deployed:
- Enclave OS Virtual boots on a TDX-enabled node
- The UEFI firmware measures the OS image into TDX RTMR registers
- The container image is pulled over an attested channel
- The container starts with the configured port, environment variables, and storage
- An RA-TLS proxy terminates TLS in front of your container, serving certificates that include TDX attestation evidence
- The deployment is complete when the container passes a health check
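The final step, waiting for the health check to pass, amounts to a retry loop. A minimal sketch of such a loop is below; `wait_healthy` and the `probe` callback are illustrative helpers, not part of the platform's API.

```python
import time

def wait_healthy(probe, retries=10, delay=2.0):
    """Poll `probe` (a zero-argument callable that returns True once the
    container answers its health check) until it succeeds or the retry
    budget is exhausted. Returns True on success, False otherwise."""
    for _ in range(retries):
        if probe():
            return True
        time.sleep(delay)
    return False

# Demonstrate with a fake probe that succeeds on the third call:
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

ready = wait_healthy(fake_probe, retries=5, delay=0.0)
```

In practice the probe would issue an HTTP request to the container's configured port through the proxy; the loop structure is the same.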
## Attestation for containers
Container deployments are attested through TDX RTMR registers:
| Register | Measures |
|---|---|
| RTMR[0] | UEFI firmware |
| RTMR[1] | OS kernel and initrd |
| RTMR[2] | OS configuration and runtime |
| RTMR[3] | Application layer (container image digest, environment hash) |
Clients can verify the full boot chain from firmware through to your application. The RA-TLS certificate includes quotes signed by the TDX hardware, which can be validated against Intel's provisioning certificates.
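TDX RTMRs are SHA-384 registers that grow by hash-chaining: each extend replaces the register with the hash of its old value concatenated with the new measurement. The sketch below replays a hypothetical RTMR[3] from two application-layer events; the event digests are made up for illustration and are not real platform measurements.

```python
import hashlib

def rtmr_extend(rtmr: bytes, measurement: bytes) -> bytes:
    """One RTMR extend step: new_value = SHA-384(old_value || measurement).
    Registers start as 48 zero bytes."""
    return hashlib.sha384(rtmr + measurement).digest()

# Hypothetical application-layer events (illustrative placeholders):
image_digest = hashlib.sha384(b"container image digest").digest()
env_hash = hashlib.sha384(b"environment hash").digest()

rtmr3 = bytes(48)  # initial register value
for event in (image_digest, env_hash):
    rtmr3 = rtmr_extend(rtmr3, event)
```

A verifier that knows the expected event log can replay the chain the same way and compare the result against the RTMR[3] value reported in the quote.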
## Image best practices

**Pin to a digest.** Tags like `:latest` or `:v1` can change. Using a digest (`@sha256:abc123...`) ensures that attestation measurements remain stable and verifiable.
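The digest-pinning rule can be enforced with a quick check before submission. The regex here is an illustrative simplification of the full OCI reference grammar, not an exhaustive parser.

```python
import re

# A reference is digest-pinned if it ends in "@sha256:" followed by
# 64 lowercase hex characters.
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def is_digest_pinned(image_ref: str) -> bool:
    return bool(DIGEST_RE.search(image_ref))

pinned = is_digest_pinned("ghcr.io/org/app@sha256:" + "ab" * 32)
floating = is_digest_pinned("ghcr.io/org/app:latest")
```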
**Minimise the image.** Smaller images reduce the attack surface. Use multi-stage builds and distroless or scratch base images where possible.

**Do not embed secrets in the image.** Use environment variables delivered via the platform's encrypted channel instead.
## Encrypted storage
When encrypted storage is enabled, the container receives a mounted volume that is:
- Encrypted at rest using keys derived inside the enclave
- Bound to the enclave identity (sealed data)
- Transparent to the application (standard filesystem operations)
Data persists across container restarts on the same enclave but is inaccessible outside that specific enclave instance.
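One way to picture keys "derived inside the enclave and bound to the enclave identity" is an HKDF-style extract/expand from enclave-held key material. This is a hypothetical sketch with HMAC-SHA-256; the platform's actual derivation scheme is not documented here, and `sealing_key` stands in for secret material only that enclave instance can obtain.

```python
import hashlib
import hmac

def derive_volume_key(sealing_key: bytes, volume_id: bytes) -> bytes:
    """HKDF-like two-step derivation (illustrative, not the platform's
    scheme): extract a pseudorandom key from the enclave-bound sealing
    key, then expand it with the volume identity."""
    prk = hmac.new(b"encrypted-storage-v1", sealing_key, hashlib.sha256).digest()
    return hmac.new(prk, volume_id + b"\x01", hashlib.sha256).digest()

# Different enclave identities yield different keys for the same volume,
# which is why the data is unreadable outside the original enclave:
key_a = derive_volume_key(b"enclave-A sealing key", b"volume-1")
key_b = derive_volume_key(b"enclave-B sealing key", b"volume-1")
```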
## Networking
Traffic to and from the container flows through the RA-TLS proxy:
- Inbound: Clients connect over HTTPS. The proxy terminates TLS using the RA-TLS certificate (which embeds the TDX attestation quote). Requests are forwarded to your container on the configured port.
- Outbound: The container can make outbound connections. DNS resolution and egress work normally.
## MCP tool discovery
Container applications can be exposed as MCP tool servers by adding a privasys.json manifest to their GitHub repository. When the platform detects this file, it reads the declared tools and surfaces them in the Developer Platform's AI Tools tab.
### Adding a `privasys.json` manifest
Create a `privasys.json` file in the root of your repository:

```json
{
  "container_mcp": {
    "tools": [
      {
        "name": "browse",
        "description": "Browse a URL and return the page content.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "url": {
              "type": "string",
              "description": "The URL to browse."
            }
          },
          "required": ["url"]
        }
      }
    ]
  }
}
```

Each tool in the `tools` array follows the standard MCP tool format: a `name`, a human-readable `description`, and a JSON Schema `inputSchema` for the parameters.
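Because the manifest is only read at submission time, it is worth checking its structure before committing. The validator below is illustrative, not the platform's own; it only checks the fields named above (`container_mcp.tools` and each tool's `name`, `description`, and `inputSchema`).

```python
import json

def validate_manifest(text: str) -> list[str]:
    """Return a list of structural problems in a privasys.json manifest
    (empty list means the basic shape looks right)."""
    errors = []
    doc = json.loads(text)
    tools = doc.get("container_mcp", {}).get("tools", [])
    if not tools:
        errors.append("container_mcp.tools is missing or empty")
    for i, tool in enumerate(tools):
        for field in ("name", "description", "inputSchema"):
            if field not in tool:
                errors.append(f"tools[{i}] is missing '{field}'")
    return errors

manifest = """{"container_mcp": {"tools": [
  {"name": "browse",
   "description": "Browse a URL and return the page content.",
   "inputSchema": {"type": "object",
                   "properties": {"url": {"type": "string"}},
                   "required": ["url"]}}]}}"""
problems = validate_manifest(manifest)
```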
### How it works
- When you create or update a container application with a GitHub commit URL, the platform checks the repository for a `privasys.json` file.
- If `container_mcp` is present, the tool manifest is stored alongside the application.
- The AI Tools tab in the Developer Platform displays your container's tools, parameter schemas, and connection instructions.
- AI agents can discover and call your container's API over attested connections — the same TDX attestation that protects all container traffic.
No code changes to your container are required. The privasys.json file is read by the platform at submission time, not at runtime.
### Comparison with WASM MCP

| | WASM | Container |
|---|---|---|
| Tool source | Auto-derived from WIT types and doc comments | Declared in privasys.json manifest |
| Schema format | Generated JSON Schema from WIT | Hand-written JSON Schema |
| Tool invocation | Connect RPC to WASM function | HTTP to container endpoint |
| Attestation | SGX (per-function code hash) | TDX (VM boot chain) |
See MCP Tools for the WASM-specific MCP reference.