Deploying Plane to Production

A production setup of Plane involves a number of pieces:

  • A Postgres database
  • One or more Plane controller deployments
  • One or more Plane drone machines
  • One or more Plane proxies
  • One or more Plane ACME DNS-01 receivers

Refer to the architecture overview for a description of how these pieces fit together.

Prerequisite knowledge

Plane’s goal is to give developers a simple and useful abstraction over a number of technologies, including OCI containers, Linux cgroups, ACME, TLS, and DNS.

We recommend having at least cursory knowledge of these technologies before deploying Plane in production.

If you want to use Plane in your own cloud without the burden of operating it, Jamsocket provides a managed platform for session backends that runs on top of Plane.

Jamsocket is built by the team that builds Plane, and can manage drones in your own AWS account. See Plane vs. Jamsocket for a comparison of the two.

Postgres database

The source-of-truth for all persisted information in Plane is a Postgres database. The database also acts as a real-time broadcast message bus between controller instances.

Plane is only as reliable as the database it uses, so we recommend using a High Availability setup to avoid the database becoming a single point of failure.

Most core Plane functionality (including spawning backends and authorizing new clients to connect to them) flows through the database, so the latency between the controller(s) and the database can have a material impact on Plane’s performance.
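As a quick sanity check, you can verify connectivity and get a rough feel for round-trip latency from a controller host using psql. The connection string below is a placeholder; substitute your own DSN:

```shell
# Verify that the controller host can reach Postgres. The DSN is a
# placeholder -- replace it with your own.
DATABASE_URL="postgres://plane:secret@db.internal:5432/plane"

psql "$DATABASE_URL" -c "SELECT 1;"

# Time 10 sequential round trips as a rough latency estimate.
time (for i in $(seq 1 10); do psql "$DATABASE_URL" -qAtc "SELECT 1;" >/dev/null; done)
```

If the total time is more than a few hundred milliseconds, consider moving the controller closer (network-wise) to the database.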

Controller deployments

The Plane controller is a stateless HTTP service, and can be deployed like a regular twelve-factor HTTP application server.

This can be done with Docker, Kubernetes, a managed container runtime, or just by running the plane controller process on a Linux machine.
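On a plain Linux machine, this could be a systemd unit along these lines. This is a sketch only: the binary path, subcommand, and environment variable name are assumptions; check the Plane documentation for your version's actual configuration:

```ini
# /etc/systemd/system/plane-controller.service
# Sketch: binary path, subcommand, and environment variable name are
# assumptions -- consult the Plane docs for your version.
[Unit]
Description=Plane controller
After=network-online.target
Wants=network-online.target

[Service]
# Connection string for the Postgres database (placeholder DSN).
Environment=PLANE_DATABASE_URL=postgres://plane:secret@db.internal:5432/plane
ExecStart=/usr/local/bin/plane controller
Restart=on-failure

[Install]
WantedBy=multi-user.target
```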

If you are exposing the controller to the public internet, you should configure a reverse proxy to restrict traffic to the /pub/* path prefix.

Here’s an example of what this could look like if you use nginx as a reverse proxy:

http {
    upstream plane_controller {
        # Address where the Plane controller is listening
        # (adjust the host and port to your deployment).
        server 127.0.0.1:8080;
    }

    server {
        listen 443 ssl;

        ssl_certificate /path/to/your/certificate.pem;
        ssl_certificate_key /path/to/your/private/key.pem;

        location /pub/ {
            proxy_pass http://plane_controller;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location / {
            return 404;
        }
    }
}

Drone machines

Unlike the controller, the Plane drone is not a simple stateless web server; it expects to run on a machine (or virtual machine) that exists for the dedicated purpose of being a drone. As such, it’s not compatible with being run in Kubernetes, which abstracts away the underlying machine.

Instead, the preferred way to run a drone is to create a new (virtual) machine with Docker installed, and run the Plane drone Docker image on it. The drone will connect to the controller, and register itself as available to run backends.
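A minimal sketch of this, mounting the host’s Docker socket so the drone can launch backend containers. The image name, subcommand, and flag below are all assumptions; verify them against the Plane documentation for your version:

```shell
# Sketch: run the Plane drone on a dedicated machine. The image name,
# subcommand, and controller flag are assumptions -- check the Plane
# docs for your version. The socket mount lets the drone start backends
# via the host's Docker daemon.
docker run -d \
  --name plane-drone \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ghcr.io/jamsocket/plane \
  plane drone --controller-url https://plane-controller.internal.example.com
```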

If your application allows users to run untrusted code in backends (including code generated by an LLM), you should take additional precautions:

  • Running code in a hardened runtime like gVisor.
  • Configuring network access to the minimum required for your app.
  • Adding instrumentation to the drone machine and setting up alerts for anomalous behavior.
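As an illustration of the first point: with gVisor installed and registered as a Docker runtime, containers can be sandboxed by selecting the runsc runtime. How this integrates with Plane’s spawn configuration depends on your setup:

```shell
# With gVisor installed and registered as a Docker runtime, select the
# runsc runtime to sandbox a container:
docker run --rm --runtime=runsc busybox echo "sandboxed"
```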


Proxies

Proxies should run close (network-wise) to the drones they are proxying for, to minimize latency. Proxies need to have a network configuration that allows them to connect to arbitrary ports on the drones in their cluster.

Proxies also need to be able to accept incoming connections from the public internet on port 443, and forward them to the drones. Either the A record or an equivalent CNAME record for <cluster name> should point to the proxy. Additionally, if subdomains are used, the A record or CNAME record for *.<cluster name> should also point to the proxies for that cluster.

The way you will configure this depends on whether you have more than one proxy, and whether you are using a managed load balancer.

If you have only one proxy, you can give it a static IP address, and point the A record for <cluster name> to that IP address.

If you have more than one proxy, you can use a managed network load balancer, and point the A record for <cluster name> to the load balancer’s IP address (or use a CNAME record provided by the load balancer).
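In zone-file terms, the records for a hypothetical cluster c1.example.com served by a proxy (or load balancer) at 203.0.113.10 might look like this. All names and addresses are illustrative:

```
; Illustrative zone entries -- substitute your own cluster name and IP.
c1.example.com.    IN  A  203.0.113.10   ; the proxy or load balancer
*.c1.example.com.  IN  A  203.0.113.10   ; wildcard for backend subdomains
```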

Since Plane terminates TLS, your load balancer only needs to operate at OSI layer 3 or 4. A layer 7 load balancer is technically possible, but it will not be efficient and is not officially supported.

ACME DNS-01 receivers

Obtaining a certificate involves proving to a third party that you control the domain you are requesting a certificate for.

Plane implements the ACME DNS-01 challenge type to obtain a TLS certificate for each proxy. Each proxy generates its own private key, which never leaves that proxy.

For this to work, the CNAME record for the _acme-challenge.<cluster name> subdomain needs to point to a domain served by the Plane ACME DNS-01 receiver. The NS record of that domain needs to point to a domain whose A record is set to the public IP of the Plane ACME DNS-01 receiver. Port 53 (both TCP and UDP) on that IP must be open to the public internet.
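As an illustration, for a cluster c1.example.com with the DNS-01 receiver reachable at 198.51.100.20, the records might be arranged like this. All names and addresses are placeholders, and the exact scheme depends on your configuration:

```
; Illustrative records only -- adapt names and IPs to your setup.
_acme-challenge.c1.example.com.  IN  CNAME  c1.acme.example.com.
acme.example.com.                IN  NS     dns01.example.com.
dns01.example.com.               IN  A      198.51.100.20  ; DNS-01 receiver
```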

Plane’s built-in DNS server exists only to serve the ACME DNS-01 challenge, which is required for proxies to update their certificates.

As an alternative to setting up Plane’s DNS server, you can obtain certificates for your application on your own and pass them in to the proxies on startup. Note that under this approach, Plane is not able to refresh certificates on its own.