How to Deploy Luminati Proxy Manager on Kubernetes (The Right Way)

Kieran
July 4, 2025
3 min read

So, you've decided to self-host the open-source Luminati Proxy Manager (LPM) on Kubernetes. You've probably found, as I did, that the official documentation is severely lacking and doesn't prepare you for the realities of a production environment.

After running this at a scale of over 4 million requests per day, I've tamed the memory leak, figured out a stable configuration, and built the setup guide I wish I'd had from the start. This is a production-ready, battle-tested approach built around static configuration for an ephemeral environment.

The Core Philosophy: Static & Ephemeral

The biggest mistake you can make is trying to manage the LPM configuration through its UI in a Kubernetes environment. Pods can be restarted or moved at any time. Your configuration needs to be stateless and read-only from the moment the pod starts.

We will achieve this by mounting the entire .luminati.json configuration from a "ConfigMap". This keeps the deployment predictable and puts the configuration in version control alongside your code.

Step 1: The Heart of the Setup - The "ConfigMap"

First, we define the entire proxy configuration in a single "ConfigMap". This includes your authentication details and, most importantly, your list of external proxies, which keeps the setup fully declarative.

Key things to note:

  • The _defaults section contains your authentication. Use a strong, generated lpm_token and password for security.
  • Your external proxies are hardcoded in the ext_proxies array.

# config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: luminati-configmap
  namespace: your-namespace # <-- Change this
data:
  .luminati.json: |
    {
      "_defaults": {
        "customer": "YOUR_CUSTOMER_NAME",
        "lpm_token": "YOUR_SECRET_TOKEN",
        "password": "YOUR_PROXY_PASSWORD"
      },
      "proxies": [
        {
          "ext_proxies": [
            "user1:pass1@provider.com:1234",
            "user2:pass2@provider.com:1235"
          ],
          "port": 24000,
          "preset": "rotating"
        }
      ]
    }
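
Apply it and sanity-check the rendered JSON before any pod reads it:

# Create the ConfigMap, then dump it back to confirm the embedded
# .luminati.json survived the round trip intact.
kubectl apply -f config-map.yaml
kubectl get configmap luminati-configmap -n your-namespace -o yaml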

Step 2: The "Deployment" Manifest

Next, the Deployment. This is fairly standard, but we're including podAntiAffinity and topologySpreadConstraints as a best practice to ensure our pods are spread across different nodes for high availability.

(Note: the manifest below shows only the scheduling configuration. You still need to add the volumeMounts and volumes sections to mount the ConfigMap from Step 1; a sketch follows the manifest.)

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: luminati-proxy-dep
  namespace: your-namespace # <-- Change this
spec:
  replicas: 2 # at least two pods so the rolling restart in Step 4 is zero-downtime
  selector:
    matchLabels:
      app: luminati-proxy-dep
  template:
    metadata:
      labels:
        app: luminati-proxy-dep
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: luminati-proxy-dep
                topologyKey: "kubernetes.io/hostname"
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: "kubernetes.io/hostname"
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: luminati-proxy-dep
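
To wire the ConfigMap from Step 1 into the pod, add container and volume definitions under the same template.spec. Here is a minimal sketch; the image tag and mount path are my assumptions, based on the official Docker image, where LPM runs as root and reads ~/.luminati.json:

# deployment.yaml (continued, under spec.template.spec)
      containers:
        - name: luminati-proxy
          image: luminati/luminati-proxy:latest # pin a specific tag in production
          ports:
            - containerPort: 22999 # admin panel
            - containerPort: 24000 # rotating proxy port
          volumeMounts:
            - name: lpm-config
              mountPath: /root/.luminati.json # assumes LPM reads ~/.luminati.json as root
              subPath: .luminati.json
              readOnly: true
      volumes:
        - name: lpm-config
          configMap:
            name: luminati-configmap

A nice side effect: subPath mounts never receive live ConfigMap updates, so the file is frozen for the pod's lifetime. Config changes ship as a new rollout, which is exactly the static-and-ephemeral behavior we want.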

Step 3: Exposing Ports with "Services"

Instead of one large, confusing Service, a cleaner pattern is to create a separate Service for each port you need to expose. This makes managing traffic and applying rules much simpler. Here, we expose the admin panel and the main proxy port separately.

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: luminati-proxy-admin
  namespace: your-namespace # <-- Change this
spec:
  type: NodePort
  ports:
    - port: 22999
      name: admin-panel
      targetPort: 22999
  selector:
    app: luminati-proxy-dep
---
apiVersion: v1
kind: Service
metadata:
  name: luminati-proxy
  namespace: your-namespace # <-- Change this
spec:
  type: NodePort
  ports:
    - port: 24000
      name: proxy-port
      targetPort: 24000
  selector:
    app: luminati-proxy-dep
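
With both Services applied, it's worth a quick smoke test before moving on. Here's a sketch using port-forwarding so nothing depends on the auto-assigned NodePorts; the target URL is just an example, and depending on your auth settings curl may also need -U user:password:

# Forward the admin panel, then open http://localhost:22999 in a browser.
kubectl port-forward svc/luminati-proxy-admin 22999:22999 -n your-namespace

# In a second shell: forward the proxy port and send a request through it.
kubectl port-forward svc/luminati-proxy 24000:24000 -n your-namespace
curl -v -x http://127.0.0.1:24000 https://example.com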

Step 4: The Secret Weapon - Taming the Memory Leak

Now for the most important part, which you won't find in the official docs. The Luminati Proxy Manager has a severe memory leak. When run at scale, its memory usage will climb continuously until it crashes the pod.

The only reliable, production-safe workaround I found was to kill it before it kills itself.

This CronJob automatically performs a graceful rolling restart of the deployment every 4 hours, before memory can climb to critical levels, and keeps the service stable long-term. It's a necessary evil, but it makes LPM usable in production.

# job.yaml
### This job is a workaround for the LPM memory leak:
### it rolling-restarts the deployment every 4 hours.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-luminati-deployment
  namespace: your-namespace # <-- Change this
spec:
  schedule: "0 */4 * * *" # Runs every 4 hours
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - |
                  echo "Restarting deployment luminati-proxy-dep";
                  kubectl rollout restart deployment luminati-proxy-dep -n your-namespace;
                  echo "Deployment restarted successfully.";
          serviceAccountName: restart-job-sa

(Note: this requires a ServiceAccount, Role, and RoleBinding so the job has permission to restart the deployment. A minimal example follows.)
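
The RBAC below matches the serviceAccountName above; the Role and RoleBinding names are illustrative. kubectl rollout restart works by patching the Deployment, so get and patch are the essential verbs:

# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: restart-job-sa
  namespace: your-namespace # <-- Change this
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: restart-deployment-role
  namespace: your-namespace # <-- Change this
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: restart-deployment-rolebinding
  namespace: your-namespace # <-- Change this
subjects:
  - kind: ServiceAccount
    name: restart-job-sa
    namespace: your-namespace # <-- Change this
roleRef:
  kind: Role
  name: restart-deployment-role
  apiGroup: rbac.authorization.k8s.io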

Conclusion: It Works, But At What Cost?

This setup will give you a stable, production-ready Luminati Proxy Manager deployment in Kubernetes. It's resilient, configured via code, and handles the infamous memory leak.

However, the fact that we need a cron job to forcibly restart the deployment every few hours highlights the core issue: the operational overhead is significant. You end up spending engineering time fighting the tool instead of getting work done.

It was this exact frustration that led me to build ProxySentinel—a modern, stable, and managed alternative where these problems are solved out of the box. If you're tired of wrestling with your tools, give our 7-day free trial a look.

Tags:

kubernetes
luminati
proxy manager
devops
web scraping
bright data