Container-based Installation - Orchestrated Environment
Information
The individual components of PLANTA Project, namely Manager, Worker, Customizing (Database), Web Client, and optionally Pulse, are provided as container images in Docker container format and installed via Helm in a Kubernetes environment.
Manager is responsible for database access as well as business logic, data processing, and API services.
Worker establishes client and web client connections and runs background processes.
Customizing is an initialization job and contains standard customizing and customer-specific configurations. It prepares the database and then terminates.
Web Client provides the PLANTA Project user interface in the web browser.
Pulse (optional) is the agile PLANTA component for task management and collaboration.
For production workloads, we recommend placing the Secure component in front as a reverse proxy (TLS termination, Single Sign-On with OIDC, etc.).
The container images can be deployed anywhere an OCI-compatible container runtime or platform is available. In an orchestrated environment, operation takes place on a Kubernetes cluster.
The components are created as Deployments or Jobs with the required Services, PersistentVolumeClaims, and Ingress resources. Together they form a complete PLANTA project management environment.
What is Helm?
Helm is a package manager for Kubernetes that allows applications to be packaged, distributed, and installed or updated with a single command. Instead of manually creating multiple Kubernetes resource definitions, the application is provided as a so-called Chart.
Key terms:
Chart: A package that describes all required Kubernetes resources for an application.
Release: An installed instance of a Chart in a Kubernetes cluster.
Values: Configuration values used to customize a Release (e.g. database access, license information, resource limits).
For more information about Helm, see the official documentation at https://helm.sh/docs/.
Prerequisites
The following prerequisites must be met before installing in an orchestrated environment:
Kubernetes cluster (version 1.21 or higher)
Suitable options include AKS, EKS, GKE, OpenShift, or on-premises Kubernetes.
Helm (version 3.x or higher)
See the Helm installation guide.
kubectl with access to the cluster
Verify with: kubectl cluster-info
Access to the PLANTA OCI Registry
Registry URL: registry.planta.services
Login instructions: https://help.planta.de/en/tec/Container-ab-39.5.24/anmeldung-planta-oci-registry
Supported database instance
For details see System Requirements and Platforms → Section Database Requirements.
PLANTA license
Either as a license number or as a license file.
Persistent storage in the cluster
For shared data: support for the ReadWriteMany (RWX) access mode.
Ingress controller for production environments
e.g. NGINX, Traefik, or OpenShift Routes.
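Before installing, the tooling and cluster access can be verified with a few commands (assuming helm and kubectl are already installed):
# Check client versions
helm version
kubectl version --client
# Check cluster access
kubectl cluster-info
# List available storage classes (RWX support depends on the provisioner)
kubectl get storageclass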
Download Helm Chart from the PLANTA Registry
All container images and the Helm Chart are available in the PLANTA Registry at https://registry.planta.services. Access credentials are provided by PLANTA. For login details see: https://help.planta.de/en/tec/Container-ab-39.5.24/anmeldung-planta-oci-registry
Option 1: Install Helm Chart directly
Example:
helm install my-planta oci://registry.planta.services/planta/helm \
--version 1.0.0 \
--namespace planta \
--create-namespace \
--values my-values.yaml
Option 2: Pull Helm Chart locally first
# Download and unpack chart locally
helm pull oci://registry.planta.services/planta/helm \
--version 1.0.0 --untar
# Install from local chart
helm install my-planta ./helm \
--namespace planta \
--create-namespace \
--values my-values.yaml
Deployment in the Kubernetes Cluster
Create namespace (if not created via Helm):
kubectl create namespace planta
Create configuration file:
Create or adjust my-values.yaml with all required parameters (database, license, resources, Ingress, etc.), see the section "Helm Chart Configuration".
Install Helm Chart:
helm install my-planta oci://registry.planta.services/planta/helm \
  --namespace planta \
  --values my-values.yaml \
  --timeout 10m
Monitor installation:
# Watch pods
kubectl get pods -n planta -w
# Check release status
helm status my-planta -n planta
Helm Chart Configuration
Container configuration in the orchestrated environment is done – analogously to the standalone environment with Docker Compose – via environment variables. In Kubernetes/Helm these are set either directly via values.yaml or via Kubernetes Secrets.
An overview of all available parameters can be found in the topic Server and Client Configuration.
Minimum required configuration
Create a copy of the values.yaml file named my-values.yaml and set the following parameters:
Manager
planta__server__hibernate__connection__url
planta__server__hibernate__connection__username
planta__server__hibernate__connection__password
planta__server__ppms_license
Pulse (if used)
MONGO_URL
MONGO_OPLOG_URL
These parameters correspond to the parameters in the Docker Compose templates and are passed by the Chart as environment variables to the respective containers; a minimal example is shown below.
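As an illustration, a minimal my-values.yaml covering these parameters could look like the following sketch. The connection URL, credentials, license number, and MongoDB URLs are placeholders, and the pulse.env block is an assumption based on the Docker Compose parameters; it only applies if Pulse is used.
manager:
  env:
    # Database connection (example: Oracle)
    planta__server__hibernate__connection__url: "jdbc:oracle:thin:@hostname:1521/servicename"
    planta__server__hibernate__connection__username: "planta_user"
    planta__server__hibernate__connection__password: "your-password"
    # PLANTA license number
    planta__server__ppms_license: "your-license-number"
pulse:
  env:
    MONGO_URL: "mongodb://mongo-host:27017/pulse"
    MONGO_OPLOG_URL: "mongodb://mongo-host:27017/local"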
Database Configuration
Configuration via environment variables (e.g. test/development systems)
manager:
env:
# Example Oracle
planta__server__hibernate__connection__url: "jdbc:oracle:thin:@hostname:1521/servicename"
planta__server__hibernate__connection__username: "planta_user"
planta__server__hibernate__connection__password: "your-password"
Configuration via Kubernetes Secrets (recommended for production systems)
For database credentials and other sensitive configurations, Kubernetes Secrets should be used. A complete example can be found in secret-example.yaml.
Example reference in values:
manager:
secrets:
userSecretName: "planta-db-secret"
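The referenced Secret is created outside the Chart; the authoritative example is secret-example.yaml. A minimal sketch, assuming the Secret keys match the environment variable names:
apiVersion: v1
kind: Secret
metadata:
  name: planta-db-secret
  namespace: planta
type: Opaque
stringData:
  # Key names assumed to match the environment variable names; see secret-example.yaml
  planta__server__hibernate__connection__username: "planta_user"
  planta__server__hibernate__connection__password: "your-password"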
For further details on available parameters see Server and Client Configuration.
License Configuration
Licensing requires two things: the license number set as an environment variable, and the license.conf file supplied via one of the options below.
License number as environment variable
manager:
env:
planta__server__ppms_license: "your-license-number"
License File: Option 1 - User-Managed Secret
Variant with your own Secret:
manager:
license:
enabled: true
secretName: "your-license-secret"
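Such a Secret can be created beforehand from the license file, for example (the key name license.conf is an assumption; check the Chart's expectations):
kubectl create secret generic your-license-secret \
  --from-file=license.conf=./license.conf \
  --namespace planta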
License File: Option 2 - Chart-Managed Secret
Variant where the Chart automatically creates the Secret from a local license file:
manager:
license:
enabled: true
path: "license.conf"
Image Configuration and Registry Access
By default, images are pulled from the PLANTA Registry. An ImagePullSecret is used for this:
imagePullSecrets:
- name: registry-credentials
Create Secret:
kubectl create secret docker-registry registry-credentials \
--docker-server=registry.planta.services \
--docker-username=YOUR-USERNAME \
--docker-password=YOUR-PASSWORD \
--namespace planta
For detailed authentication instructions, see the Registry documentation: Login PLANTA OCI Registry.
Accessing PLANTA
After successful installation, check the pod status:
kubectl get pods -n planta
All relevant pods should have status Running. Access to the Web Client in production environments is via the configured Ingress or route.
For testing purposes without Ingress, port forwarding can be used:
# Access the Web Client
kubectl port-forward svc/my-planta-webclient 5000:5000 -n planta
# Open in browser:
# http://localhost:5000
Updating (Upgrading) an Existing Installation
Procedure:
Adjust configuration file:
Adjust my-values.yaml to the new requirements or chart version.
Update release:
helm upgrade my-planta oci://registry.planta.services/planta/helm \
  --namespace planta \
  --values my-values.yaml \
  --version 1.1.0
Monitor upgrade:
kubectl rollout status deployment/my-planta-manager -n planta
kubectl get pods -n planta
Uninstallation
To remove a PLANTA installation from the cluster:
# Uninstall the Helm release
helm uninstall my-planta -n planta
# Optionally, delete the namespace and all resources
kubectl delete namespace planta
Advanced Configuration
Resource Limits (CPU/RAM)
Example configuration:
manager:
resources:
requests:
memory: "2Gi"
cpu: "1000m"
limits:
memory: "4Gi"
cpu: "2000m"
webclient:
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "1Gi"
cpu: "500m"
For sizing recommendations see System Requirements and Platforms.
Storage Classes and Persistence
Define storage classes for persistent volumes:
storageClassName: "fast-ssd" # Global storage class
# Per-component storage configuration
transfer:
persistence:
size: 1Gi
accessMode: ReadWriteMany
# Optionally override storage class for shared volume
storageClassName: ""
webclient:
persistence:
size: 5Gi
accessMode: ReadWriteOnce
pulse:
persistence:
size: 10Gi
accessMode: ReadWriteOnce
Self-managed PVCs
By default, the Chart automatically creates PersistentVolumeClaims and mounts them into every Deployment/Job for each component that requires persistent storage. If you want full control – for example to set custom labels, annotations, or volume binding modes, or to reuse pre-provisioned volumes – you can disable automatic PVC creation per component.
When persistence.create: false:
The PVC resource will not be created by the Chart.
You provide the volume yourself via additionalVolumes/additionalVolumeMounts, referencing the PVC you created.
Using custom PVCs
If PVCs should not be created automatically by the Chart:
transfer:
persistence:
create: false
manager:
additionalVolumes:
- name: my-transfer
persistentVolumeClaim:
claimName: my-company-transfer-pvc
additionalVolumeMounts:
- name: my-transfer
mountPath: /mnt/transfer
readOnly: true
worker:
additionalVolumes:
- name: my-transfer
persistentVolumeClaim:
claimName: my-company-transfer-pvc
additionalVolumeMounts:
- name: my-transfer
mountPath: /mnt/transfer
readOnly: true
customizing:
additionalVolumes:
- name: my-transfer
persistentVolumeClaim:
claimName: my-company-transfer-pvc
additionalVolumeMounts:
- name: my-transfer
mountPath: /mnt/transfer
readOnly: false
Analogously for Web Client and Pulse:
webclient:
persistence:
create: false
additionalVolumes:
- name: my-webclient-data
persistentVolumeClaim:
claimName: my-company-webclient-pvc
additionalVolumeMounts:
- name: my-webclient-data
mountPath: /var/planta
pulse:
persistence:
create: false
additionalVolumes:
- name: my-pulse-data
persistentVolumeClaim:
claimName: my-company-pulse-pvc
additionalVolumeMounts:
- name: my-pulse-data
mountPath: /app/data
Note: When create: false is set, PVCs will not be automatically deleted during uninstallation.
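A pre-provisioned PVC such as the my-company-transfer-pvc referenced above is created outside the Chart. A minimal sketch, with storage class and size as illustrative assumptions:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-company-transfer-pvc
  namespace: planta
spec:
  accessModes:
    - ReadWriteMany   # shared between Manager, Worker, and Customizing
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 1Gi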
Pod Placement (Node Affinity, Tolerations)
manager:
nodeSelector:
disktype: ssd
tolerations:
- key: "dedicated"
operator: "Equal"
value: "planta"
effect: "NoSchedule"
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/zone
operator: In
values:
- zone-1
- zone-2
Additional Volumes
manager:
additionalVolumes:
- name: custom-config
configMap:
name: my-custom-config
additionalVolumeMounts:
- name: custom-config
mountPath: /etc/custom
readOnly: true
Session Affinity (Sticky Sessions)
Session affinity can be configured for scaled Web Client deployments.
Service-side ClientIP affinity
webclient:
service:
sessionAffinity:
enabled: true
timeoutSeconds: 3600
Ingress-side cookie-based affinity (e.g. NGINX)
For browser traffic, cookie-based sticky sessions are more reliable. These are configured via global ingress.annotations and apply to all Ingress resources.
ingress:
enabled: true
annotations:
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/affinity-mode: "balanced"
nginx.ingress.kubernetes.io/session-cookie-name: "PLANTASESSIONID"
nginx.ingress.kubernetes.io/session-cookie-expires: "3600"
nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
nginx.ingress.kubernetes.io/session-cookie-path: "/"
nginx.ingress.kubernetes.io/session-cookie-samesite: "Lax"
nginx.ingress.kubernetes.io/session-cookie-secure: "true"
nginx.ingress.kubernetes.io/session-cookie-change-on-failure: "true"
For other Ingress controllers (Traefik, HAProxy, etc.) adjust the annotations accordingly. The correct annotation keys can be found in your Ingress controller's documentation.
Note: Cookie-based affinity is processed by the Ingress controller and takes precedence over ClientIP affinity for external traffic. Both mechanisms can safely coexist – cookie affinity handles external/browser traffic, while ClientIP covers internal or direct service access.
Sticky Sessions at the Ingress level only affect routing from browser → Webclient, not the internal connection from Webclient → Worker (which uses ClientIP affinity on the Worker service).
Custom Ingress/Route Definitions
If Ingress resources are to be managed externally (e.g. via GitOps), the built-in Ingress creation in the Chart can be disabled and custom Ingress definitions created. The Services created by the Chart then serve as targets for these Ingress or Route resources.
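For example, the Chart's built-in Ingress can be disabled via the values:
ingress:
  enabled: false
A self-managed Ingress then targets the Web Client Service created by the Chart. In this sketch, the host name, Ingress class, and TLS secret are placeholders, and the Service name follows the <release>-webclient pattern used in this topic (here my-planta-webclient on port 5000):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: planta-webclient
  namespace: planta
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - planta.example.com
      secretName: planta-tls
  rules:
    - host: planta.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-planta-webclient
                port:
                  number: 5000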
Exposing the Worker Port Outside the Cluster (Development / Troubleshooting)
Warning: Do NOT use this in production environments. In production, the PLANTA .NET native client should connect via PLANTA Secure (port 8181 on the manager), which provides proper TLS encryption. Exposing the raw worker TCP port externally is only appropriate for local development or troubleshooting scenarios.
The PLANTA .NET native client communicates with the worker via a raw TCP connection on the worker's session port. By default, this port can only be accessed from within the cluster. To access it from outside the cluster (e.g., from a developer workstation), you can enable an additional NodePort or LoadBalancer Service:
worker:
service:
external:
enabled: true
type: NodePort
nodePort: 30001 # leave empty for a Kubernetes-assigned port
When enabled, an extra Service named <release>-worker-external is created alongside the existing internal ClusterIP Service. The internal service remains unaffected.
LoadBalancer example (e.g., for cloud providers):
worker:
service:
external:
enabled: true
type: LoadBalancer
externalTrafficPolicy: Local
# loadBalancerIP: "1.2.3.4" # optional static IP
After deploying, retrieve the assigned port or IP:
kubectl get svc <release>-worker-external -n planta
Disabling Components
pulse:
enabled: false
webclient:
enabled: false
Customizing Deployments (Export/Import)
PLANTA supports customizing deployments (changes that can be transferred between environments via export and import, e.g. from a development system to production) via a separate Kubernetes job based on the Manager image. An example of a job definition can be found in cu-deployment-example.yaml. Copy this file as a starting point and adapt it to your environment.
A detailed description of customer customizing deployment can be found here.
Prerequisites
Running PLANTA installation with reachable database
PLANTA Registry access for the Manager image
Shared volume for .par and .zip files
Database credentials (preferably via Secret; see secret-example.yaml)
Important: The Customizing Deployment Job establishes a direct connection to the database. Ensure that no concurrent operations are running on the same database during the export or import.
How It Works
Procedure:
The Job runs the PLANTA Manager container with a special export or import argument. The container:
Connects to the database using the provided credentials.
Reads the .par file that specifies which customizing objects are to be exported or imported.
Writes an exported .zip file to the mounted volume (export mode) or reads a .zip file and applies it to the database (import mode).
Terminates upon completion – Kubernetes marks the Job as succeeded or failed.
Apply job:
kubectl apply -f cu-deployment.yaml -n YOUR_NAMESPACE
Monitor job:
# Check job status
kubectl get jobs -n YOUR_NAMESPACE
# Follow the job logs
kubectl logs job/cu-deployment -n YOUR_NAMESPACE -f
The Job is configured as follows:
30-minute timeout (activeDeadlineSeconds: 1800) – the Job is terminated if this duration is exceeded
No retries (backoffLimit: 0) – a failed run is not automatically retried
Automatic cleanup after 1 hour (ttlSecondsAfterFinished: 3600) – the completed/failed Job resource is automatically removed
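The structure described here corresponds roughly to the following sketch. The authoritative manifest is cu-deployment-example.yaml; the image reference, arguments, and volume definition below are illustrative assumptions only:
apiVersion: batch/v1
kind: Job
metadata:
  name: cu-deployment
spec:
  activeDeadlineSeconds: 1800      # 30-minute timeout
  backoffLimit: 0                  # no retries
  ttlSecondsAfterFinished: 3600    # automatic cleanup after 1 hour
  template:
    spec:
      restartPolicy: Never
      imagePullSecrets:
        - name: registry-credentials
      containers:
        - name: cu-deployment
          image: registry.planta.services/planta/manager:<version>   # illustrative; take the image reference from cu-deployment-example.yaml
          args: ["export", "/var/planta/export/example.par"]         # illustrative export argument and .par path
          envFrom:
            - secretRef:
                name: planta-db-secret   # database credentials from a Secret
          volumeMounts:
            - name: export
              mountPath: /var/planta/export
      volumes:
        - name: export
          hostPath:                      # example only; use NFS or a PVC for production
            path: /data/planta-export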
To run the Job again, first delete the previous Job:
kubectl delete job cu-deployment -n YOUR_NAMESPACE
kubectl apply -f cu-deployment.yaml -n YOUR_NAMESPACE
Use Secrets for credentials in production. Avoid inserting database passwords directly into the Job YAML. Instead, create a Secret (see secret-example.yaml) and reference it via envFrom.secretRef.
Back up your database before running an import, especially in production environments.
The .par file must be accessible inside the container at the path specified in args. It must be placed in the directory mounted at /var/planta/export/.
Volume type: The example uses hostPath, which is suitable for single-node clusters and local development. For production or multi-node clusters, use an appropriate volume type (e.g. NFS, PersistentVolumeClaim, or cloud-specific storage).
Image tag: Update the image field according to the PLANTA version of your installation.
Troubleshooting
Pods not starting
kubectl get pods -n planta
kubectl describe pod <pod-name> -n planta
kubectl logs <pod-name> -n planta
Typical causes:
ImagePullBackOff: Check ImagePullSecret or Registry access.
CrashLoopBackOff: Check application logs for errors.
Pending: Check for insufficient resources or PVC binding issues.
Database connection fails
Check the database connection:
kubectl logs -n planta deployment/my-planta-manager
kubectl run -it --rm debug --image=busybox \
--restart=Never -n planta -- sh
# inside container:
# telnet database-host 1521
Persistent Volume issues
Check PVC status:
kubectl get pvc -n planta
kubectl describe pvc <pvc-name> -n planta
kubectl get storageclass
If PVCs are pending, ensure that:
Storage class exists: kubectl get storageclass
Sufficient storage is available
Access mode is supported (in particular ReadWriteMany)
Ingress not working
kubectl get ingress -n planta
kubectl describe ingress <ingress-name> -n planta
Check whether:
an Ingress controller is running,
DNS is configured correctly, and
TLS certificates are valid (if applicable).
Show current Helm values
helm get values my-planta -n planta
or
helm get values my-planta -n planta --all