Customer Customizing Deployment
Info
The customizing delivery procedure serves to roll out the customizing, e.g. from a development system at the customer's site to their productive system. In the process, the customizing in the target system is completely replaced by an export from the source system using the Database Export/Import function.
Requirements
Source and target system must be congruent, i.e.
- the program status of both systems must be identical and all migration packets must have the same status
- the database schema of the target system must have the same status as that of the source system
A customizing deployment generally consists of the following elements:
- SQL files (example: sql_planta_1_1.sql)
- Python directory of the server (example: py.zip)
- Customizing deployment files (example: cu_deployment_2015_02_04.zip)
Export Procedure
The export step is only necessary if you carry out the deployment in-house yourself. If you have any questions, please contact your PLANTA consultant.
Information
For the export, there is a parameter file that provides all necessary data.
The file is located in the customizing image at config/export/customizing_deployment_venus.par. It can be accessed from the server via the transfer volume at /mnt/transfer/config/export/customizing_deployment_venus.par.
Example of docker compose for export
```yaml
services:
  manager:
    image: registry.planta.services/project/manager:latest
    environment:
      - "SKIP_INIT=1"
      - "planta__server__ppms_license=<license number>"
      - "planta__server__hibernate__connection__url=<db url>"
      - "planta__server__hibernate__connection__username=<db user>"
      - "planta__server__hibernate__connection__password=<db password>"
    command: ["export", "@/mnt/transfer/config/export/customizing_deployment_venus.par --output-file /var/planta/export/cu_deployment.zip"]
    depends_on:
      customizing:
        condition: service_completed_successfully
        restart: true
    networks:
      - internal
    volumes:
      - transfer:/mnt/transfer:ro
      - ./export:/var/planta/export:rw
  worker:
    image: registry.planta.services/project/worker:latest
    restart: unless-stopped
    environment:
      - "SESSION_LINK=manager:54242"
    depends_on:
      manager:
        condition: service_healthy
        restart: true
    networks:
      - internal
    volumes:
      - transfer:/mnt/transfer:rw
  customizing:
    image: registry.planta.services/project/customizing:latest
    environment:
      - LOG_LEVEL=INFO
    volumes:
      - transfer:/mnt/transfer:rw
networks:
  internal:
volumes:
  transfer:
```
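With the compose file above, the export can be triggered as follows. This is a sketch: it assumes the service names and the `./export` bind mount from the example, and that the manager container exits once the export is written.

```shell
# Start the export stack; the manager runs the export and then exits.
docker compose up --abort-on-container-exit manager

# The archive is written to the bind-mounted ./export directory on the host.
ls -l export/cu_deployment.zip

# Tear the stack down again.
docker compose down
```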
Import Procedure
Information
The import procedure comprises several steps.
The associated parameter files are located in the config/import/customizing_deployment subdirectory. They can be accessed from the server via the transfer volume at /mnt/transfer/config/import/customizing_deployment/*.
| File | Description | Note |
|---|---|---|
| 01_replace_q1b_q2b.par | Involves the complete replacement of the customizing | Mandatory part |
| 02a_replace_module_variants.par | Completely replaces the module variants. This entirely overwrites tables 500-503. | Optional |
| 02b_add_module_variant_itexts.par | Additionally provides the I-texts of the module variants | Optional |
| | Completely replaces the interface tables (for DB 39.4.4.0) | Optional |
| | Completely replaces the interface tables (for DB 39.5.x) | Optional |
| 04_replace_process_models.par | Replaces the process models | Optional |
Step 1: To do before deployment
Stop the PLANTA service.
Step 2: Run the SQL files
Run the delivered SQL file(s) in a database management tool of your choice.
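How the file is executed depends on your database system. As a sketch, for PostgreSQL and Oracle respectively; all connection details in angle brackets are placeholders:

```shell
# PostgreSQL: run the delivered SQL file with psql.
psql "postgresql://<db user>:<db password>@<db host>:5432/<db name>" -f sql_planta_1_1.sql

# Oracle: run the delivered SQL file with SQL*Plus.
sqlplus <db user>/<db password>@//<db host>/<service> @sql_planta_1_1.sql
```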
Step 3: Import of customizing deployment files
PLANTA delivers a zip archive including the customizing deployment files (example: cu_deployment_2015_02_04.zip). This zip archive does not need to be unpacked; unpacking is carried out automatically by PLANTA's import procedure.
Mount the zip archive at /var/planta in the manager container. The docker compose file must be run multiple times in a row, uncommenting the respective step each time:
Example of docker compose for import
```yaml
services:
  manager:
    image: registry.planta.services/project/manager:latest
    environment:
      - "SKIP_INIT=1"
      - "planta__server__ppms_license=<license number>"
      - "planta__server__hibernate__connection__url=<db url>"
      - "planta__server__hibernate__connection__username=<db user>"
      - "planta__server__hibernate__connection__password=<db password>"
    command: [ "import", "@/mnt/transfer/config/import/customizing_deployment/01_replace_q1b_q2b.par --input-file /var/planta/export/cu_deployment.zip" ]
    #command: [ "import", "@/mnt/transfer/config/import/customizing_deployment/02a_replace_module_variants.par --input-file /var/planta/export/cu_deployment.zip" ]
    #command: [ "import", "@/mnt/transfer/config/import/customizing_deployment/02b_add_module_variant_itexts.par --input-file /var/planta/export/cu_deployment.zip" ]
    #command: [ "import", "@/mnt/transfer/config/import/customizing_deployment/03_replace_interfaces_venus.par --input-file /var/planta/export/cu_deployment.zip" ]
    #command: [ "import", "@/mnt/transfer/config/import/customizing_deployment/04_replace_process_models.par --input-file /var/planta/export/cu_deployment.zip" ]
    depends_on:
      customizing:
        condition: service_completed_successfully
        restart: true
    networks:
      - internal
    volumes:
      - transfer:/mnt/transfer:ro
      - ./export:/var/planta/export:rw
  worker:
    image: registry.planta.services/project/worker:latest
    restart: unless-stopped
    environment:
      - "SESSION_LINK=manager:54242"
    depends_on:
      manager:
        condition: service_healthy
        restart: true
    networks:
      - internal
    volumes:
      - transfer:/mnt/transfer:rw
  customizing:
    image: registry.planta.services/project/customizing:latest
    environment:
      - LOG_LEVEL=INFO
    volumes:
      - transfer:/mnt/transfer:rw
networks:
  internal:
volumes:
  transfer:
```
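As an alternative to commenting and uncommenting the command line for each run, the manager's command can be overridden per step with `docker compose run`. This is a sketch assuming the compose example above and that the container entrypoint accepts the same `import` arguments; run only the steps applicable to your deployment, in order.

```shell
# Override the manager command once per import step (hypothetical loop).
for par in 01_replace_q1b_q2b 02a_replace_module_variants \
           02b_add_module_variant_itexts 03_replace_interfaces_venus \
           04_replace_process_models; do
  docker compose run --rm manager import \
    "@/mnt/transfer/config/import/customizing_deployment/${par}.par --input-file /var/planta/export/cu_deployment.zip"
done
```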
Step 4: Import new files into the Python directory of the server
Unzip the zip archive which contains the Python directory.
Copy all subdirectories of the new "py" directory to the mounted "py" directory of the server.
If a custom image has been built, this step is omitted.
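The copy step can be sketched as follows; the archive name follows the example above, while the target path is a placeholder for the server's mounted "py" directory:

```shell
# Unpack the delivered Python directory into a temporary location.
unzip -q py.zip -d /tmp/py_new

# Copy all subdirectories of the new "py" directory into the mounted one
# (replace /path/to/mounted/py with your actual mount point).
cp -r /tmp/py_new/py/. /path/to/mounted/py/

# Clean up the temporary directory.
rm -rf /tmp/py_new
```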
Step 5: To do after deployment
Start the PLANTA service.
The target directory will be regenerated with current POJO classes.
Notes
If the contents of data tables that possess an automatic number are transferred but their counter values are not, the counter values must be adjusted manually afterwards.
If history data tables (_HIS_) are copied during the import, the HIBERNATE_SEQUENCE must be set to the highest value present in the history tables.
If interface data is exchanged, the archived interface configurations will be lost.
This includes all logging files, parameters, as well as all pool records!
The PLANTA import does not delete any interface mappings; it only imports new mappings or updates existing ones. Mappings that are no longer needed must therefore be deleted manually after the import.
If import errors occur due to data corruption, the Database Consistency Check module can be used to check or correct the data.