
ppms.llm

ppms.llm provides:

  • A prompt templating abstraction (PromptTemplate) that:

    • Loads template metadata (text, system prompt, temperature, response format) from the customizing configuration.

    • Renders Jinja2 templates using module context read via ModuleReader.

    • Calls an LLM provider (currently OpenAI, model gpt-4o) to obtain completions.

    • Persists the full request/response cycle to table 537 for traceability.

  • A lookup function (get_prompt) that resolves a template from a human-friendly python_id.

Classes

PromptTemplate

A prompt runner that loads a template by UUID, expands Jinja2 variables against a module’s data, manages a chat-like message list, invokes the LLM, and stores the result.

Tip: The templates (text, system_prompt) support Jinja2 and have access to a global ppms object for lookups.

Properties

  • uuid (str): Identifier of the template.

  • provider (str): LLM provider; currently '1493' (OpenAI).

  • model (str): LLM model name; default 'gpt-4o'.

  • text (str): Jinja2 user prompt template.

  • system_prompt (str): Jinja2 system prompt template.

  • temperature (float): Sampling temperature passed to the LLM.

  • response_format (str): Dotted path to a response class (e.g., package.module.Schema).

  • messages (list[dict]): Chat message history sent to the LLM.
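
For orientation, here is a minimal sketch that reads a loaded template's configuration. It assumes the attributes above expose getters and uses a placeholder python_id (see get_prompt below).

PY
from ppms import ppms
from ppms.llm import get_prompt

# Placeholder python_id; replace with a prompt configured in your system.
prompt = get_prompt("your_prompt_python_id")
if prompt:
    # Inspect the metadata loaded from customizing.
    ppms.ui_message_box(
        f"model={prompt.model}, temperature={prompt.temperature}, "
        f"response_format={prompt.response_format}"
    )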

Methods

PromptTemplate.execute_with_context(module)

  • Parameters: module, a PLANTA module object to read for expanding the prompt.

  • Return value: The LLM response object, or None if no response was obtained.

  • Description: Executes the template against the given module context and returns the LLM response object.

Functions

get_prompt(python_id)

  • Parameters: python_id, the Python ID of a prompt (DI 067116).

  • Return value: PromptTemplate or None.

  • Description: Fetches the associated PromptTemplate instance for the given python_id.

    • Supports both generic and model-specific prompts (analogous to SQL statements).

    • Note: Currently assumes a single template per prompt. Config-layer logic for provider/model-specific resolution is pending requirements.

Examples

1) Execute a template and get raw response

PY
from ppms.llm import get_prompt
from ppms import ppms

module = ppms.get_target_module()

prompt = get_prompt("your_prompt_python_id")
if not prompt:
    raise RuntimeError("Prompt not found")

response = prompt.execute_with_context(module=module)
if response is not None:
    ppms.ui_message_box(response.choices[0].message.content)
else:
    ppms.ui_message_box("No response (check llm_response field in table 537 for error details).")

2) Typed, reliable responses with pydantic

Use a Pydantic model as your response_format so the LLM must return valid, parseable JSON you can trust. Here’s an example schema using projects/tasks:

PY
# customer/pm/schemas.py
from enum import Enum
from pydantic import BaseModel
from typing import List, Optional

class LinkType(str, Enum):
    FS = "finish-to-start"
    SS = "start-to-start"
    FF = "finish-to-finish"
    SF = "start-to-finish"

class TaskLink(BaseModel):
    dependency_type: LinkType
    predecessor_id: str

class PlanTask(BaseModel):
    id: str
    title: str
    duration_days: int
    parent_id: Optional[str] = None
    dependencies: List[TaskLink] = []

class ProjectPlan(BaseModel):
    name: str
    tasks: List[PlanTask]

After setting DI 067141 for your prompt to customer.pm.schemas.ProjectPlan, the response from executing the prompt will be formatted according to the schema, or None will be returned.
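
Below is a minimal sketch of consuming such a typed response. It assumes, as described above, that execute_with_context returns a ProjectPlan instance once DI 067141 points to the schema; the python_id is a placeholder.

PY
from ppms import ppms
from ppms.llm import get_prompt

module = ppms.get_target_module()

prompt = get_prompt("your_plan_prompt_python_id")  # placeholder python_id
if not prompt:
    raise RuntimeError("Prompt not found")

plan = prompt.execute_with_context(module=module)
if plan is None:
    ppms.ui_message_box("No valid response (check table 537 for error details).")
else:
    # plan is assumed to be a ProjectPlan instance, so its fields are typed.
    ppms.ui_message_box(f"{plan.name}: {len(plan.tasks)} tasks planned")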

3) Jinja2 templating in prompts

The prompt templates use Jinja2 to dynamically inject module data and control the text flow.
You can include conditional logic, formatting, and even system variables (ppms.uvar_get) to make prompts flexible and context-aware.


Let’s say you want to evaluate the quality of a product loaded in your module.

System prompt:

CODE
You are a senior quality control engineer.
Analyze product information and provide an overall quality rating.
Output JSON with:
- quality_score (1–10)
- improvement_tips (list of strings)
Language: {{ ppms.uvar_get('@19') }}

User prompt:

CODE
{% set product = data.product[0] %}

Product: {{ product.name }}
Type: {{ product.category }}
Price: {{ product.price }} €

{% if product.defects %}
Known defects:
{% for d in product.defects %}
- {{ d.description }}
{% endfor %}
{% endif %}

Please assess the product quality and suggest improvements.

Explanation of key features:

  • {% set var = value %}: Define reusable local variables.

  • {{ variable }}: Insert dynamic data.

  • {% if condition %} ... {% endif %}: Conditionally include text.

  • {% for item in list %} ... {% endfor %}: Iterate over sequences.
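
If you want to preview how such a template renders outside of PLANTA, you can feed it to Jinja2 directly. The nested dictionary below is a hypothetical stand-in for the module data that ModuleReader would normally provide as data:

PY
from jinja2 import Template

# The user prompt template from above.
user_prompt = """\
{% set product = data.product[0] %}
Product: {{ product.name }}
Type: {{ product.category }}
Price: {{ product.price }} €

{% if product.defects %}
Known defects:
{% for d in product.defects %}
- {{ d.description }}
{% endfor %}
{% endif %}
Please assess the product quality and suggest improvements.
"""

# Hypothetical module data; the real structure comes from ModuleReader.
data = {
    "product": [
        {
            "name": "Alpha Drill",
            "category": "Power Tool",
            "price": 199,
            "defects": [
                {"description": "Battery overheats"},
                {"description": "Switch gets stuck"},
            ],
        }
    ]
}

print(Template(user_prompt).render(data=data))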

Result (rendered prompt snippet)

If your module data contains a product called “Alpha Drill” with two listed defects, the rendered prompt will look like:

CODE
Product: Alpha Drill
Type: Power Tool
Price: 199 €

Known defects:
- Battery overheats
- Switch gets stuck

Please assess the product quality and suggest improvements.

And because the system prompt defines a JSON output schema, the LLM might reply with something like:

CODE
{
  "quality_score": 6,
  "improvement_tips": [
    "Improve battery cooling system",
    "Use reinforced switch components"
  ]
}
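
If you prefer a typed result instead of raw JSON text, the same response_format mechanism shown in example 2 applies here as well. A hypothetical schema matching the output above could look like this:

PY
# customer/qc/schemas.py (hypothetical module path)
from typing import List

from pydantic import BaseModel

class QualityReport(BaseModel):
    quality_score: int           # rating from 1 to 10, as requested in the system prompt
    improvement_tips: List[str]  # free-text suggestions

Pointing the prompt's response_format (DI 067141) at customer.qc.schemas.QualityReport would then yield a QualityReport instance, or None, as in example 2.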
