ppms.llm
ppms.llm provides:
- A prompt templating abstraction (`PromptTemplate`) that:
  - Loads template metadata (text, system prompt, temperature, response format) from the customizing configuration.
  - Renders Jinja2 templates using module context read via `ModuleReader`.
  - Calls an LLM provider (currently OpenAI, model `gpt-4o`) to obtain completions.
  - Persists the full request/response cycle to table 537 for traceability.
- A lookup function (`get_prompt`) that resolves a template from a human-friendly `python_id`.
Classes
PromptTemplate
A prompt runner that loads a template by UUID, expands Jinja2 variables against a module’s data, manages a chat-like message list, invokes the LLM, and stores the result.
Tip: The templates (`text`, `system_prompt`) support Jinja2 and have access to a global `ppms` object for lookups.
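For example, a template line can pull a user variable directly (the same pattern appears in Example 3 below):

```jinja
Language: {{ ppms.uvar_get('@19') }}
```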
Properties
| Attribute | Type | Getter | Setter | Description |
|---|---|---|---|---|
| `uuid` | str | ✓ | ✗ | Identifier of the template. |
| `provider` | str | ✓ | ✗ | LLM provider; currently OpenAI. |
| `model` | str | ✓ | ✗ | LLM model name; default `gpt-4o`. |
| `text` | str | ✓ | ✗ | Jinja2 user prompt template. |
| `system_prompt` | str | ✓ | ✗ | Jinja2 system prompt template. |
| `temperature` | float | ✓ | ✗ | Sampling temperature passed to the LLM. |
| `response_format` | str | ✓ | ✗ | Dotted path to a response class (e.g., `customer.pm.schemas.ProjectPlan`). |
| `messages` | list | ✓ | ✗ | Chat message history sent to the LLM. |
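A minimal sketch of reading these properties (attribute names as shown above; verify them against your installation):

```python
from ppms import ppms
from ppms.llm import get_prompt

prompt = get_prompt("your_prompt_python_id")
if prompt:
    # All properties are read-only (getter only).
    ppms.ui_message_box(f"Model: {prompt.model}, temperature: {prompt.temperature}")
```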
Methods
| Function | Parameters | Return Value | Description |
|---|---|---|---|
| `PromptTemplate.execute_with_context` | `module`: a PLANTA module object whose data is read to expand the prompt | The LLM response object, or `None` if the call fails | Executes the template against the given module context and returns the LLM response object. |
Functions
| Function | Parameters | Return Value | Description |
|---|---|---|---|
| `get_prompt(python_id)` | `python_id`: the Python ID of a prompt (DI 067116) | The associated `PromptTemplate`, or `None` if no prompt matches | Fetches the `PromptTemplate` customized under the given Python ID. |
Examples
1) Execute a template and get raw response
```python
from ppms.llm import get_prompt
from ppms import ppms

# Read the target module and look up the prompt by its Python ID.
module = ppms.get_target_module()
prompt = get_prompt("your_prompt_python_id")
if not prompt:
    raise RuntimeError("Prompt not found")

# Execute the template; the request/response cycle is persisted to table 537.
response = prompt.execute_with_context(module=module)
if response is not None:
    ppms.ui_message_box(response.choices[0].message.content)
else:
    ppms.ui_message_box("No response (check the llm_response field in table 537 for error details).")
```
2) Typed, reliable responses with pydantic
Use a Pydantic model as your response_format so the LLM must return valid, parseable JSON you can trust. Here’s an example schema using projects/tasks:
```python
# customer/pm/schemas.py
from enum import Enum
from typing import List, Optional

from pydantic import BaseModel


class LinkType(str, Enum):
    FS = "finish-to-start"
    SS = "start-to-start"
    FF = "finish-to-finish"
    SF = "start-to-finish"


class TaskLink(BaseModel):
    dependency_type: LinkType
    predecessor_id: str


class PlanTask(BaseModel):
    id: str
    title: str
    duration_days: int
    parent_id: Optional[str] = None
    dependencies: List[TaskLink] = []


class ProjectPlan(BaseModel):
    name: str
    tasks: List[PlanTask]
```
After setting DI 067141 for your prompt to `customer.pm.schemas.ProjectPlan`, the response from executing the prompt is parsed according to this schema; if no valid response can be produced, `None` is returned.
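A minimal sketch of consuming the typed result, assuming the parsed model is exposed on the message the way OpenAI's structured-output responses do it (verify the exact attribute against your installation):

```python
response = prompt.execute_with_context(module=module)
if response is not None:
    # Assumption: the parsed ProjectPlan instance is available as
    # message.parsed, mirroring OpenAI's structured-output responses.
    plan = response.choices[0].message.parsed
    ppms.ui_message_box(f"{plan.name}: {len(plan.tasks)} tasks")
```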
3) Jinja2 templating in prompts
The prompt templates use Jinja2 to dynamically inject module data and control the text flow.
You can include conditional logic, formatting, and even system variables (ppms.uvar_get) to make prompts flexible and context-aware.
Let’s say you want to evaluate the quality of a product loaded in your module.
System prompt:
```text
You are a senior quality control engineer.
Analyze product information and provide an overall quality rating.
Output JSON with:
- quality_score (1–10)
- improvement_tips (list of strings)
Language: {{ ppms.uvar_get('@19') }}
```
User prompt:
```jinja
{% set product = data.product[0] %}
Product: {{ product.name }}
Type: {{ product.category }}
Price: {{ product.price }} €
{% if product.defects %}
Known defects:
{% for d in product.defects %}
- {{ d.description }}
{% endfor %}
{% endif %}
Please assess the product quality and suggest improvements.
```
Explanation of key features:
| Jinja2 tag | Purpose |
|---|---|
| `{% set %}` | Define reusable local variables |
| `{{ ... }}` | Insert dynamic data |
| `{% if %}` | Conditionally include text |
| `{% for %}` | Iterate over sequences |
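Standard Jinja2 filters also work for formatting values (a generic Jinja2 sketch, not a ppms-specific feature):

```jinja
Price: {{ '%.2f' | format(product.price) }} €
```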
Result (rendered prompt snippet)
If your module data contains a product called “Alpha Drill” with two listed defects, the rendered prompt will look like:
```text
Product: Alpha Drill
Type: Power Tool
Price: 199 €
Known defects:
- Battery overheats
- Switch gets stuck
Please assess the product quality and suggest improvements.
```
And because the system prompt defines a JSON output schema, the LLM might reply with something like:
```json
{
  "quality_score": 6,
  "improvement_tips": [
    "Improve battery cooling system",
    "Use reinforced switch components"
  ]
}
```
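If no Pydantic `response_format` is configured (see Example 2), the reply arrives as raw text and must be parsed manually; a minimal sketch using the response shape from Example 1:

```python
import json

response = prompt.execute_with_context(module=module)
if response is not None:
    # Note: without a response_format, the model is not guaranteed
    # to emit valid JSON, so guard the parse.
    try:
        result = json.loads(response.choices[0].message.content)
    except json.JSONDecodeError:
        result = None
    if result:
        tips = "\n".join(f"- {tip}" for tip in result["improvement_tips"])
        ppms.ui_message_box(f"Quality score: {result['quality_score']}\n{tips}")
```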