# Generate LLM Response
Use this action when you want a workflow step or agent action to send a prompt directly to an LLM and reuse the result in later steps.
## Best for
- Summaries, classifications, rewrites, and extraction tasks
- Turning raw API data into natural language
- Producing structured output that later steps can read reliably
## Main fields
| Field | What it does |
|---|---|
| Advanced mode | Switches between a simple prompt and a fully configured message-based request |
| Platform | Chooses the LLM provider |
| Model | Chooses the specific model to run |
| API key | Lets you override the default key when needed |
| Text | Simple prompt field for basic use cases |
| Messages | Advanced prompt format with multiple roles and content types |
| Enable structured outputs | Lets you define a schema so the model returns predictable JSON |
| Temperature | Controls how predictable or creative the output is |
| Max tokens | Limits the size of the answer |
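As an illustration, an Advanced-mode request roughly mirrors a chat-completion request body. This is a hedged sketch expressed as a Python dict; the field names follow the table above and common LLM APIs, but the exact wire format this action sends is an assumption:

```python
# Hypothetical Advanced-mode request body; the exact format the action
# uses under the hood is an assumption based on common LLM APIs.
request = {
    "platform": "openai",      # Platform: the LLM provider (example value)
    "model": "gpt-4o-mini",    # Model: the specific model to run (example value)
    "messages": [              # Messages: multiple roles in one request
        {"role": "system", "content": "You are a concise classifier."},
        {"role": "user", "content": "Classify: 'My card was charged twice.'"},
    ],
    "temperature": 0.2,        # Lower values make output more predictable
    "max_tokens": 200,         # Limits the size of the answer
}
print(sorted(request.keys()))
# → ['max_tokens', 'messages', 'model', 'platform', 'temperature']
```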
## What later steps can use
The response is usually available at:
step(N).choices[0].message.content
If you enable structured outputs, the content follows your schema, which makes it easier to reuse in conditions, updates, and message templates.
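A minimal sketch of reading that path in code, assuming the step output is shaped like a standard chat-completion response (the sample values here are invented for illustration):

```python
import json

# Hypothetical step output, shaped like a standard chat-completion response.
step_output = {
    "choices": [
        {"message": {"content": '{"category": "billing", "score": 0.92}'}}
    ]
}

# Later steps typically read the first choice's message content.
content = step_output["choices"][0]["message"]["content"]

# With structured outputs enabled, the content is JSON you can parse directly.
parsed = json.loads(content)
print(parsed["category"])  # → billing
```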
## Tips
- Use the simple Text field when you only need one prompt.
- Switch to Advanced mode when you need multiple messages, images, audio, or files.
- Use structured output when later steps need stable keys like `category`, `score`, or `summary`.
- Debug the step before you build follow-up conditions on the result.
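A hedged sketch of the kind of schema you might supply for structured outputs. The exact schema format depends on the provider; the keys `category`, `score`, and `summary` are just the example names from the tip above:

```python
import json

# Hypothetical JSON Schema for structured outputs; exact format varies by provider.
schema = {
    "type": "object",
    "properties": {
        "category": {"type": "string"},
        "score": {"type": "number"},
        "summary": {"type": "string"},
    },
    "required": ["category", "score", "summary"],
}

# A response conforming to the schema has stable keys later steps can rely on.
response_content = (
    '{"category": "refund", "score": 0.87, '
    '"summary": "Customer requests a refund."}'
)
result = json.loads(response_content)

# Simple sanity check that all required keys are present.
missing = [k for k in schema["required"] if k not in result]
print(missing)  # → []
```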