Generate LLM Response

Use this action when you want a workflow step or agent action to send a prompt directly to an LLM and reuse the result in later steps.

Best for

  • Summaries, classifications, rewrites, and extraction tasks
  • Turning raw API data into natural language
  • Producing structured output that later steps can read reliably

Main fields

  • Advanced mode: switches between a simple prompt and a fully configured message-based request
  • Platform: chooses the LLM provider
  • Model: chooses the specific model to run
  • API key: lets you override the default key when needed
  • Text: simple prompt field for basic use cases
  • Messages: advanced prompt format with multiple roles and content types
  • Enable structured outputs: lets you define a schema so the model returns predictable JSON
  • Temperature: controls how predictable or creative the output is
  • Max tokens: limits the size of the answer
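To make the field descriptions above concrete, here is a sketch of what an advanced-mode, message-based request might look like. The model name, message contents, and exact wire format are illustrative assumptions; the actual payload depends on the Platform and Model you select.

```python
# Illustrative advanced-mode request body. Field names mirror the settings
# above; the exact format depends on the chosen provider (assumption).
request = {
    "model": "gpt-4o-mini",  # Model: the specific model to run (example name)
    "messages": [            # Messages: multiple roles, not just one prompt
        {"role": "system", "content": "You are a support-ticket classifier."},
        {"role": "user", "content": "My invoice total is wrong."},
    ],
    "temperature": 0.2,      # Temperature: low value = more predictable output
    "max_tokens": 200,       # Max tokens: limits the size of the answer
}
```

In simple mode, only the Text field is sent; the structure above is what Advanced mode lets you configure explicitly.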

What later steps can use

The response is usually available at:

  • step(N).choices
  • step(N).choices[0].message.content

If you enable structured outputs, the content is JSON that matches your schema, which makes it easier to reuse in conditions, updates, and message templates.
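A later step would read the result along the paths above. The sketch below uses a hypothetical response object and example field values to show how the `choices[0].message.content` path is traversed, and how a structured-output result parses into stable keys.

```python
import json

# Hypothetical response shape, mirroring the step(N).choices paths above.
response = {
    "choices": [
        {"message": {"content": '{"category": "billing", "score": 0.92}'}}
    ]
}

# Equivalent of step(N).choices[0].message.content in a later step.
content = response["choices"][0]["message"]["content"]

# With structured outputs enabled, the content is JSON matching your schema,
# so parsing it yields predictable keys for conditions and templates.
data = json.loads(content)
print(data["category"])  # → billing
```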

Tips

  • Use the simple Text field when you only need one prompt.
  • Switch to Advanced mode when you need multiple messages, images, audio, or files.
  • Use structured output when later steps need stable keys like category, score, or summary.
  • Debug the step before you build follow-up conditions on the result.
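The third tip, stable keys for follow-up steps, is usually achieved by defining a JSON schema when you enable structured outputs. The schema below is a hedged example using the keys mentioned above; the exact schema format your platform accepts may differ.

```python
# Example JSON schema for "Enable structured outputs" (illustrative).
# Marking keys as required guarantees that later conditions can rely on
# category, score, and summary always being present.
schema = {
    "type": "object",
    "properties": {
        "category": {"type": "string", "enum": ["billing", "bug", "other"]},
        "score": {"type": "number"},
        "summary": {"type": "string"},
    },
    "required": ["category", "score", "summary"],
}
```

A follow-up condition can then branch on `category` without first checking whether the key exists.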