
Testing and Debugging

What you'll learn
  • How to use the Debug all button to test pre-actions
  • How to test your agent in a live conversation
  • How to use log levels and logs to diagnose issues

Before connecting your AI agent to a production channel, it is important to test it thoroughly. AutoTalk provides several tools for testing and debugging agents at different levels.

Debugging pre-actions

If your LLM agent uses pre-actions (configured on the Ações tab), you can test them without running a full conversation:

  1. Open the agent and go to the Ações (Actions) tab.
  2. Click the "Depurar tudo" (Debug all) button.
  3. AutoTalk runs every pre-action step in sequence and shows you the result of each step, including any data produced or errors encountered.

This lets you verify that each step works correctly before the agent goes live. If a step fails, fix its configuration and click Debug all again to re-run the sequence.

Testing with live conversations

The most realistic way to test an agent is to assign it to a channel and send it messages as a customer would:

  1. Assign the agent to a test channel (or a secondary number/account you control).
  2. Send messages to that channel from a personal phone or browser.
  3. Observe how the agent responds in the AutoTalk inbox.
  4. In the conversation view, look for the bot toggle, which lets you enable or disable the agent on a per-conversation basis. Use it to switch between automated and manual responses during testing.

What to test

  • Common questions: Ask the types of questions your real customers ask most often.
  • Edge cases: Try ambiguous, off-topic, or multi-part questions to see how the agent handles them.
  • Boundary tests: Ask about topics the agent should decline (to verify your system messages set proper guardrails).
  • Tool usage: If the agent has tools configured, trigger scenarios that should cause tool calls and verify the results.
  • Long conversations: Continue a conversation over many exchanges to check that context is maintained and the agent does not lose track.
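
To keep these checks repeatable across test runs, it can help to write them down before you start messaging the test channel. The sketch below is a hypothetical checklist only, not part of AutoTalk; the prompts, expected behaviours, and the order-lookup tool it mentions are made-up examples you would replace with your own.

    # Hypothetical manual-testing checklist -- not an AutoTalk API.
    # Send each prompt to the test channel by hand, then record whether
    # the agent's reply matched the expectation.
    test_cases = [
        {"area": "common question", "prompt": "What are your opening hours?",
         "expect": "accurate answer based on the system messages"},
        {"area": "edge case", "prompt": "Do you ship to Spain? Also, is gift wrapping included?",
         "expect": "answers both parts or asks a clarifying question"},
        {"area": "boundary", "prompt": "Can you give me legal advice about my contract?",
         "expect": "politely declines, as required by the guardrails"},
        {"area": "tool usage", "prompt": "Where is order #1042?",
         "expect": "triggers the (hypothetical) order-lookup tool and reports its result"},
        {"area": "long conversation", "prompt": "(continue the thread for 15+ exchanges)",
         "expect": "keeps earlier context without losing track"},
    ]

    for case in test_cases:
        print(f"[{case['area']}] send: {case['prompt']}")
        print(f"  expected: {case['expect']}")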

Using log levels for debugging

Every agent has a Nível do Log (Log Level) setting on its configuration form. During testing and debugging, set this to a higher verbosity level to capture more detail about what the agent is doing internally.

Once the agent is stable and running in production, you can lower the log level to reduce noise and storage usage. See Viewing Logs for details on where to find and read the logs.
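
This page does not list AutoTalk's exact log-level names, so the sketch below uses Python's standard logging module purely to illustrate how a verbosity threshold works: a verbose setting records routine internal detail such as tool calls, while a stricter production setting keeps only warnings and errors. The log messages shown are invented.

    import logging

    # Illustration of verbosity thresholds -- this is not AutoTalk code.
    logger = logging.getLogger("agent")
    logging.basicConfig(format="%(levelname)s %(message)s")

    # During testing: verbose, so every internal step is visible.
    logger.setLevel(logging.DEBUG)
    logger.debug("tool call requested: lookup_order(order_id='1042')")  # recorded
    logger.info("reply sent to conversation")                           # recorded

    # In production: stricter threshold, so routine detail is dropped.
    logger.setLevel(logging.WARNING)
    logger.debug("prompt tokens: 412")        # filtered out
    logger.warning("tool call timed out")     # still recorded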

Common issues and fixes

  • Agent gives inaccurate information
    Possible cause: System messages are too vague or missing key facts.
    Fix: Add more specific details to the system messages on the Messages tab.
  • Agent does not use its tools
    Possible cause: Tool descriptions are unclear or the tool choice is misconfigured.
    Fix: Review tool descriptions on the Tools tab and ensure tool choice is set to "auto".
  • Responses are too long or rambling
    Possible cause: Temperature is too high or max tokens is not set.
    Fix: Lower the temperature on the General tab and set a max token limit.
  • Agent responds to topics it should avoid
    Possible cause: Missing guardrails in system messages.
    Fix: Add explicit "do not" instructions to the system messages.
  • Pre-actions fail silently
    Possible cause: A step has incorrect configuration.
    Fix: Use the "Depurar tudo" (Debug all) button on the Actions tab to identify the failing step.
  • Agent does not respond at all
    Possible cause: The agent is not assigned to the channel, or the bot toggle is off.
    Fix: Check channel assignment and verify the bot toggle is enabled in the conversation.
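
For the guardrail fix above, explicit instructions work better than general ones. A hypothetical example of a "do not" instruction you might add to a system message on the Messages tab:

    Do not discuss pricing, discounts, or refunds. If the customer asks about
    any of these topics, say that a human agent will follow up.

The exact wording should match your own policies; the point is to name the off-limits topics directly rather than relying on the agent to infer them.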
Tip

Increase the log level during active development and testing so you can see full details of each interaction. Once the agent is performing well, reduce the log level to keep your logs manageable.

Next steps