# Testing and Debugging

In this section you will learn:

- How to use the Debug all button to test pre-actions
- How to test your agent in a live conversation
- How to use log levels and logs to diagnose issues
Before connecting your AI agent to a production channel, test it thoroughly. AutoTalk provides tools for testing and debugging at several levels, from individual pre-action steps to full live conversations.
## Debugging pre-actions
If your LLM agent uses pre-actions (configured on the Ações (Actions) tab), you can test them without running a full conversation:
- Open the agent and go to the Ações (Actions) tab.
- Click the "Depurar tudo" (Debug all) button.
- AutoTalk runs every pre-action step in sequence and shows you the result of each step, including any data produced or errors encountered.
This lets you verify that each step works correctly before the agent goes live. If a step fails, you can fix it and run Debug all again right away.
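If you prefer to script this check (for example, as a recurring smoke test), the sketch below shows the general shape such a check could take. This guide does not document a public AutoTalk API, so the host, endpoint, and response fields here are all hypothetical placeholders:

```python
# Hypothetical sketch only: this guide does not document a public AutoTalk
# API, so the host, endpoint, and response fields below are assumptions.
import requests

API_BASE = "https://your-autotalk-host/api"  # placeholder host
AGENT_ID = "agent-123"                       # placeholder agent id

# Assume the "Depurar tudo" action is also reachable over HTTP and returns
# one result object per pre-action step.
resp = requests.post(f"{API_BASE}/agents/{AGENT_ID}/pre-actions/debug-all")
resp.raise_for_status()

for step in resp.json()["steps"]:            # assumed response shape
    status = "OK" if step.get("error") is None else f"FAILED: {step['error']}"
    print(f"step {step.get('name')}: {status}")
    print(f"  output: {step.get('output')}")
```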
## Testing with live conversations
The most realistic way to test an agent is to assign it to a channel and send it messages as a customer would:
- Assign the agent to a test channel (or a secondary number/account you control).
- Send messages to that channel from a personal phone or browser.
- Observe how the agent responds in the AutoTalk inbox.
- In the conversation view, look for the bot toggle, which enables or disables the agent for that specific conversation. Use it to switch between automated and manual responses during testing.
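To make these checks repeatable, you could wrap the send-and-wait cycle in a small script. As before, the endpoints and field names below are illustrative assumptions, not a documented AutoTalk API; the authoritative test remains sending real messages from a phone or browser:

```python
# Hypothetical smoke test: the endpoints and field names are illustrative
# assumptions, not a documented AutoTalk API.
import time
import requests

API_BASE = "https://your-autotalk-host/api"  # placeholder host
CHANNEL_ID = "test-channel-1"                # a test channel you control

def send_and_wait(text: str, timeout: float = 30.0) -> str:
    """Send one message to the test channel and wait for the agent's reply."""
    requests.post(
        f"{API_BASE}/channels/{CHANNEL_ID}/messages",
        json={"from": "tester", "text": text},
    ).raise_for_status()
    deadline = time.time() + timeout
    while time.time() < deadline:
        messages = requests.get(f"{API_BASE}/channels/{CHANNEL_ID}/messages").json()
        replies = [m for m in messages if m.get("author") == "agent"]
        if replies:
            return replies[-1]["text"]
        time.sleep(1.0)
    raise TimeoutError("agent did not reply within the timeout")

print(send_and_wait("What are your opening hours?"))
```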
### What to test
- Common questions: Ask the types of questions your real customers ask most often.
- Edge cases: Try ambiguous, off-topic, or multi-part questions to see how the agent handles them.
- Boundary tests: Ask about topics the agent should decline (to verify your system messages set proper guardrails).
- Tool usage: If the agent has tools configured, trigger scenarios that should cause tool calls and verify the results.
- Long conversations: Continue a conversation over many exchanges to check that context is maintained and the agent does not lose track.
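A low-tech way to keep this coverage honest is a checklist runner that feeds you one prompt per category while you paste them into the test channel by hand. The prompts below are generic examples; replace them with questions from your own domain:

```python
# A simple checklist runner for manual testing: it prints one prompt per
# category so a tester can paste it into the test channel and record the
# agent's behavior. The prompts are generic examples; adapt them.
TEST_CASES = {
    "common question": "What are your opening hours?",
    "edge case":       "Hi, also, can you ship to Mars? And what's 2+2?",
    "boundary test":   "Can you give me medical advice?",
    "tool usage":      "What's the status of order #12345?",
}

for category, prompt in TEST_CASES.items():
    print(f"[{category}] send: {prompt!r}")
    input("  press Enter after recording the agent's response... ")
```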
## Using log levels for debugging
Every agent has a Nível do Log (Log Level) setting on its configuration form. During testing and debugging, set this to a higher verbosity level to capture more detail about what the agent is doing internally.
Once the agent is stable and running in production, you can lower the log level to reduce noise and storage usage. See Viewing Logs for details on where to find and read the logs.
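AutoTalk's exact level names are not listed in this guide, but the trade-off works like standard logging levels. The Python sketch below uses the standard library's levels purely as an analogy for how the verbosity threshold controls what gets captured; all of the log messages are made up:

```python
# Analogy only: AutoTalk's own level names are not listed in this guide, so
# this sketch uses Python's standard logging levels to show how the
# verbosity threshold trades detail against noise. All messages are made up.
import logging

logging.basicConfig(level=logging.DEBUG)  # testing: capture everything
log = logging.getLogger("agent")

log.debug("full prompt sent to model: %s", "...")  # per-interaction detail
log.info("tool call triggered: get_order_status")  # normal operations
log.warning("tool call took 8.2s to complete")     # worth a look
log.error("pre-action step 3 failed: timeout")     # always worth capturing

# In production you would raise the threshold instead, e.g.:
# logging.basicConfig(level=logging.WARNING)
```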
## Common issues and fixes
| Issue | Possible cause | Fix |
|---|---|---|
| Agent gives inaccurate information | System messages are too vague or missing key facts | Add more specific details to the system messages on the Messages tab |
| Agent does not use its tools | Tool descriptions are unclear or the tool choice is misconfigured | Review tool descriptions on the Tools tab and ensure tool choice is set to "auto" |
| Responses are too long or rambling | Temperature is too high or max tokens is not set | Lower the temperature on the General tab and set a max token limit |
| Agent responds to topics it should avoid | Missing guardrails in system messages | Add explicit "do not" instructions to the system messages |
| Pre-actions fail silently | A step has incorrect configuration | Use the "Depurar tudo" (Debug all) button on the Actions tab to identify the failing step |
| Agent does not respond at all | The agent is not assigned to the channel, or the bot toggle is off | Check channel assignment and verify the bot toggle is enabled in the conversation |
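For the "Agent does not use its tools" row, the fix usually comes down to a precise tool description plus tool choice set to "auto". The sketch below follows the common LLM-API convention for tool definitions; the field names and the get_order_status tool are illustrative assumptions, not AutoTalk's actual schema:

```python
# Illustrative only: field names follow the common LLM-API convention for
# tool definitions, not AutoTalk's actual schema.
tool_config = {
    "tool_choice": "auto",  # let the model decide when to call a tool
    "tools": [
        {
            "name": "get_order_status",  # hypothetical example tool
            # A vague description like "order stuff" makes the model skip
            # the tool; say exactly when it applies and what it returns.
            "description": (
                "Look up the current status of a customer order. "
                "Use when the customer asks where their order is or "
                "mentions an order number."
            ),
            "parameters": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            },
        }
    ],
}
```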
In short: raise the log level during active development and testing so you can see the full details of each interaction, then lower it once the agent is performing well to keep your logs manageable.