Agents give your team the power to build AI-driven automations that reason over user context, make decisions, and execute tools — all inside Flywheel. Let’s dive in to understand the basics and get you started on building your first agent.

What are agents?

Agents are AI-powered automations that go beyond fixed step-by-step workflows. Instead of following a predetermined path, agents use a language model to evaluate context, choose which tools to call, and decide how to act — all in real time. They’re ideal when the right action depends on nuanced user data, behavioral patterns, or external information that can’t be captured with simple if/else branching. Whether you’re looking to automate personalized outreach, triage customer risk, enrich user profiles with web data, or handle complex onboarding sequences, agents have you covered. From customer success to revenue operations, if you can describe the decision, an agent can make it.

Components of an agent

Understanding the components of an agent will give you a clearer picture of how to create, run, and refine your AI automations:

Triggers

Define when an agent run starts: a specific user event (like a signup or purchase) or a drop-off condition (when a user fails to complete expected actions).

Agent Node

The core reasoning engine. This is where you configure the model, system prompt, memory settings, and output behavior that control how the agent thinks and responds.

Tools

Individual capabilities the agent can call during a run. Tools range from sending emails and Slack messages to searching the web, creating Intercom tickets, and setting user properties. The agent decides which tools to use based on context.

Prompt

The system-level instructions that guide the agent’s reasoning, tone, and decision-making. A well-crafted prompt is the most important factor in agent quality.

Memory

Controls what context the agent retains across tool calls within a single run. Memory settings determine how much prior reasoning is available when the agent makes its next decision.

Runs

Each execution of an agent is called a “run”. Runs produce logs, debug timelines, and tool call results that you can inspect to understand agent behavior.

Test Mode

A controlled environment where you can run agents against real or sample payloads without affecting production data.
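Conceptually, these components come together in a loop during a run: the trigger supplies context, the model picks the next tool (or decides to finish), and memory carries prior reasoning into the next decision. The sketch below is purely illustrative — it is not Flywheel’s API, and `choose_action` is a stand-in for the language-model call:

```python
# Illustrative sketch of a single agent run. Not Flywheel's actual API:
# `choose_action` stands in for the language-model call that picks the
# next tool based on the prompt, trigger context, and accumulated memory.

def run_agent(system_prompt, trigger_event, tools, choose_action, max_steps=10):
    memory = []  # reasoning retained across tool calls within this one run
    log = []     # per-run record, like the debug timeline in the Runs tab
    for _ in range(max_steps):
        action = choose_action(system_prompt, trigger_event, memory)
        if action["type"] == "finish":
            log.append(("finish", action.get("reason")))
            break
        tool = tools[action["tool"]]             # agent-selected tool
        result = tool(**action.get("inputs", {}))
        memory.append({"tool": action["tool"], "result": result})
        log.append((action["tool"], result))
    return log
```

The `max_steps` cap reflects the general idea that a run is bounded: the agent either finishes deliberately or stops after a fixed number of tool calls.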

Building an agent

To build an agent, start by creating one from Automation → Agents. Here’s a step-by-step guide:

Choose your trigger:

New Event Trigger

Starts a run when a specific user event occurs, like a signup or purchase

Drop-off Trigger

Starts a run when users fail to complete expected actions within a timeframe
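A drop-off condition amounts to asking: did the expected follow-up event arrive within the window after an anchor event? The helper below is a hypothetical illustration of that logic (the event names and window are made up), not how Flywheel evaluates triggers internally:

```python
from datetime import datetime, timedelta

# Hypothetical drop-off check: did the user complete the expected action
# within `window_hours` of the anchor event (e.g. signup)?
def is_dropped_off(events, anchor, expected, window_hours):
    """events: list of (event_name, timestamp) tuples for one user."""
    anchors = [ts for name, ts in events if name == anchor]
    if not anchors:
        return False  # no anchor event yet, nothing to judge
    deadline = min(anchors) + timedelta(hours=window_hours)
    completed = any(name == expected and ts <= deadline for name, ts in events)
    return datetime.utcnow() > deadline and not completed
```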
Configure the agent node: Set your model, system prompt, memory, and output settings. The prompt is where you describe the agent’s role, goals, constraints, and the context it should consider.

Agent node preview

Agent configuration preview

Add tools: Click the + icon on the agent node to add tools. The agent will decide which tools to call based on context and your prompt instructions. Available tools include:

Communication Tools:

Email

Send preconfigured emails with agent-controlled timing

Marketing Email

Send builder-defined marketing emails

One-to-One Email

Send personal emails from team members

AI Marketing Email

Generate and send AI-authored marketing emails

AI One-to-One Email

Generate and send AI-authored personal emails

Smart Message

Send context-aware messages across email and Slack
Slack Tools:

Slack Message

Send messages to Slack channels with context-aware content

Send Slack Channel Invites

Invite users to existing Slack channels

Create Slack Channel

Create dedicated Slack channels and invite users
Assignment & Management Tools:

Assign CSM

Assign Customer Success Managers to users

Round Robin

Distribute ownership fairly across team members

Set Custom Property

Write values to user custom properties
Data & Research Tools:

Find Event

Search past user events for additional context

Exa Web Search

Search the web for external context and enrichment
Content & Support Tools:

Agent Assets

Provide approved links, text, and images for agent outputs

Create Intercom Ticket

Escalate issues by creating support tickets
Configure tool inputs: Each tool has inputs that you can set as fixed values in the builder, or leave for the agent to determine at runtime. Use the tool prompt field to guide when and how the agent should use each tool.

Test your agent: Use the Test tab to run your agent against real or sample payloads. Select a target user, choose or edit an event payload, and click Run test to see how the agent behaves.

Publish: Click Publish changes to take your agent live. Use the Live toggle to control whether production events trigger the agent.

Pause: Toggle the Live switch off to stop the agent from processing production events while keeping the published configuration intact.
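One way to picture the fixed-vs-runtime split for tool inputs: builder-fixed values always win, and the agent only fills in the inputs you left open. The merge below is a hypothetical sketch of that behavior — the field names are illustrative, not Flywheel’s schema:

```python
# Hypothetical merge of builder-fixed inputs with agent-decided values.
# Fixed values override: the agent only controls inputs left open in
# the builder.
def resolve_tool_inputs(fixed_inputs, agent_inputs):
    resolved = dict(agent_inputs)   # agent's runtime choices first
    resolved.update(fixed_inputs)   # builder-fixed values take precedence
    return resolved

# Example: a Slack Message tool with a fixed channel but
# agent-authored message text.
fixed = {"channel": "#customer-success"}
agent = {"channel": "#random", "text": "Welcome aboard!"}
```

Under this model, even if the agent proposes a different channel, the builder-fixed `#customer-success` is what gets used.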

Builder tabs

Editor

Configure the selected trigger, agent node, or tool node. This is your primary workspace for building and iterating on agent behavior.

Runs

Inspect historical runs with full logs and debug timelines. Each run shows the agent’s reasoning, tool calls, inputs/outputs, and any errors encountered.

Test

Run controlled tests with selected users and event payloads before going live. Test mode writes a test run you can inspect in the Runs tab.

Publishing and lifecycle

Agent drafts and published state are separated to give you safe iteration:
  • Publish changes writes the current draft graph as the next live version
  • Discard changes resets draft state back to the last published version
  • Live toggle controls whether published logic is active for production events
Publishing and going live are separate actions. You can publish a new version without enabling it for production traffic.
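The separation above can be modeled as two independent pieces of state: a version pointer (draft vs. last published) and a live flag. This is just a conceptual model of the behavior described, not Flywheel internals:

```python
# Conceptual model of the agent lifecycle described above.
class AgentLifecycle:
    def __init__(self, graph):
        self.draft = graph
        self.published = None   # last published version, if any
        self.live = False       # whether production events trigger runs

    def publish_changes(self):
        self.published = self.draft   # draft becomes the next live version

    def discard_changes(self):
        self.draft = self.published   # reset draft to last published

    def set_live(self, on):
        self.live = on                # independent of publishing

    def handles_production_events(self):
        return self.live and self.published is not None
```

Because `set_live` and `publish_changes` are independent, you can publish a new version while keeping the agent paused, or pause without losing the published configuration — matching the behavior described above.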

Important things to know

Prompt quality matters most: The system prompt is the single biggest lever for agent quality. Be specific about the agent’s role, goals, constraints, tone, and what information to consider.

Test before going live: Always test your agent with representative users and edge-case payloads. Check the debug timeline to verify tool selection, reasoning quality, and output correctness.

Tool prompts guide selection: Each tool has its own prompt field. Use it to tell the agent when this specific tool should (and should not) be called. This prevents unnecessary tool usage and improves decision quality.

Monitor and iterate: Review run history regularly. Look for unexpected tool calls, poor reasoning, or missed opportunities. Refine your prompts and tool configurations based on real run data.

Next steps