What Is an Agent

Agents are specialized AI workers that Thallus dispatches to handle specific parts of your query. Each agent has its own set of tools, a focused system prompt, and a model configuration tuned for its job. When the orchestration pipeline creates a plan, each step is assigned to the agent best suited for the task.

Agent clusters

Agents are organized into four clusters based on what they do:

  • Research — Web search, document analysis, and deep multi-source investigation
  • Data — Database exploration, SQL/NoSQL query generation, and business intelligence analytics
  • Productivity — Email, calendars, files, project management, and task tracking via connected services
  • Communication — Messaging platforms like Slack for channel and message operations

Each cluster groups agents that share a similar domain. The planner knows which cluster an agent belongs to and uses that to match agents to plan steps. For the full list, see Available Agents.
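As a rough illustration, cluster-to-agent matching can be modeled as a lookup table. This is a hypothetical sketch, not Thallus's actual implementation; the agent names other than web_research are invented for the example.

```python
# Hypothetical cluster-to-agent mapping a planner might consult.
AGENT_CLUSTERS = {
    "research": ["web_research", "document_analysis"],
    "data": ["sql_query", "analytics"],
    "productivity": ["email", "calendar"],
    "communication": ["slack"],
}

def agents_for_step(cluster: str) -> list[str]:
    """Return the candidate agents for a plan step's cluster."""
    return AGENT_CLUSTERS.get(cluster, [])
```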

Model tiers

Every agent is assigned a model tier that determines which LLM it uses:

  • Fast — Speed-critical tasks like classification and integration calls
  • Medium — Balanced tasks like research, data queries, and analysis
  • Large — Deep analysis, complex reasoning, and synthesis

Thallus assigns faster models to simple tasks and more capable models to complex analysis. The tier system allows agents to be upgraded independently as newer models become available. Organizations using Bring Your Own Key can override the default model assignments.

The agentic loop

When an agent receives a task from the planner, it doesn't just make one LLM call. It runs an iterative tool-use loop:

Planner instruction → LLM decides action → Execute tools → Check results → Result returned

The middle three steps (LLM → Tools → Check) repeat up to the agent's iteration limit.

Here's what happens at each step:

  1. Planner instruction — The agent receives a task description plus context from the conversation board (schemas, document catalog, prior results)
  2. LLM decides action — The agent's LLM examines the task and available tools, then requests one or more tool calls
  3. Execute tools — All requested tool calls run in parallel. Results are appended to the conversation
  4. Check results — The LLM reviews tool outputs. If it needs more information, it loops back and calls more tools. If it has enough, it composes a final answer
  5. Result returned — The agent returns a structured result with its response, citations, confidence score, and metadata

Each agent has its own iteration limit, calibrated for the complexity of its typical tasks. This prevents runaway loops while giving complex tasks enough room to complete.
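The loop above can be sketched in a few lines. This is a minimal illustration, not Thallus's implementation: the message shape and action format are assumed, and tool calls run sequentially here rather than in parallel.

```python
# Hypothetical sketch of the iterative tool-use loop.
def run_agent(task: str, llm, tools: dict, max_iterations: int = 5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_iterations):
        action = llm(messages)                    # LLM decides action
        if action["type"] == "final_answer":      # enough info: compose answer
            return action["content"]
        for call in action["tool_calls"]:         # execute requested tools
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})
    return None  # iteration limit reached without a final answer
```

The `max_iterations` cap plays the role of the per-agent iteration limit: it bounds cost on runaway loops while leaving room for multi-step tasks.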

Agent results

Every agent returns a structured result that the orchestrator uses for evaluation and synthesis:

Example result:
  • Agent — web_research
  • Tools used — web_search, fetch_and_extract
  • Citations — 3 sources
  • Confidence — 0.85
  • Execution time — 4.2s

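A result like the one above could be modeled as a small dataclass. The field names and metadata shape are assumptions for illustration; Thallus's actual schema may differ.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a structured agent result.
@dataclass
class AgentResult:
    agent: str
    response: str
    citations: list[str] = field(default_factory=list)
    confidence: float = 0.0
    metadata: dict = field(default_factory=dict)

result = AgentResult(
    agent="web_research",
    response="...",
    citations=["source-1", "source-2", "source-3"],
    confidence=0.85,
    metadata={"tools_used": ["web_search", "fetch_and_extract"], "execution_s": 4.2},
)
```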

Confidence scoring

Agents assess their confidence in each answer. Low-confidence results can trigger replanning — adding new steps to fill gaps rather than presenting an incomplete answer.
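The replanning trigger amounts to a threshold check. The 0.6 cutoff below is an invented example value, not a documented Thallus default.

```python
CONFIDENCE_THRESHOLD = 0.6  # assumed value for illustration

def needs_replanning(confidence: float, threshold: float = CONFIDENCE_THRESHOLD) -> bool:
    """Low-confidence results prompt the planner to add gap-filling steps."""
    return confidence < threshold
```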

Next steps