Agentic Workgroups

Agent Configuration

Create and configure agents within workgroups with LLM settings, tools, and integrations.

Agent Configuration interface

Workgroups can be created with Manual Setup or Create with AI; both lead to the same agent configuration screens once the workgroup exists.

Create Workgroup

A workgroup is the container that holds your agents.

  1. Navigate to the Main Dashboard and click Create Workgroup
  2. Enter Workgroup Name and Description
  3. Choose architecture:
    • Single-Agent: Standalone agent for focused tasks
    • Multi-Agent: Multiple agents working collaboratively (requires a supervisor agent)
  4. Click Create

Note: Workgroup architecture cannot be changed after creation. To switch types, create a new workgroup and migrate your agent configurations.

Agent Configuration

1. Agent Information

  • Agent Name (required): Unique identifier. No special characters; use letters, numbers, spaces, and underscores only.
  • Agent Description (required): Purpose and responsibilities of the agent.
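The naming rule above can be expressed as a simple pattern check. This is an illustrative sketch, not the platform's actual validation logic:

```python
import re

# Allowed characters per the naming rule: letters, numbers, spaces, underscores.
AGENT_NAME_PATTERN = re.compile(r"^[A-Za-z0-9_ ]+$")

def is_valid_agent_name(name: str) -> bool:
    """Return True if the name uses only the allowed characters."""
    return bool(AGENT_NAME_PATTERN.fullmatch(name))

print(is_valid_agent_name("Support_Agent 2"))  # True: allowed characters only
print(is_valid_agent_name("Support-Agent!"))   # False: hyphen and '!' are special characters
```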

2. LLM Configuration

LLM Configuration interface

Provider Selection: Anthropic, OpenAI, Azure OpenAI, Google Vertex AI, Together AI

Model Selection: Choose from available models or select Dynamic Model to specify via {{variable_name}} format.

Dynamic Variables: Use {{variable_name}} format for runtime values:

  • {{api_key}} - API key
  • {{model_name}} - Model name (required for Dynamic Model)
  • {{llmConfig}} - JSON object with LLM parameters
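The {{llmConfig}} variable expects a JSON object of LLM parameters. A hypothetical payload is sketched below; the exact keys your deployment accepts depend on the selected provider and model:

```python
import json

# Hypothetical llmConfig payload; key names are illustrative and
# should match the parameters supported by your chosen provider.
llm_config = {
    "temperature": 0.2,
    "top_p": 0.9,
    "max_tokens": 1024,
}

# Serialize to JSON before supplying it as the {{llmConfig}} runtime value.
print(json.dumps(llm_config))
```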

Advanced Settings (click gear icon):

LLM Advanced Settings

| Setting | Description |
| --- | --- |
| Temperature | Response randomness (0+) |
| Top-P | Nucleus sampling (0-1) |
| Reasoning Effort | Model reasoning effort (0-1) |
| Verbosity | Response verbosity (0-1) |
| Reasoning Summary | Summary level (0-1) |

Leave blank to use model defaults.

Provider-Specific Fields:

  • Google Vertex AI: Service Account Info ({{service_account_credentials}}), Project, Location
  • Azure OpenAI: Azure Endpoint, API Version, Azure Deployment

3. LLM Guards

Security mechanisms for input/output validation:

LLM Guards

| Guard | Purpose |
| --- | --- |
| Ban Substrings | Block specific text patterns |
| Toxicity | Detect harmful content |
| Prompt Injection | Detect malicious prompts |
| Invisible Text | Detect hidden characters |
| Task Completion | Verify task completion |

Guards can be configured as Input Scanner or Output Scanner.
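Conceptually, an input scanner evaluates a message before it reaches the model and blocks it on failure. A minimal Ban Substrings sketch is shown below; it is illustrative only, not the platform's implementation:

```python
def ban_substrings_scanner(text: str, banned: list[str]) -> bool:
    """Return True if the input passes, i.e. contains none of the banned substrings."""
    lowered = text.lower()
    return not any(term.lower() in lowered for term in banned)

# Hypothetical banned terms for an input scanner.
banned_terms = ["ssn", "credit card"]
print(ban_substrings_scanner("What is the weather today?", banned_terms))   # True: passes
print(ban_substrings_scanner("Store my credit card number", banned_terms))  # False: blocked
```

The same check applied to the model's response instead of the user's message would act as an Output Scanner.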

4. Agent Instructions

  • System Prompt (required): Instructions defining agent behavior. Supports dynamic variables with {{variable_name}} format.
  • Is Supervisor Agent: Enable for multi-agent hierarchy coordination.
  • Max Iterations: Limit execution cycles (default: 10).
  • Output Type: Text or JSON.
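Dynamic variables in the system prompt are substituted with runtime values. A sketch of {{variable_name}} substitution follows; the platform's renderer may differ, for example in how it treats missing variables:

```python
import re

def render_prompt(template: str, variables: dict[str, str]) -> str:
    """Replace each {{name}} placeholder with its runtime value."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        # Assumption: unknown placeholders are left untouched.
        return variables.get(name, match.group(0))
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

prompt = "You are {{agent_role}}. Answer in {{language}}."
print(render_prompt(prompt, {"agent_role": "a billing assistant", "language": "French"}))
# → You are a billing assistant. Answer in French.
```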

5. Tools

Add tools for the agent to perform tasks. Select from available enterprise integrations.

See Tools Overview for the full list of 50+ available tools.

6. MCP Clients (Pro)

Add MCP clients to extend agent capabilities with custom integrations.

  1. Click Add MCP Client
  2. Select from configured MCP clients
  3. Choose which tools to include/exclude
MCP Clients interface

See MCP Clients for setup and configuration.

7. RAG Knowledge Base (Pro)

Connect knowledge bases for retrieval-augmented generation.

  1. Select RAG Knowledge Base from dropdown
  2. Configure: description, retrieval type, threshold, chunk count
RAG Knowledge Base configuration

See Knowledge Base for managing document collections.
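The threshold and chunk count configured above control which retrieved chunks reach the agent. The filtering step can be sketched as follows (score names and the 0-1 similarity scale are assumptions):

```python
def select_chunks(scored_chunks: list[tuple[str, float]],
                  threshold: float = 0.7,
                  chunk_count: int = 3) -> list[str]:
    """Keep chunks whose similarity score meets the threshold, then take the top N."""
    passing = [c for c in scored_chunks if c[1] >= threshold]
    passing.sort(key=lambda c: c[1], reverse=True)
    return [text for text, _ in passing[:chunk_count]]

# Hypothetical retrieval results as (chunk, similarity score) pairs.
chunks = [("refund policy", 0.91), ("shipping times", 0.65), ("warranty terms", 0.78)]
print(select_chunks(chunks))  # → ['refund policy', 'warranty terms']
```

Raising the threshold trades recall for precision; raising the chunk count gives the agent more context at the cost of longer prompts.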

Best Practices

  • Agent Names: Use descriptive names reflecting the agent's role
  • System Prompts: Be specific, include examples, define expected output format
  • Guards: Add appropriate input/output validation
  • Iterations: Set reasonable limits to prevent infinite loops