Documentation Index

Fetch the complete documentation index at: https://docs.clarifeye.ai/llms.txt

Use this file to discover all available pages before exploring further.

Overview

The Agent defines how your AI assistant reasons over your warehouse and which tools it can call. Configuration is split across three tabs: General, Knowledge Schema, and Tools. All expert artifacts (briefs, playbooks, routings) attached to your project are automatically included in the agent context.

1) General

Set the high‑level behavior and model:
  • Name: display name for the agent
  • Description: optional summary for collaborators
  • Additional instructions: steering instructions the model should always follow
  • Conversation starter: optional first prompt shown to users
  • Model: choose the base model and reasoning level
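As a rough sketch, the General tab corresponds to a settings payload along these lines. All field names and values here are illustrative assumptions, not the exact API schema; inspect a real settings object (e.g. from `warehouse.list_agent_settings()`) for the actual shape:

```python
# Hypothetical shape of an agent's General settings.
# Field names and the model value are illustrative only.
general_settings = {
    "name": "Warehouse Analyst",
    "description": "Answers questions about inventory and orders",
    "additional_instructions": "Always cite the tables you queried.",
    "conversation_starter": "What would you like to know about the warehouse?",
    "model": {"base": "example-model", "reasoning_level": "medium"},
}
```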

2) Knowledge Schema

Control how much of the knowledge graph and tag hierarchies the agent can access:
  • Project brief: toggle injection of the project brief into the system context
  • Knowledge Graph: select whether to inject none, part, or all of the graph
  • Tags: inject all tag hierarchies or select subsets
Use these settings to constrain context to the most relevant parts of your warehouse.
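The three scope controls above can be pictured as a small configuration fragment. Again, the key names are assumptions for illustration; only the none/part/all choice for the graph comes from the description above:

```python
# Hypothetical knowledge-schema scope settings; key names are illustrative.
knowledge_schema = {
    "inject_project_brief": True,
    "knowledge_graph": "part",            # one of: "none", "part", "all"
    "tags": ["finance", "logistics"],     # a subset, or "all" hierarchies
}
```

Narrowing the graph and tag scope like this keeps the injected context focused on the parts of the warehouse the agent actually needs.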

3) Tools

Choose the tools the agent may call during a conversation. The list shows each tool’s name, type, and description. Remove or add tools to tailor capabilities to your use case.
Artifacts are included automatically—no extra setup is required beyond selecting the tools and schema scope.
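A tool selection might look like the following. The tool names, types, and descriptions here are made up for illustration; the list in the UI shows the real ones available to your project:

```python
# Hypothetical tool list; every entry here is illustrative.
enabled_tools = [
    {"name": "sql_query", "type": "query", "description": "Run read-only SQL"},
    {"name": "search_docs", "type": "retrieval", "description": "Search project docs"},
]
```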

Test in the Playground

Open the Playground tab to chat with your agent. You can inspect the steps it takes (reasoning traces when enabled) and see each tool call as it happens, which makes debugging and iteration fast.

API usage

Get agent settings and create a playground agent

Select the most recently used agent setting and instantiate a playground agent:
from datetime import datetime, timezone

agent_settings = warehouse.list_agent_settings()

def sort_key(s):
    # Settings that have never had a conversation sort as the oldest possible time.
    v = s.get("last_conversation_date")
    if not v:
        return datetime.min.replace(tzinfo=timezone.utc)
    # Older Pythons' fromisoformat() rejects a trailing "Z", so map it to "+00:00",
    # then compare everything in UTC.
    return datetime.fromisoformat(v.replace("Z", "+00:00")).astimezone(timezone.utc)

# Most recently used settings first.
agent_settings.sort(key=sort_key, reverse=True)
agent = warehouse.get_playground_agent(agent_settings[0]["id"])
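The sort-key logic can be verified on its own with plain dictionaries, without a warehouse connection. This standalone sketch uses sample timestamps to show that entries with no `last_conversation_date` sort last:

```python
from datetime import datetime, timezone

def sort_key(s):
    # Missing or empty dates sort as the oldest possible UTC time.
    v = s.get("last_conversation_date")
    if not v:
        return datetime.min.replace(tzinfo=timezone.utc)
    # Map the ISO 8601 "Z" suffix to "+00:00" for fromisoformat(), compare in UTC.
    return datetime.fromisoformat(v.replace("Z", "+00:00")).astimezone(timezone.utc)

settings = [
    {"id": "a", "last_conversation_date": "2024-05-01T10:00:00Z"},
    {"id": "b"},  # never used in a conversation
    {"id": "c", "last_conversation_date": "2024-06-15T08:30:00+02:00"},
]
settings.sort(key=sort_key, reverse=True)
print([s["id"] for s in settings])  # most recent first: ['c', 'a', 'b']
```

Note that "c" sorts first even though its offset is +02:00, because all timestamps are normalized to UTC before comparison.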

One-off question

For a single turn, create a conversation and send the message in one call:
result = agent.create_conversation_and_send_message("Your question here")
print(result["answer"])  # model's reply

Handle a multi-turn conversation

Create a conversation once, then send follow-ups to the same conversation ID:
conversation = agent.create_conversation()
answer = agent.send_message(conversation["id"], "First question")
print(answer)  # first reply

answer = agent.send_message(conversation["id"], "Follow-up question")
print(answer)  # follow-up reply
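To see the conversation-ID reuse pattern in isolation, here is a sketch with a stand-in agent object. The real `agent` comes from `warehouse.get_playground_agent`; this stub only mimics the two methods used above and returns canned strings:

```python
import uuid

class StubAgent:
    """Stand-in exposing the same two methods as the real client."""

    def create_conversation(self):
        # The real client creates a server-side conversation and returns its ID.
        return {"id": str(uuid.uuid4())}

    def send_message(self, conversation_id, text):
        # The real client sends `text` to that conversation and returns the reply.
        return f"(reply to {text!r} in conversation {conversation_id[:8]})"

agent = StubAgent()
conversation = agent.create_conversation()
first = agent.send_message(conversation["id"], "First question")
second = agent.send_message(conversation["id"], "Follow-up question")
# Both turns pass the same conversation ID, so the second call keeps context.
```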

Streaming

The API supports streaming responses. Please contact us to discuss the best streaming integration pattern for your systems and deployment environment.