Quick Start

Get up and running with Tracium in under 5 minutes.

1. Install the SDK

pip install tracium

2. Set Your API Key

export TRACIUM_API_KEY="sk_live_your_api_key"
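
If you can't export environment variables (in a notebook, for example), you can set the key from Python instead. This is a minimal sketch that assumes the SDK reads TRACIUM_API_KEY at initialization time, as the explicit-initialization example below indicates:

import os

# Alternative to `export`: set the key before initializing Tracium.
# Assumes the SDK reads TRACIUM_API_KEY when trace()/init() runs.
os.environ["TRACIUM_API_KEY"] = "sk_live_your_api_key"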

3. Add One Line of Code

Choose your preferred setup method:

Option A: Auto-Instrumentation (Recommended)

This is the simplest approach; one line enables tracing for all supported libraries:

app.py

import tracium

# Enable automatic tracing for all LLM calls
tracium.trace()

# Now all OpenAI, Anthropic, and Google AI calls are automatically traced
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello, world!"}]
)

Option B: Explicit Initialization

For more control over the client configuration:

app.py

import tracium

# Initialize with custom settings
client = tracium.init(
    api_key="sk_live_...",  # Or use the TRACIUM_API_KEY env var
    default_agent_name="my-agent",
    default_tags=["production", "v1.0"],
)

# Enable auto-instrumentation
client.trace()
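
Both options enable the same auto-instrumentation; the practical difference is that init() returns a client object you can reuse for manual traces and decorators (see below), while the module-level tracium.trace() presumably configures a default client from the TRACIUM_API_KEY environment variable.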

4. Run Your Application

That's it! Your LLM calls are now being traced. Here's a complete example:

example.py

import tracium
from openai import OpenAI

# Initialize Tracium
tracium.trace()

# Create an OpenAI client
openai_client = OpenAI()

def chat_with_gpt(message: str) -> str:
    """Send a message to GPT and get a response."""
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message}
        ]
    )
    return response.choices[0].message.content

# This call is automatically traced with:
# - Input messages
# - Model used
# - Token usage
# - Response content
# - Latency
result = chat_with_gpt("What is the capital of France?")
print(result)
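
Assuming both TRACIUM_API_KEY and OPENAI_API_KEY are set in your shell, run the script as usual:

python example.py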

Manual Tracing

For custom logic or non-LLM operations, use manual traces and spans:

manual_example.py

import tracium

client = tracium.init()

# Create a trace for a complete workflow
with client.agent_trace(agent_name="customer-support") as trace:
    # Span for retrieving context
    with trace.span(span_type="retrieval", name="fetch_customer_data") as span:
        span.record_input({"customer_id": "12345"})
        customer_data = fetch_customer_data("12345")
        span.record_output(customer_data)

    # Span for the LLM call
    with trace.span(span_type="llm", name="generate_response") as span:
        span.record_input({"context": customer_data, "query": "..."})
        response = generate_response(customer_data)
        span.record_output({"response": response})
        span.set_token_usage(
            input_tokens=150,
            output_tokens=200
        )
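
Here, fetch_customer_data and generate_response are placeholders for your own application logic; the spans simply record whatever inputs and outputs you pass them. Supplying counts to set_token_usage yourself is useful when the LLM call isn't covered by auto-instrumentation.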

Using Decorators

Decorate functions to automatically create traces and spans:

decorators.py

import tracium
from tracium import agent_trace, agent_span

client = tracium.init()

@agent_trace(client=client, agent_name="support-bot")
def handle_support_request(request: str):
    context = retrieve_context(request)
    return generate_response(context)

@agent_span(span_type="retrieval", name="retrieve_context")
def retrieve_context(query: str):
    # Your retrieval logic
    return {"relevant_docs": [...]}

@agent_span(span_type="llm", name="generate_response")
def generate_response(context: dict):
    # Your LLM call
    return "Response based on context"

Web Framework Integration

Tracium automatically instruments popular web frameworks:

FastAPI

fastapi_app.py

import tracium
from fastapi import FastAPI
from openai import OpenAI

# Initialize Tracium first
tracium.trace()

app = FastAPI()
openai_client = OpenAI()

@app.post("/chat")
async def chat(message: str):
    # This entire request is traced, including the LLM call
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": message}]
    )
    return {"response": response.choices[0].message.content}

Flask

flask_app.py

import tracium
from flask import Flask, request
from openai import OpenAI

tracium.trace()

app = Flask(__name__)
openai_client = OpenAI()

@app.route("/chat", methods=["POST"])
def chat():
    message = request.json["message"]
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": message}]
    )
    return {"response": response.choices[0].message.content}

Next Steps