# Quick Start
Get up and running with Tracium in under 5 minutes.
## 1. Install the SDK
```bash
pip install tracium
```

## 2. Set Your API Key
```bash
export TRACIUM_API_KEY="sk_live_your_api_key"
```
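If tracing doesn't start later, a common cause is that the variable isn't visible to your Python process. A quick standard-library check (nothing Tracium-specific) confirms it:

```python
import os

# Prints a redacted prefix if the key is set, or a warning if it isn't
key = os.environ.get("TRACIUM_API_KEY")
print(f"Key loaded: {key[:8]}..." if key else "TRACIUM_API_KEY is not set")
```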
## 3. Add One Line of Code

Choose your preferred setup method:
### Option A: Auto-Instrumentation (Recommended)
This is the simplest approach; a single line enables tracing for all supported libraries:
app.py
```python
import tracium

# Enable automatic tracing for all LLM calls
tracium.trace()

# Now all OpenAI, Anthropic, Google AI calls are automatically traced
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello, world!"}],
)
```
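Since the same one-liner covers the other supported providers, no extra code is needed for, say, the Anthropic SDK. A minimal sketch (the model name is illustrative):

```python
import tracium

# Same single call; Anthropic traffic is traced automatically too
tracium.trace()

from anthropic import Anthropic

client = Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(message.content[0].text)
```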
### Option B: Explicit Initialization

For more control over the client configuration:
app.py
```python
import tracium

# Initialize with custom settings
client = tracium.init(
    api_key="sk_live_...",  # Or use the TRACIUM_API_KEY env var
    default_agent_name="my-agent",
    default_tags=["production", "v1.0"],
)

# Enable auto-instrumentation
client.trace()
```
## 4. Run Your Application

That's it! Your LLM calls are now being traced. Here's a complete example:
example.py
```python
import tracium
from openai import OpenAI

# Initialize Tracium
tracium.trace()

# Create OpenAI client
openai_client = OpenAI()


def chat_with_gpt(message: str) -> str:
    """Send a message to GPT and get a response."""
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content


# This call is automatically traced with:
# - Input messages
# - Model used
# - Token usage
# - Response content
# - Latency
result = chat_with_gpt("What is the capital of France?")
print(result)
```
## Manual Tracing

For custom logic or non-LLM operations, use manual traces and spans:
manual_example.py
```python
import tracium

client = tracium.init()

# Create a trace for a complete workflow
with client.agent_trace(agent_name="customer-support") as trace:
    # Span for retrieving context
    with trace.span(span_type="retrieval", name="fetch_customer_data") as span:
        span.record_input({"customer_id": "12345"})
        customer_data = fetch_customer_data("12345")
        span.record_output(customer_data)

    # Span for the LLM call
    with trace.span(span_type="llm", name="generate_response") as span:
        span.record_input({"context": customer_data, "query": "..."})
        response = generate_response(customer_data)
        span.record_output({"response": response})
        span.set_token_usage(
            input_tokens=150,
            output_tokens=200,
        )
```
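The snippet above assumes `fetch_customer_data` and `generate_response` exist elsewhere in your codebase. Purely illustrative stubs like these make it runnable end to end:

```python
def fetch_customer_data(customer_id: str) -> dict:
    # Stand-in for a real database or CRM lookup
    return {"customer_id": customer_id, "plan": "pro", "open_tickets": 2}


def generate_response(customer_data: dict) -> str:
    # Stand-in for a real LLM call that uses the retrieved context
    return f"Hi! I can see you're on the {customer_data['plan']} plan."
```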
## Using Decorators

Decorate functions to automatically create traces and spans:
decorators.py
```python
import tracium
from tracium import agent_trace, agent_span

client = tracium.init()


@agent_trace(client=client, agent_name="support-bot")
def handle_support_request(request: str):
    context = retrieve_context(request)
    return generate_response(context)


@agent_span(span_type="retrieval", name="retrieve_context")
def retrieve_context(query: str):
    # Your retrieval logic
    return {"relevant_docs": [...]}


@agent_span(span_type="llm", name="generate_response")
def generate_response(context: dict):
    # Your LLM call
    return "Response based on context"
```
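Calling the decorated entry point then produces a single trace with the two spans nested inside it. The request text here is just an example:

```python
# One "support-bot" trace containing a retrieval span and an LLM span
result = handle_support_request("How do I reset my password?")
print(result)
```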
## Web Framework Integration

Tracium automatically instruments popular web frameworks:
### FastAPI
fastapi_app.py
```python
import tracium
from fastapi import FastAPI
from openai import OpenAI

# Initialize Tracium first
tracium.trace()

app = FastAPI()
openai_client = OpenAI()


@app.post("/chat")
async def chat(message: str):
    # This entire request is traced, including the LLM call
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": message}],
    )
    return {"response": response.choices[0].message.content}
```
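You can sanity-check the endpoint without deploying anything using FastAPI's built-in test client. The `fastapi_app` import path depends on your file name, and `message` arrives as a query parameter because the handler declares a bare `str`:

```python
from fastapi.testclient import TestClient

from fastapi_app import app  # import path matches your file name

client = TestClient(app)
resp = client.post("/chat", params={"message": "Hello!"})
print(resp.json())
```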
### Flask

flask_app.py
```python
import tracium
from flask import Flask, request
from openai import OpenAI

tracium.trace()

app = Flask(__name__)
openai_client = OpenAI()


@app.route("/chat", methods=["POST"])
def chat():
    message = request.json["message"]
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": message}],
    )
    return {"response": response.choices[0].message.content}
```
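The Flask version can be exercised the same way with the framework's test client (again, the `flask_app` import path matches your file name):

```python
from flask_app import app  # import path matches your file name

with app.test_client() as client:
    resp = client.post("/chat", json={"message": "Hello!"})
    print(resp.get_json())
```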