Automatic Tracing for LLM Applications
Add observability to your AI applications with one line of code. Tracium automatically traces OpenAI, Anthropic, Google AI, LangChain, and more.
Quick Install
```bash
pip install tracium
```

One-Line Setup
Add tracing to your entire application with a single function call:
```python
import tracium

# Initialize with auto-instrumentation
tracium.trace(api_key="sk_live_...")

# That's it! All LLM calls are now traced
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
# ^ This call is automatically traced
```

Why Tracium?
Zero Configuration
Auto-instrumentation for OpenAI, Anthropic, Google AI, LangChain, and LangGraph out of the box.
Web Framework Support
Built-in support for FastAPI, Flask, Django, and Celery with automatic request tracing.
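As a minimal sketch of what this looks like with FastAPI: assuming the one-line `tracium.trace()` setup is all that is needed for the framework to be picked up (the route and the OpenAI call below are illustrative, not part of Tracium's API), each incoming request and the LLM calls it makes are traced together:

```python
import tracium
from fastapi import FastAPI
from openai import OpenAI

tracium.trace(api_key="sk_live_...")  # one-time setup at startup

app = FastAPI()
client = OpenAI()

@app.post("/ask")
def ask(question: str) -> dict:
    # Both this request and the LLM call below are captured automatically
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return {"answer": response.choices[0].message.content}
```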
Fail-Safe Design
SDK errors never break your application. All operations are non-blocking and fail gracefully.
Token Tracking
Automatic token usage tracking for all supported LLM providers. Monitor costs in real time.
Context Propagation
Automatic context propagation across threads and async boundaries. No manual work required.
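A sketch of what this means in practice, assuming the async OpenAI client is covered by the same auto-instrumentation as the sync one: concurrent tasks stay correctly attributed with no explicit context passing.

```python
import asyncio

import tracium
from openai import AsyncOpenAI

tracium.trace(api_key="sk_live_...")
client = AsyncOpenAI()

async def answer(question: str) -> str:
    # Each call runs in its own task; the trace context follows it
    # across await boundaries without any manual plumbing.
    response = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

async def main() -> None:
    results = await asyncio.gather(
        answer("What is a trace?"),
        answer("What is a span?"),
    )
    print(results)

asyncio.run(main())
```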
Queue Management
Background queue with configurable settings to prevent event loss under high load.
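A configuration sketch only: the parameter names below (`queue_max_size`, `flush_interval`) are hypothetical stand-ins, since the exact settings are not shown here; consult the configuration reference for the real names.

```python
import tracium

tracium.trace(
    api_key="sk_live_...",
    # Hypothetical parameter names for illustration only; check the
    # configuration reference for Tracium's actual settings.
    queue_max_size=10_000,  # cap on buffered events before back-pressure
    flush_interval=5.0,     # seconds between background flushes
)
```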
Supported Integrations
Tracium automatically detects and instruments these libraries:

- LLM providers: OpenAI, Anthropic, Google AI
- Agent frameworks: LangChain, LangGraph
- Web frameworks and task queues: FastAPI, Flask, Django, Celery
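Any of these can be swapped in without touching the tracing setup. A minimal sketch using Anthropic's client (the model name here is illustrative):

```python
import anthropic
import tracium

tracium.trace(api_key="sk_live_...")

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello!"}],
)
# ^ Auto-traced, with token usage recorded from the response
```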
Manual Tracing
For more control, use explicit traces and spans:
```python
import tracium

client = tracium.init(api_key="sk_live_...")

with client.agent_trace(agent_name="support-bot") as trace:
    query = "How do I reset my password?"

    # Record a planning span
    with trace.span(span_type="plan", name="analyze_request") as span:
        span.record_input({"query": query})
        plan = analyze_user_request(query)  # your own planning logic
        span.record_output({"plan": plan})

    # Record an LLM call span
    with trace.span(span_type="llm", name="generate_response") as span:
        span.record_input({"prompt": plan})
        response = generate_response(plan)  # your own LLM call
        span.record_output({"response": response})
        span.set_token_usage(input_tokens=150, output_tokens=200)
```