Anthropic Integration
Automatic tracing for Anthropic Claude API calls.
Quick Start
Prerequisite: set the TRACIUM_API_KEY environment variable (see Installation).
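Before enabling tracing, it can help to fail fast when the key is absent. The helper below is our own sketch (the function name is not part of tracium); the only assumption is that tracium reads the TRACIUM_API_KEY variable named above:

```python
import os

def require_tracium_key() -> str:
    """Return TRACIUM_API_KEY, raising early if it is not set."""
    key = os.environ.get("TRACIUM_API_KEY")
    if not key:
        raise RuntimeError("Set TRACIUM_API_KEY before enabling tracing")
    return key
```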
```python
import tracium

# Enable auto-instrumentation
tracium.trace()

# Import clients after enabling tracing
import anthropic

# Use Anthropic normally - all calls are traced
client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude!"}
    ]
)
```
What Gets Captured
- Input messages - The messages array sent to Claude
- System prompt - If provided
- Model - claude-3-opus, claude-3-sonnet, etc.
- Output - The response content
- Token usage - Input and output tokens
- Latency - API call duration
- Stop reason - Why generation stopped
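Purely for illustration, a single captured call could be summarized as a record like the one below. The field names mirror the list above but are our own invention, not Tracium's actual schema, and the values are made up:

```python
# Illustrative only: field names follow the list above, not Tracium's real schema.
captured_call = {
    "input_messages": [{"role": "user", "content": "Hello, Claude!"}],
    "system_prompt": None,
    "model": "claude-3-opus-20240229",
    "output": "Hello! How can I help you today?",
    "usage": {"input_tokens": 12, "output_tokens": 9},
    "latency_ms": 840,
    "stop_reason": "end_turn",
}
```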
Streaming
```python
import tracium

tracium.trace()

import anthropic

client = anthropic.Anthropic()

# Streaming is fully supported
with client.messages.stream(
    model="claude-3-sonnet-20240229",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a story."}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```
Async Support
```python
import tracium
import asyncio

tracium.trace()

import anthropic

client = anthropic.AsyncAnthropic()

async def chat(prompt: str) -> str:
    message = await client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}]
    )
    return message.content[0].text

result = asyncio.run(chat("Explain quantum computing."))
```
Tool Use
```python
import tracium

tracium.trace()

import anthropic

client = anthropic.Anthropic()

# Tool definitions
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name"
                }
            },
            "required": ["location"]
        }
    }
]

# Tool use is captured in the trace
message = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What's the weather in Paris?"}
    ]
)
```
Multi-Turn Conversations
```python
import tracium

client = tracium.init()

import anthropic

anthropic_client = anthropic.Anthropic()

with client.agent_trace(agent_name="claude-chat") as trace:
    messages = []

    # First turn
    messages.append({"role": "user", "content": "Hi, I'm learning Python."})
    response = anthropic_client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=messages
    )
    messages.append({"role": "assistant", "content": response.content[0].text})

    # Second turn - all calls are linked in the same trace
    messages.append({"role": "user", "content": "Can you show me a loop example?"})
    response = anthropic_client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=messages
    )
```
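Each turn repeats the same call shape; if a conversation grows, that pattern can be factored into a small helper. `ask` below is our own sketch, not a tracium or anthropic API, and works with any client object exposing `messages.create`:

```python
# `ask` is a hypothetical helper of ours, not part of tracium or anthropic.
def ask(client, messages, prompt, model="claude-3-sonnet-20240229"):
    """Append a user turn, call Claude, record and return the assistant reply."""
    messages.append({"role": "user", "content": prompt})
    response = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=messages,
    )
    reply = response.content[0].text
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Because the real Anthropic client is passed in unchanged, calls made through the helper should still flow through the auto-instrumentation.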