Anthropic Integration

Automatic tracing for Anthropic Claude API calls.

Quick Start

import tracium
import anthropic

# Enable auto-instrumentation
tracium.trace()

# Use Anthropic normally - all calls are traced
client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude!"}
    ],
)

What Gets Captured

  • Input messages - The messages array sent to Claude
  • System prompt - If provided
  • Model - claude-3-opus, claude-3-sonnet, etc.
  • Output - The response content
  • Token usage - Input and output tokens
  • Latency - API call duration
  • Stop reason - Why generation stopped

Streaming

import tracium
import anthropic

tracium.trace()
client = anthropic.Anthropic()

# Streaming is fully supported
with client.messages.stream(
    model="claude-3-sonnet-20240229",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a story."}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
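With streaming, the final output only exists once the chunks are reassembled, so a tracer has to accumulate the text deltas as they arrive. A runnable sketch of that accumulation with simulated chunks (no API call involved):

```python
# Simulated text deltas, standing in for what stream.text_stream yields.
chunks = ["Once ", "upon ", "a ", "time."]

full_text = ""
for text in chunks:
    # A tracer can accumulate deltas like this to record the full output.
    full_text += text

print(full_text)  # -> Once upon a time.
```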

Async Support

import tracium
import anthropic
import asyncio

tracium.trace()
client = anthropic.AsyncAnthropic()

async def chat(prompt: str) -> str:
    message = await client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

result = asyncio.run(chat("Explain quantum computing."))
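Since each awaited call is presumably traced on its own, the same pattern extends to concurrent requests via asyncio.gather. A self-contained sketch with a stub coroutine standing in for the traced client call (fake_chat is a placeholder, not part of tracium or anthropic):

```python
import asyncio


async def fake_chat(prompt: str) -> str:
    # Stand-in for the traced client.messages.create call.
    await asyncio.sleep(0)
    return f"answer to: {prompt}"


async def main() -> list[str]:
    # Each concurrent call would get its own trace entry.
    return await asyncio.gather(
        fake_chat("What is Python?"),
        fake_chat("What is asyncio?"),
    )


results = asyncio.run(main())
```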

Tool Use

import tracium
import anthropic

tracium.trace()
client = anthropic.Anthropic()

# Tool definitions
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name"
                }
            },
            "required": ["location"]
        }
    }
]

# Tool use is captured in the trace
message = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
)
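When Claude decides to call a tool, the call arrives as a tool_use block in the response content alongside any text blocks, which is what the trace records. A minimal sketch of pulling the call out, using simulated response data shaped like the Messages API rather than a live response:

```python
# Simulated response content, shaped like the Anthropic Messages API.
content = [
    {"type": "text", "text": "I'll check the weather for you."},
    {
        "type": "tool_use",
        "id": "toolu_01",
        "name": "get_weather",
        "input": {"location": "Paris"},
    },
]

# Extract the tool calls the model requested.
tool_calls = [block for block in content if block["type"] == "tool_use"]
for call in tool_calls:
    print(call["name"], call["input"])
```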

Multi-Turn Conversations

import tracium
import anthropic

client = tracium.init()
anthropic_client = anthropic.Anthropic()

with client.agent_trace(agent_name="claude-chat") as trace:
    messages = []

    # First turn
    messages.append({"role": "user", "content": "Hi, I'm learning Python."})
    response = anthropic_client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=messages,
    )
    messages.append({"role": "assistant", "content": response.content[0].text})

    # Second turn - all calls are linked in the same trace
    messages.append({"role": "user", "content": "Can you show me a loop example?"})
    response = anthropic_client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=messages,
    )