Google Gemini Integration

Automatic tracing for Google Generative AI (Gemini) API calls.

Quick Start

import tracium
import google.generativeai as genai

# Enable auto-instrumentation
tracium.trace()

# Configure Google AI
genai.configure(api_key="your-api-key")

# Use Gemini normally - all calls are traced
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Explain AI in simple terms.")

What Gets Captured

  • Input - Prompts and content sent to Gemini
  • Model - gemini-pro, gemini-pro-vision, etc.
  • Output - Generated response
  • Token usage - Prompt and completion token counts (see the sketch after this list)
  • Safety ratings - Content safety scores
  • Latency - API call duration
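
Tracium records these fields for you, but you can also read them directly off the raw SDK response. A minimal sketch (the usage_metadata and safety_ratings attributes come from the google-generativeai SDK; tracium's own accessors may differ):

import google.generativeai as genai

genai.configure(api_key="your-api-key")
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Explain AI in simple terms.")

# Token counts live on the response's usage metadata
print(response.usage_metadata.prompt_token_count)
print(response.usage_metadata.candidates_token_count)

# Safety ratings are attached to each candidate
for rating in response.candidates[0].safety_ratings:
    print(rating.category, rating.probability)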

Chat Sessions

import tracium
import google.generativeai as genai

tracium.trace()
genai.configure(api_key="your-api-key")

model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])

# Each message in the chat is traced
response = chat.send_message("Hi! Tell me about yourself.")
response = chat.send_message("What can you help me with?")

Multimodal (Vision)

import tracium
import google.generativeai as genai
from PIL import Image

tracium.trace()
genai.configure(api_key="your-api-key")

model = genai.GenerativeModel("gemini-pro-vision")

# Load an image
image = Image.open("image.jpg")

# Multimodal calls are traced
response = model.generate_content([
    "What's in this image?",
    image,
])
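
A PIL image is one way to supply the visual input; the SDK also accepts raw bytes as a mime-typed blob, and such calls are traced the same way. A short sketch (the blob dict format is the google-generativeai SDK's):

# Pass raw image bytes instead of a PIL image
with open("image.jpg", "rb") as f:
    image_bytes = f.read()

response = model.generate_content([
    "What's in this image?",
    {"mime_type": "image/jpeg", "data": image_bytes},
])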

Streaming

import tracium
import google.generativeai as genai

tracium.trace()
genai.configure(api_key="your-api-key")

model = genai.GenerativeModel("gemini-pro")

# Streaming is supported
response = model.generate_content(
    "Write a long story about space exploration.",
    stream=True,
)

for chunk in response:
    print(chunk.text, end="")
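
Once the loop has consumed every chunk, the SDK aggregates them into a complete response, so the full text remains available in one place (the aggregation is SDK behavior; exactly when tracium finalizes the trace for a streamed call is an assumption here):

# After iteration completes, the aggregated response is available
print(response.text)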