Google Gemini Integration
Automatic tracing for Google Generative AI (Gemini) API calls.
Quick Start
Prerequisite: set the TRACIUM_API_KEY environment variable (see Installation).
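One way to satisfy the prerequisite, assuming a POSIX shell (the key value is a placeholder):

```shell
# Set the Tracium API key before starting your Python process
export TRACIUM_API_KEY="your-tracium-api-key"
```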
```python
import tracium

# Enable auto-instrumentation
tracium.trace()

# Import clients after enabling tracing
import google.generativeai as genai

# Configure Google AI
genai.configure(api_key="your-api-key")

# Use Gemini normally - all calls are traced
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Explain AI in simple terms.")
```

What Gets Captured
- Input - Prompts and content sent to Gemini
- Model - gemini-pro, gemini-pro-vision, etc.
- Output - Generated response
- Token usage - Prompt and completion tokens
- Safety ratings - Content safety scores
- Latency - API call duration
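As a rough mental model, the fields above could be assembled into a per-call summary like the one below. The function and field names here are illustrative assumptions, not Tracium's actual schema.

```python
import time


def summarize_call(prompt: str, model: str, output: str,
                   prompt_tokens: int, completion_tokens: int,
                   safety_ratings: dict, started: float, finished: float) -> dict:
    """Assemble a trace-style summary of one Gemini call (illustrative only)."""
    return {
        "input": prompt,
        "model": model,
        "output": output,
        "usage": {
            "prompt_tokens": prompt_tokens,
            "completion_tokens": completion_tokens,
            "total_tokens": prompt_tokens + completion_tokens,
        },
        "safety_ratings": safety_ratings,
        "latency_ms": round((finished - started) * 1000, 1),
    }


# Hypothetical values standing in for a real traced call
record = summarize_call(
    prompt="Explain AI in simple terms.",
    model="gemini-pro",
    output="AI is software that learns patterns from data...",
    prompt_tokens=7,
    completion_tokens=42,
    safety_ratings={"HARM_CATEGORY_DANGEROUS_CONTENT": "NEGLIGIBLE"},
    started=0.0,
    finished=0.85,
)
print(record["usage"]["total_tokens"], record["latency_ms"])  # 49 850.0
```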
Chat Sessions
```python
import tracium

tracium.trace()

import google.generativeai as genai

genai.configure(api_key="your-api-key")

model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])

# Each message in the chat is traced
response = chat.send_message("Hi! Tell me about yourself.")
response = chat.send_message("What can you help me with?")
```

Multimodal (Vision)
```python
import tracium
from PIL import Image

tracium.trace()

import google.generativeai as genai

genai.configure(api_key="your-api-key")

model = genai.GenerativeModel("gemini-pro-vision")

# Load an image
image = Image.open("image.jpg")

# Multimodal calls are traced
response = model.generate_content([
    "What's in this image?",
    image,
])
```

Streaming
```python
import tracium

tracium.trace()

import google.generativeai as genai

genai.configure(api_key="your-api-key")

model = genai.GenerativeModel("gemini-pro")

# Streaming is supported
response = model.generate_content(
    "Write a long story about space exploration.",
    stream=True,
)

for chunk in response:
    print(chunk.text, end="")
```
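When a call streams, the full output only exists once every chunk has arrived, so a tracer has to accumulate chunk text in arrival order before it can record the response. A minimal sketch of that accumulation pattern, using a stand-in chunk type rather than the real SDK objects:

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass
class Chunk:
    """Stand-in for a streaming response chunk exposing a .text attribute."""
    text: str


def collect_stream(chunks: Iterable[Chunk]) -> str:
    """Concatenate chunk texts in arrival order to reconstruct the full output."""
    return "".join(chunk.text for chunk in chunks)


# Simulated stream of three chunks
stream = [Chunk("Space "), Chunk("exploration "), Chunk("began...")]
print(collect_stream(stream))  # Space exploration began...
```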