Google Gemini Integration
Automatic tracing for Google Generative AI (Gemini) API calls.
Quick Start
```python
import tracium
import google.generativeai as genai

# Enable auto-instrumentation
tracium.trace()

# Configure Google AI
genai.configure(api_key="your-api-key")

# Use Gemini normally - all calls are traced
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Explain AI in simple terms.")
```

What Gets Captured
- Input - Prompts and content sent to Gemini
- Model - gemini-pro, gemini-pro-vision, etc.
- Output - Generated response
- Token usage - Prompt and completion tokens
- Safety ratings - Content safety scores
- Latency - API call duration
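As a rough sketch of where these fields live, the attribute names below mirror the `google.generativeai` response shape (`usage_metadata`, `candidates[].safety_ratings`); a `SimpleNamespace` stand-in is used here in place of a live API call, so the values are illustrative only:

```python
from types import SimpleNamespace

# Stand-in for a google.generativeai response (no live API call);
# attribute names follow the real SDK's response object.
response = SimpleNamespace(
    text="AI is software that learns patterns from data.",
    usage_metadata=SimpleNamespace(
        prompt_token_count=12,       # tokens in the prompt
        candidates_token_count=9,    # tokens in the completion
        total_token_count=21,
    ),
    candidates=[
        SimpleNamespace(
            safety_ratings=[
                SimpleNamespace(
                    category="HARM_CATEGORY_HARASSMENT",
                    probability="NEGLIGIBLE",
                )
            ]
        )
    ],
)

# Token usage - prompt and completion tokens
usage = response.usage_metadata
print(usage.prompt_token_count, usage.candidates_token_count)

# Safety ratings - one list per candidate
for rating in response.candidates[0].safety_ratings:
    print(rating.category, rating.probability)
```

On a real traced call, these are the same attributes the instrumentation reads from the SDK's response.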
Chat Sessions
```python
import tracium
import google.generativeai as genai

tracium.trace()
genai.configure(api_key="your-api-key")

model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])

# Each message in the chat is traced
response = chat.send_message("Hi! Tell me about yourself.")
response = chat.send_message("What can you help me with?")
```

Multimodal (Vision)
```python
import tracium
import google.generativeai as genai
from PIL import Image

tracium.trace()
genai.configure(api_key="your-api-key")

model = genai.GenerativeModel("gemini-pro-vision")

# Load an image
image = Image.open("image.jpg")

# Multimodal calls are traced
response = model.generate_content([
    "What's in this image?",
    image
])
```

Streaming
```python
import tracium
import google.generativeai as genai

tracium.trace()
genai.configure(api_key="your-api-key")

model = genai.GenerativeModel("gemini-pro")

# Streaming is supported
response = model.generate_content(
    "Write a long story about space exploration.",
    stream=True
)

for chunk in response:
    print(chunk.text, end="")
```