This is a collection of examples and quickstarts for using the experimental Gemini 2.0 Flash model.
To learn about what’s new in the 2.0 model release and the new Google GenAI SDKs, check out the Gemini 2.0 model page. To start experimenting with the model now, head to Google AI Studio for prompting or to the Multimodal Live API demo to try the new Live capabilities.
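Before diving into the notebooks, here is a minimal sketch of a unary call with the Google GenAI SDK (`pip install google-genai`). The model name `gemini-2.0-flash-exp` and the `GOOGLE_API_KEY` environment variable are assumptions based on the experimental release this collection targets; check the SDK docs for the current names.

```python
import os

# Experimental model name assumed from this release; may change.
MODEL = "gemini-2.0-flash-exp"

def generate(prompt: str) -> str:
    """Send a single unary generate_content request via the GenAI SDK."""
    from google import genai  # pip install google-genai

    # Assumes GOOGLE_API_KEY is set in your environment.
    client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])
    response = client.models.generate_content(model=MODEL, contents=prompt)
    return response.text

if __name__ == "__main__":
    print(generate("In one sentence, what is the Multimodal Live API?"))
```

The notebooks below cover the same client in much more depth, including streaming and tool use.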
Explore Gemini 2.0’s capabilities through the following notebooks using Google Colab.
- Getting Started - A comprehensive overview using the GenAI SDK
- Live API starter - Overview of the Multimodal Live API using the GenAI SDK
- Live API tool use - Overview of tool use in the Live API with the GenAI SDK
- Plotting and mapping - Demo combining search, code execution, and function calling with the Live API
- Search tool - Quick start using the Search tool with the unary and Live APIs in the GenAI SDK
- Spatial understanding - Comprehensive overview of 2D spatial understanding capabilities with the GenAI SDK
- Spatial understanding (3D) - Comprehensive overview of 3D spatial understanding capabilities with the GenAI SDK
- Video understanding - Comprehensive overview of Gemini 2.0's video understanding capabilities with the GenAI SDK
- Gemini 2.0 Flash Thinking - Introduction to the experimental Gemini 2.0 Flash Thinking model with the GenAI SDK
Or explore on your own local machine.
- Live API starter script - A locally runnable Python script using the GenAI SDK that streams audio and video (camera or screen) from your machine to the model and plays the model's audio responses back
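As a rough sketch of what that script does at its core, here is a single text-only turn over the Multimodal Live API. This assumes the GenAI SDK's async `client.aio.live.connect` interface and the early `session.send(..., end_of_turn=True)` call shape, both of which are experimental and may differ in your SDK version; the starter script above is the authoritative example.

```python
import asyncio
import os

# Experimental model name assumed from this release; may change.
MODEL = "gemini-2.0-flash-exp"

async def live_text_turn(prompt: str) -> str:
    """Run one text-in, text-out turn over a Live API session."""
    from google import genai  # pip install google-genai

    client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])
    # Restrict responses to text for this sketch; the starter script
    # uses audio in and out instead.
    config = {"response_modalities": ["TEXT"]}
    async with client.aio.live.connect(model=MODEL, config=config) as session:
        await session.send(input=prompt, end_of_turn=True)
        parts = []
        async for response in session.receive():
            if response.text:
                parts.append(response.text)
        return "".join(parts)

if __name__ == "__main__":
    print(asyncio.run(live_text_turn("Say hello in one short sentence.")))
```

The real script layers microphone/camera capture and audio playback on top of the same connect/send/receive loop.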
Also find websocket-specific examples in the websockets directory.