Getting Started with Gemini API in Google Colab

API Call to an AI Model

To use the Gemini models via the google.generativeai library in Colab, we need an API key from Google AI Studio. Visit Google AI Studio – API Keys, sign in with your Google account, and generate a new API key if you haven’t already. Once generated, this key can be securely stored using Colab’s secrets manager, allowing us to authenticate and interact with Gemini models programmatically.




Gemini AI with Python on Google Colab using API | Complete Beginner's Guide #colab #aitools


Listing Available Models from Google Generative AI

This script uses the google.generativeai library to list all available models supported by our API key. It retrieves the key from Colab’s userdata securely and prints model names and their supported generation methods.

import google.generativeai as genai
from google.colab import userdata

try:
  GOOGLE_API_KEY = userdata.get('GOOGLE_API_KEY')
  genai.configure(api_key=GOOGLE_API_KEY)

  # List available models
  for m in genai.list_models():
    print(f"Name: {m.name}")
    print(f"Supported generation methods: {m.supported_generation_methods}")
    print("-" * 20)

except Exception as e:
  print(f"An error occurred: {e}")
  print("Check that your API key is stored in Colab secrets under the name 'GOOGLE_API_KEY'.")
Part of the output is shown below:
Name: models/embedding-gecko-001
Supported generation methods: ['embedText', 'countTextTokens']
--------------------
Name: models/gemini-1.5-pro-latest
Supported generation methods: ['generateContent', 'countTokens']
--------------------
Name: models/gemini-1.5-pro-002
Supported generation methods: ['generateContent', 'countTokens', 'createCachedContent']
--------------------
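The supported_generation_methods field is what tells you whether a model can be used with generate_content(). A minimal sketch of filtering for such models, shown here against hardcoded sample data so it runs without an API key (the real script would iterate over genai.list_models() instead):

```python
# Sample (name, methods) pairs mirroring the genai.list_models() output above;
# hardcoded so the filtering logic can be shown without an API key.
models = [
    ("models/embedding-gecko-001", ["embedText", "countTextTokens"]),
    ("models/gemini-1.5-pro-latest", ["generateContent", "countTokens"]),
    ("models/gemini-1.5-pro-002", ["generateContent", "countTokens", "createCachedContent"]),
]

# Keep only the models usable with generate_content()
text_models = [name for name, methods in models if "generateContent" in methods]
print(text_models)
```

In the real script you would check m.supported_generation_methods the same way before choosing a model name.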

Text Generation with Gemini API in Google Colab

This script uses the google.generativeai library to initialize the gemini-1.5-flash-latest model and generate a short poem about the stars. The API key is accessed via userdata.get(), and the generate_content() method handles the text generation task.

import google.generativeai as genai
from google.colab import userdata

try:
  GOOGLE_API_KEY = userdata.get('GOOGLE_API_KEY')
  genai.configure(api_key=GOOGLE_API_KEY)

  # Initialize the model that supports generateContent
  # You can choose another model from the list if you prefer, e.g., 'gemini-1.5-pro-latest'
  model = genai.GenerativeModel('gemini-1.5-flash-latest')

  # Generate content
  response = model.generate_content("Write a short poem about the stars.")
  print(response.text)

except Exception as e:
  print(f"An error occurred: {e}")
  print("Check your API key stored under the name 'GOOGLE_API_KEY'.")
Output:
Distant suns, a scattered gleam,
Across the velvet, midnight stream.
A million points of silver light,
Igniting darkness, pure and bright.
Silent stories, ages old,
In cosmic dust, their tales unfold.

Generating Content Dynamically Based on User Prompt

This script uses the google.generativeai library and allows the user to enter a custom prompt for generating text with the gemini-1.5-flash-latest model. The API key is retrieved via userdata.get(), and the response is printed dynamically based on user input.

import google.generativeai as genai
from google.colab import userdata

try:
  GOOGLE_API_KEY = userdata.get('GOOGLE_API_KEY')
  genai.configure(api_key=GOOGLE_API_KEY)

  # Initialize the model that supports generateContent
  # You can choose another model from the list if you prefer,
  # e.g., 'gemini-1.5-pro-latest'
  model = genai.GenerativeModel('gemini-1.5-flash-latest')

  # Get prompt from user input
  user_prompt = input("Enter your prompt for the model: ")

  # Generate content based on user input
  response = model.generate_content(user_prompt)
  print(response.text)

except Exception as e:
  print(f"An error occurred: {e}")
  print("Please check your API key.")
Input prompt:
Enter your prompt for the model: Tell me why flight path are not straight line.
Output (partial):
Flight paths aren't straight lines for a variety of reasons, all boiling down to efficiency, safety, and regulatory compliance:

* **Jet Streams:**  These high-altitude air currents can significantly impact flight time and fuel consumption.  Pilots often plan routes to take advantage of tailwinds (flying with the jet stream) and avoid headwinds (flying against it). This results in a curved path, even if the ground distance appears longer.

💻 Local Python Script for VS Code or any other platform.

Keep the API key in a local .env file in the same directory and read it from the script.

🔐 .env File (Same Directory)

  • Do not add quotes around the key value.
  • If you're using Git, add .env to your .gitignore to avoid exposing your API key.
GOOGLE_API_KEY=your_actual_api_key_here 
Here is the code to read the API key from the local .env file and configure the model with it.
import os
import google.generativeai as genai
from dotenv import load_dotenv

# Load API key from .env file
load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

try:
    # Configure Gemini API
    genai.configure(api_key=GOOGLE_API_KEY)

    # Load and initialize the model
    model = genai.GenerativeModel('gemini-1.5-flash-latest')

    # Get prompt from user input
    user_prompt = input("Enter your prompt for the model: ")

    # Generate content based on user input
    response = model.generate_content(user_prompt)
    print(response.text)

except Exception as e:
    print(f"An error occurred: {e}")
    print("Please check your API key.")
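What load_dotenv() does under the hood is simple: read KEY=VALUE lines from the .env file and place them in the process environment. A rough stdlib-only sketch of that behaviour (python-dotenv handles more edge cases such as quoting, comments, and export prefixes; the DEMO_KEY name and demo.env file below are just for illustration):

```python
import os

def load_env_file(path: str) -> None:
    """Minimal stand-in for dotenv.load_dotenv(): parse KEY=VALUE lines."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Example: write a throwaway env file and load it
with open("demo.env", "w") as f:
    f.write("DEMO_KEY=abc123\n")
load_env_file("demo.env")
print(os.getenv("DEMO_KEY"))
```

setdefault() mirrors load_dotenv()'s default of not overriding variables that are already set in the environment.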

Chat Mode


Gemini AI Chat Mode with Python in VS Code | Interactive Prompt Handling Tutorial

In this script, we use chat mode with the Gemini AI model, which allows for multi-turn conversations where the context is preserved across messages. Unlike single-prompt interactions, chat mode enables the model to understand follow-up questions based on earlier inputs—so you don’t need to repeat the full context each time. This makes it ideal for building conversational tools, assistants, or interactive AI applications.
import os
import google.generativeai as genai
from dotenv import load_dotenv

# Load API key from .env file
load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

try:
    # Configure Gemini API
    genai.configure(api_key=GOOGLE_API_KEY)

    # Initialize chat model
    model = genai.GenerativeModel('gemini-1.5-flash-latest')
    chat = model.start_chat()

    print("🤖 Gemini AI Chat (type 'exit' to quit)\n")

    while True:
        user_input = input("You: ")

        if user_input.lower() in ['exit', 'quit']:
            print("👋 Exiting chat. Goodbye!")
            break

        response = chat.send_message(user_input)
        print(f"Gemini: {response.text}\n")

except Exception as e:
    print(f"An error occurred: {e}")
    print("Please check your API key.")
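Under the hood, a chat session keeps the running history and resends it with every message; the model itself is stateless. A toy sketch of that bookkeeping, independent of the Gemini client (the real ChatSession manages this for you and exposes it as chat.history):

```python
class ToyChatHistory:
    """Illustrates the history bookkeeping a chat session performs internally."""

    def __init__(self):
        self.turns = []  # list of (role, text) pairs

    def build_prompt(self, user_input: str) -> str:
        # Every request replays the whole conversation plus the new message
        lines = [f"{role}: {text}" for role, text in self.turns]
        lines.append(f"user: {user_input}")
        return "\n".join(lines)

    def record(self, user_input: str, model_reply: str) -> None:
        self.turns.append(("user", user_input))
        self.turns.append(("model", model_reply))

history = ToyChatHistory()
history.record("My name is Asha.", "Nice to meet you, Asha!")
print(history.build_prompt("What is my name?"))
```

Because the earlier turn is replayed, a follow-up like "What is my name?" can be answered without repeating the context yourself.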

Working without an API key

The google.colab.ai library is specifically designed for use within Google Colab environments. It provides a simple interface to access state-of-the-art language models directly within Colab. Therefore, it is not intended for use outside of the Colab platform.

from google.colab import ai

# Use Colab AI to ask a factual question and get a response
response = ai.generate_text("What is the capital of India?")

# Display the response
print(response)
List of available Models
from google.colab import ai

# List all available models accessible through Google Colab's AI interface
ai.list_models()
Using a specific Model
# @title Choose a different model
from google.colab import ai

# Generate response using a specific Gemini model
response = ai.generate_text(
    "What is the capital of England",
    model_name='google/gemini-2.5-flash-lite'
)

print(response)
Passing stream=True to ai.generate_text() streams the generated text in real time as it is produced, instead of waiting for the full response. A simple streaming example:
from google.colab import ai

# Request AI to generate a short story and stream the output as it's being created
stream = ai.generate_text(
    "Tell me a short story.",
    stream=True
)

for text in stream:
    print(text, end='')

Using Excel table as input to AI

We can pass the data from an Excel file (student.xlsx) to the AI model as a string. Here we use the Pandas to_string() method to convert the DataFrame to a string and send it as part of the prompt.
# get the sample student data from the url
!wget https://www.plus2net.com/python/download/student.xlsx
Creating the Pandas DataFrame.
import pandas as pd  
df=pd.read_excel('student.xlsx')  
df.head()
Generating the string using Pandas to_string()
df_string = df.to_string()  
print(df_string)
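For reference, the same to_string() step is shown below against a small DataFrame built inline, so it runs without downloading the file. The column names Name/Class/Mark and the rows are hypothetical stand-ins for the real student.xlsx data:

```python
import pandas as pd

# Hypothetical stand-in for student.xlsx (column names and rows are assumed)
df = pd.DataFrame({
    "Name": ["Ravi", "Meena", "John"],
    "Class": ["Four", "Three", "Five"],
    "Mark": [75, 88, 60],
})

df_string = df.to_string()   # plain-text table, safe to embed in a prompt
question = (
    "From the following student data, which student has the highest mark?\n\n"
    + df_string
)
print(question)
```

The prompt now carries the whole table as text, which is why the model can answer questions about it.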
Passing the string to the AI model and getting the output.
from google.colab import ai

question = "From the following student data, which student has the highest mark? Please only provide the student's name.\n\n" + df_string

ai_response = ai.generate_text(question, model_name='google/gemini-2.5-flash')
print(ai_response)
Ask for full details of the student who got the highest mark.
question = "Give me the name, class , mark for the  student who has the highest mark.\n\n " + df_string
ai_response = ai.generate_text(question, model_name='google/gemini-2.5-flash')
print(ai_response)
The model does not retain information between calls: if we omit df_string from the prompt, it cannot answer using the data supplied in earlier requests.
question = "Give me the name, class , mark for the  student who has the highest mark.\n\n"
#question="Give me the name, class , mark for the  student who has the highest mark.\n\n" +df_string

ai_response = ai.generate_text(question, model_name='google/gemini-2.5-flash')
print(ai_response)

From Python List to Dictionary by using google.colab.ai

We can't pass a Python list directly to the AI tool; we first convert the list to a string and include it in the question.
from google.colab import ai

countries = ['India', 'USA', 'Japan']

# Convert list to a comma-separated string
my_str = ",".join(map(str, countries))

# Create a prompt asking AI to generate a dictionary with country names as keys and capitals as values
question = ("Create one Python dictionary using the country names as keys "
            "and the capitals of the countries as values.\n\n" + my_str)

ai_response = ai.generate_text(
    question,
    model_name='google/gemini-2.5-flash'
)

print(ai_response)
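The model returns the dictionary as text, so to use it as a real Python dict you still have to parse it. A sketch with the standard-library ast.literal_eval, using a hypothetical response string (actual replies vary in formatting and may be wrapped in markdown code fences that you would strip first):

```python
import ast

# Hypothetical reply text from the model (real replies vary in formatting)
ai_text = "{'India': 'New Delhi', 'USA': 'Washington, D.C.', 'Japan': 'Tokyo'}"

# Safe: literal_eval parses Python literals only, it never executes code
capitals = ast.literal_eval(ai_text)
print(capitals["India"])
```

ast.literal_eval is preferred over eval() here because the input comes from an external source.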

Limitations of google.colab.ai

  • No inherent chart generation: The google.colab.ai module itself does not provide built-in functionalities for creating charts or data visualizations. You would typically use libraries like matplotlib or seaborn with pandas for such tasks.
  • Stateless interactions: It does not maintain conversational history or remember previous information unless explicitly provided in each new prompt. Each ai.generate_text call is independent. As you observed, if you don't include the df_string again, it cannot answer questions based on previously provided data.
  • Limited to text input/output: The ai.generate_text function is designed for text-based interactions. It cannot directly process or generate images. While some advanced models support multimodal inputs, the current generate_text function focuses on textual data.
  • Context window limitations: LLMs have a finite context window, meaning there's a limit to how much information (text, conversation history) they can process in a single turn.
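Since each ai.generate_text call is independent, a common workaround for the statelessness noted above is to carry the context yourself by prepending earlier turns to every new prompt. A sketch with plain strings follows; the ai.generate_text call itself is left as a comment because google.colab.ai only runs inside Colab:

```python
# Manually accumulated context (google.colab.ai itself is stateless)
context = []

def ask_with_context(new_question: str) -> str:
    """Build a prompt that replays earlier turns before the new question."""
    prompt = "\n".join(context + [f"Q: {new_question}"])
    # Inside Colab you would now call:
    #   answer = ai.generate_text(prompt, model_name='google/gemini-2.5-flash')
    # and then record the turn:
    #   context.append(f"Q: {new_question}"); context.append(f"A: {answer}")
    return prompt

context.append("Q: Which student has the highest mark?")
context.append("A: Meena")
print(ask_with_context("What class is she in?"))
```

Keep an eye on the context window limitation above: replayed history counts against the model's input limit, so long conversations eventually need truncation or summarisation.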

Frequently Asked Questions

Q1: What is the purpose of the google-generativeai library?

This library allows developers to interact with Google's Gemini AI models using Python to generate content from text or image prompts.

Q2: How do I install the required packages?

You can install the package using pip install google-generativeai. For handling environment variables, use pip install python-dotenv.

Q3: How do I keep my API key secure?

Store your API key in a .env file and use the python-dotenv package to load it securely in your script.

Q4: Can I send follow-up messages to the model?

Yes, by using the start_chat() method, you can maintain context in a conversation and send follow-up messages using send_message().

Q5: Which Gemini model should I use?

You can use 'gemini-1.5-pro-latest' for full capabilities or 'gemini-1.5-flash-latest' for faster, lightweight responses depending on your use case.

Q6: Can this code run outside of Google Colab?

Yes, the code runs on your local machine by replacing Colab-specific parts and installing the necessary packages via pip.

Conclusion

Integrating Gemini AI with Python opens up powerful possibilities for generating and interacting with natural language content directly from your code. Whether you're creating simple prompts or building full chat-based applications, Gemini provides a flexible API that works seamlessly in environments like Google Colab or your local system. By securely managing your API key and choosing the right model variant, you can start building intelligent, context-aware applications with minimal setup. As you explore further, you’ll find this integration to be a valuable addition to your Python development toolkit.
Next: Working with Gemini AI Image »

Inside AI Tools: The API Workflow (Restaurant Analogy) #aitools


Subhendu Mohapatra — author at plus2net

Passionate about coding and teaching, I publish practical tutorials on PHP, Python, JavaScript, SQL, and web development. My goal is to make learning simple, engaging, and project-oriented with real examples and source code.
©2000-2025 plus2net.com. All rights reserved worldwide.