Using an open-source LLM via Groq with Google Colab

step-by-step guide on how to use a free LLM with a cloud-based interactive notebook
2026-01-02 17:00
// updated 2026-01-03 09:28

We can use a free, open-source large language model (LLM) served by Groq on a cloud-based interactive notebook called Google Colab (similar to Jupyter):

  • instead of paying OpenAI for tokens, we can use Groq for free (within rate limits)
  • instead of setting up Jupyter notebooks on our own computers, we can use Google Colab and share the notebook with others on the internet

Setup

This article assumes that we have accounts with each of the following:

Google Drive

From any folder within Google Drive:

  • click "New" in the top-left corner of the sidebar
  • select "More"
  • click "Google Colaboratory"
  • a new notebook should open

Then we can start entering code where it says "Start coding or generate with AI..."! Before we do that, however, let's make sure we have Groq set up!

Groq

Go to the API Keys page of the Groq Console:

  • click on the "Create API Key" button to open a modal
    • for Display Name, let's give it a name we can remember such as GROQ
    • for Expiration, choose the default of "No expiration"
    • click Submit to see the API key
  • copy the API key and, as with any API key, keep a backup of it somewhere safe
  • remember to note the following:
    • the Display Name we just created
    • the API key

Go back to the Google Colab notebook:

  • on the sidebar, select the "Secrets" tab (the icon with a key):
    • click "Add new secret" and fill it in with what we created in Groq:
      • under "Name", use the Display Name (e.g. GROQ)
      • under "Value", use the API key
    • grant the notebook access by turning the switch on (we can verify this with the sketch below)
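
To confirm the notebook can actually read the secret, we can try fetching it right away. A quick sketch (userdata.get should raise an error if the name is wrong or access was not granted):

from google.colab import userdata

# sketch: confirm the notebook can read the secret we just added
assert userdata.get('GROQ'), 'GROQ secret is empty or missing'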

LangChain

As a side note, we may need to install LangChain's OpenAI integration in our notebook; run this one line in a cell:

!pip install langchain_openai
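
To double-check that the install worked (and see which version we got), we can run:

!pip show langchain_openai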

Importing specific packages

We will import the LangChain classes we need from these two packages (the package and class names are relatively long, so importing them up front keeps the later code tidy):

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

Importing the API key

Let's now go back to the Groq secret we created a couple of sub-sections ago ... in a new cell, let's insert the following:

from google.colab import userdata

# the three essential variables
GROQ_API_KEY = userdata.get('GROQ')
MODEL_NAME = "openai/gpt-oss-120b"
BASE_URL = "https://api.groq.com/openai/v1"

(If using a Display Name other than GROQ, replace it in the snippet above!)

Note also that the model name and base URL might eventually change as the models change but, as of January 2026, they should work! (If they stop working, the sketch below shows how to list what is currently available.)
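
We can ask the API itself which models are currently served. A minimal sketch, assuming Groq's OpenAI-compatible /models endpoint and the requests library (preinstalled on Colab):

import requests

# sketch: list the models currently served by Groq's OpenAI-compatible API
resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {GROQ_API_KEY}"},
)
resp.raise_for_status()  # fail loudly on a bad key or URL
for model in resp.json()["data"]:
    print(model["id"])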

Accessing the LLM

Let's connect to the LLM:

# use ChatOpenAI to access any model 
llm = ChatOpenAI(api_key=GROQ_API_KEY,
                 base_url=BASE_URL,
                 model=MODEL_NAME
)

Although we have just used a class called ChatOpenAI, we pointed its base_url at Groq's OpenAI-compatible endpoint and set the model argument to an open-source model, gpt-oss-120b, so this connects to the open-source LLM on Groq instead!

(Note that if we leave out the base_url argument, ChatOpenAI defaults to OpenAI's own API, which would incur costs if we had an OpenAI API key set up! But we did not, so let's move on...)

Also, if a dialog pops up that says "Notebook does not have secret access", simply click "Grant access" and re-run the cell (press the "play" button next to the cell)!
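
As an aside, ChatOpenAI also accepts the usual OpenAI-style generation parameters, so we can tune the output. A quick sketch (the values below are illustrative, not required):

# optional: tune generation parameters
llm = ChatOpenAI(api_key=GROQ_API_KEY,
                 base_url=BASE_URL,
                 model=MODEL_NAME,
                 temperature=0.2,  # lower = more focused, less random
                 max_tokens=512    # cap the length of each response
)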

Prompting the LLM

Let's finally "talk" to the LLM:

# invoke the LLM with a message (i.e. prompt!)
prompt = 'What are some topics I should know about Langchain?'
response = llm.invoke([HumanMessage(content=prompt)])
print(response.content)

The prompt variable is fairly self-explanatory:

  • it acts like the prompt (a query or a question) a user would type on ChatGPT.com
  • the variable then feeds into the next line as the HumanMessage for the LLM (one of several message types; see the sketch below)

The response variable then holds the LLM's reply:

  • its content is what gets printed out
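
Speaking of message types, a SystemMessage can steer the model's overall behavior before the user's prompt comes in. A minimal sketch (the system prompt wording is just an example):

from langchain_core.messages import SystemMessage

# sketch: prepend a system message to steer the model's behavior
messages = [
    SystemMessage(content='You are a concise teaching assistant.'),
    HumanMessage(content=prompt),
]
response = llm.invoke(messages)
print(response.content)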

Summary

After setting up our "Google Colab with Groq" environment, we started coding by installing the necessary langchain_openai library:

!pip install langchain_openai

Then, we imported the necessary classes:

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from google.colab import userdata

Finally, we connected to and prompted the open-source LLM through Groq's API:

# the three essential variables
GROQ_API_KEY = userdata.get('GROQ')
MODEL_NAME = "openai/gpt-oss-120b"
BASE_URL = "https://api.groq.com/openai/v1"

# use ChatOpenAI to access any model 
llm = ChatOpenAI(api_key=GROQ_API_KEY,
                 base_url=BASE_URL,
                 model=MODEL_NAME
)

# invoke the LLM with a message (i.e. prompt!)
prompt = 'What are some topics I should know about Langchain?'
response = llm.invoke([HumanMessage(content=prompt)])
print(response.content)

So, altogether now:

!pip install langchain_openai

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

from google.colab import userdata

# the three essential variables
GROQ_API_KEY = userdata.get('GROQ')
MODEL_NAME = "openai/gpt-oss-120b"
BASE_URL = "https://api.groq.com/openai/v1"

# use ChatOpenAI to access any model 
llm = ChatOpenAI(api_key=GROQ_API_KEY,
                 base_url=BASE_URL,
                 model=MODEL_NAME
)

# invoke the LLM with a message (i.e. prompt!)
prompt = 'What are some topics I should know about Langchain?'
response = llm.invoke([HumanMessage(content=prompt)])
print(response.content)

Change the prompt variable to anything and we've got ourselves our own "ChatGPT" within a Google Colab notebook!
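
To make it feel even more like a chat, we can wrap the invocation in a simple loop. A minimal sketch (each turn is independent, so there is no conversation memory; type quit to stop):

# sketch: a minimal chat loop (each turn is independent; no memory)
while True:
    user_input = input('You: ')
    if user_input.strip().lower() == 'quit':
        break
    reply = llm.invoke([HumanMessage(content=user_input)])
    print('LLM:', reply.content)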
