Connecting to an LLM API with Python (for beginners)
As we all know by now, ChatGPT.com is just a user interface for OpenAI's large language models (LLMs); in other words, the .com website serves as a control panel for a body-less model!
This model, or engine, has an API to which we can directly connect, without having to go through ChatGPT.com:
- we just need to have Python set up on our own computers
- get an API key from an LLM provider (such as OpenAI, which is paid, or Groq, which has a free tier)
- write a few lines of code to connect to the API
- feed a prompt to a Python function
Setup
Python
Let's check on our command line to see if we have Python installed:
```
python --version
```
If that spits out a version number like 3.x.x, then we can go ahead (otherwise, find Python installation instructions at python.org).
OpenAI library
Let's install the openai library; this does not necessarily mean we will use OpenAI's models, since many providers expose OpenAI-compatible APIs and this one library can talk to any of them:
```
pip install openai
```
API key
We can get a free LLM API key from a site called groq.com (not to be confused with Grok, the similarly named LLM from xAI!)
To register this API with our local machine, we can enter this in our command line:
```
export GROQAPIKEY="your_api_key_from_groq"
```
Code
Connection to LLM
Now, we will connect to the LLM using that key by creating a file called app.py:
```python
# app.py
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("GROQAPIKEY"),
    base_url="https://api.groq.com/openai/v1"
)
```
Note that:
- `os` allows us to read the `GROQAPIKEY` variable we previously set on our local machine
- `base_url` is required here so the client talks to the API on groq.com rather than OpenAI's default endpoint
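One thing to watch out for: os.environ.get returns None when the variable is not set (for example, in a new terminal where we forgot to re-run the export). As an optional sketch, we can fail early with a friendly message before creating the client:

```python
import os

api_key = os.environ.get("GROQAPIKEY")
if api_key is None:
    # export only lasts for the terminal session it was run in,
    # so a missing key is a common beginner stumble.
    raise SystemExit('GROQAPIKEY is not set - run: export GROQAPIKEY="your_api_key_from_groq"')
```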
Prompt function
In the same file, we can add a function that actually prompts the LLM:
```python
def get_completion(prompt):
    response = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{
            "role": "user",
            "content": prompt
        }],
    )
    return response.choices[0].message.content
```
This function takes in a prompt as a parameter, calls the LLM with the prompt, and then returns the content of the LLM's response; note that:
- we can use any model by changing `model`'s value, i.e. from `llama-3.3-70b-versatile` to any of Groq's other models
- this makes it easy to switch to any model in the future, especially since models keep upgrading (see the optional sketch below)
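As an optional refactor (not needed for the rest of this tutorial), we can turn the model name into a parameter with a default, so switching models becomes a one-line change at the call site. This sketch assumes the client defined above is in scope, and the alternative model name is only illustrative - check Groq's current model list:

```python
def get_completion(prompt, model="llama-3.3-70b-versatile"):
    # The model is now a parameter, so callers can swap it per call.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(get_completion("Say hello"))  # uses the default model
print(get_completion("Say hello", model="llama-3.1-8b-instant"))  # illustrative override
```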
So why all this work? Well, we can even add instructions to the prompt by changing `content`'s value, e.g.:
```python
def get_completion(prompt):
    response = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{
            "role": "user",
            "content": "Don't give me the answer directly but " + prompt
        }],
    )
    return response.choices[0].message.content
```
We can then add custom "modes" via if/elif/else statements or similar - imagine the possibilities!
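For example, here is one possible way to wire up such modes (the mode names and prompt prefixes are made up for illustration, and the client defined above is assumed to be in scope):

```python
def get_completion(prompt, mode="normal"):
    # Prepend different instructions depending on the chosen mode.
    if mode == "socratic":
        content = "Don't give me the answer directly but " + prompt
    elif mode == "short":
        content = "Answer in one sentence: " + prompt
    else:
        content = prompt
    response = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content

print(get_completion("What is recursion?", mode="socratic"))
```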
Calling the function
Finally, we have to call the function, so we can insert this below the get_completion function:
```python
print(get_completion("What is the best hotel to book after disembarking from a cruise that ends in Sydney, Australia?"))
```
Of course, feel free to change the prompt!
Runtime
Now, let's go back to the command line and run this program!
```
% python3 app.py
```
The program will call the LLM and print the response, which would look something like:
Sydney is a beautiful city with a wide range of accommodation options. The best hotel for you after disembarking from a cruise will depend on your personal preferences, budget, and what you want to do in Sydney. Here are some top-rated hotels in different categories to consider:
**Luxury:**
1. **Shangri-La Hotel Sydney**: This 5-star hotel is located in the heart of Sydney, with stunning views of the harbor and the Opera House. It's a short walk from the cruise terminal and offers luxurious rooms, a fitness center, and a rooftop pool.
2. **Four Seasons Hotel Sydney**: This 5-star hotel is situated in the historic Rocks neighborhood, with easy access to the city's main attractions. It features elegant rooms, a world-class spa, and a rooftop pool with breathtaking views.
Summary
Combining all the snippets, app.py now looks like this:
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("GROQAPIKEY"),
    base_url="https://api.groq.com/openai/v1"
)

def get_completion(prompt):
    response = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{
            "role": "user",
            "content": prompt
        }],
    )
    return response.choices[0].message.content

print(get_completion("What is the best hotel to book after disembarking from a cruise that ends in Sydney, Australia?"))
```
We can then run this file on the command line with:
```
% python3 app.py
```
This is only the beginning of what we can do with LLMs! We could:
- use Python's `input()` function so that the program can ask for a prompt rather than having us hard-code it in (see the sketch after this list)
- use that starter code to build even more complex applications with more complex prompts
- build a web-based user interface and take it to a website
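For instance, that first idea could look like this minimal sketch, reusing the get_completion function from above:

```python
# Keep asking for prompts until the user types "quit".
while True:
    prompt = input("Ask the LLM (or type 'quit' to exit): ")
    if prompt.strip().lower() == "quit":
        break
    print(get_completion(prompt))
```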
So, think of ChatGPT.com like driving in automatic and coding your own wrapper like driving in manual (it will take work, but you will eventually feel more in control!)