
LLM Function Calling: A Beginner’s Guide to AI Agents

The World’s Smartest, Laziest Intern

Imagine you hire a new intern. This intern is brilliant. They graduated top of their class from every Ivy League school simultaneously, they speak 17 languages, and they can write a marketing email so persuasive it could sell a ketchup popsicle to a woman in white gloves.

There’s just one problem. This intern is trapped in a glass box. They have no phone, no computer, no hands. They can *tell* you exactly what to do, in excruciating detail, but they can’t *do* anything.

“To find the weather in London,” they’ll say with infuriating calm, “you should open a web browser, navigate to a reliable weather service, input ‘London, UK’ into the search bar, and read the current temperature.”

Thanks, kid. Super helpful.

This, my friends, is a standard Large Language Model (LLM) out of the box. It’s a brilliant conversationalist locked away from the real world. Today, we’re handing that intern a phone and a list of approved numbers. We’re giving our AI the ability to interact with the world. This is called Function Calling (or, in the new lingo, “Tool Use”), and it’s the single biggest leap from a simple chatbot to a genuine AI agent that gets work done.

Why This Matters

This isn’t a minor feature upgrade. This is the difference between a calculator and a spreadsheet. It’s the moment your AI stops being a fancy search engine and starts being an active employee.

What this replaces:

  • The Human Glue: That person in your office whose entire job is copying data from an email, looking up a customer in the CRM, and pasting an update into Slack. That’s a human API. We’re about to automate them.
  • Clunky Interfaces: Instead of making a user click through five screens to find their order status, you can just let them ask, “Where’s my stuff?”
  • Manual Data Lookups: Any task that starts with “Let me just check the system for you…” is a prime candidate for demolition.

By the end of this lesson, you will build an AI that doesn’t just talk about tasks—it executes them. It connects to your world, your data, your tools. This is where automation gets real.

What This Tool / Workflow Actually Is

Let’s be crystal clear. The LLM does NOT run your code. This is the most common and dangerous misconception. The AI doesn’t magically get access to your computer.

Here’s the real workflow, which is more like a carefully supervised conversation:

  1. You (The Developer): You write a normal piece of code, like a Python function that gets a customer’s order status from your database. You also write a simple description of this function for the AI.
  2. The User: Asks the AI, “What’s the status of order #123?”
  3. The LLM: It reads the user’s request and looks at the list of function descriptions you provided. It says, “Aha! This sounds like that `get_order_status` function. It needs an `order_id`, and it looks like the user provided one: ‘123’.”
  4. The LLM (Crucial Part): It doesn’t run the function. Instead, it sends back a perfectly formatted piece of JSON, saying: “Hey, developer! I think you should run the `get_order_status` function with the argument `order_id: "123"`. Please do that for me.”
  5. You (Your Code): Your application receives this JSON, runs your *actual* Python function, gets the result (e.g., “Shipped”), and sends that result back to the LLM.
  6. The LLM: It receives the result and translates it into a friendly, human-readable sentence for the user: “Great news! Your order #123 has been shipped.”

It’s a structured, secure, two-step dance. The LLM is the brain that decides *what* to do and *with what information*. Your code is the hands that *actually do it*.
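In wire terms, step 4 is just structured data. Here’s a minimal sketch of the round trip using plain Python dicts to stand in for the API objects (the field names mirror the OpenAI chat format, but this snippet makes no network calls, and `get_order_status` is a stand-in):

```python
import json

# Step 4: what the model sends back -- a REQUEST to run a function, not a result
tool_call = {
    "name": "get_order_status",
    "arguments": json.dumps({"order_id": "123"}),
}

# Step 5: YOUR code parses the arguments and runs the real function
def get_order_status(order_id):
    return {"order_id": order_id, "status": "Shipped"}  # stand-in for a DB lookup

args = json.loads(tool_call["arguments"])
result = get_order_status(**args)

# Still step 5: the result goes back to the model as a "tool" message
tool_message = {"role": "tool", "content": json.dumps(result)}
print(tool_message["content"])  # {"order_id": "123", "status": "Shipped"}
```

Notice that the model never touches `get_order_status` itself; it only names it and supplies arguments as JSON text your code must parse.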

Prerequisites

Don’t be nervous. If you can order a pizza online, you can do this. Here’s what you need:

  • An OpenAI API Key: This is our AI brain-in-a-box. If you don’t have one, go to `platform.openai.com` and create one. You’ll need to add a few dollars of credit (a single dollar will last you for weeks of this kind of work).
  • Python installed on your machine: We’re using Python because it’s the lingua franca of AI automation. If you don’t have it, just search for “Install Python on [Your Operating System]”.
  • The ability to copy, paste, and run a single command: Seriously, that’s it. I’ll give you everything you need. Open up your computer’s terminal or command prompt and get ready.

That’s it. No PhD in computer science required.

Step-by-Step Tutorial

Let’s build this thing. We’ll start with a classic example: a weather bot.

Step 1: Set Up Your Project

First, let’s install the OpenAI library. Open your terminal and run this command:

pip install openai

Now, create a new file named `weather_bot.py`. This is where our magic will happen. Inside that file, put this at the top to import the library and set your API key. (Important: Replace `"YOUR_API_KEY"` with your actual key!)

import openai
import json

client = openai.OpenAI(
    api_key="YOUR_API_KEY"
)

Step 2: Define Your Python Function

This is the tool our AI will use. It’s just a regular Python function. We’ll make a fake one that always returns the same weather for simplicity.

# This is our actual tool function
def get_current_weather(location):
    """Gets the current weather for a specific location."""
    # In a real app, you'd call a real weather API here
    if "boston" in location.lower():
        return json.dumps({"location": "Boston", "temperature": "72", "forecast": "Sunny"})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "65", "forecast": "Cloudy"})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})
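Before wiring this up to the model, it’s worth sanity-checking the tool on its own. It’s just an ordinary Python function, so you can call it directly (the snippet below restates the function so it runs standalone):

```python
import json

def get_current_weather(location):
    """Gets the current weather for a specific location."""
    # In a real app, you'd call a real weather API here
    if "boston" in location.lower():
        return json.dumps({"location": "Boston", "temperature": "72", "forecast": "Sunny"})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "65", "forecast": "Cloudy"})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})

# The function returns a JSON *string*, which is exactly the shape the
# "tool" message's content field will expect later in the tutorial.
print(json.loads(get_current_weather("Boston, MA"))["forecast"])  # Sunny
```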

Step 3: Describe the Tool for the AI

This is the most important step. We need to create a JSON description of our function so the LLM knows it exists, what it does, and what information it needs. Add this to your file:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                },
                "required": ["location"],
            },
        }
    }
]

Look closely. We told the AI the function’s `name`, its `description` (this is critical for the AI to choose it correctly), and the `parameters` it needs. We’re telling it, “There’s a tool called `get_current_weather` that needs one piece of information, a string called `location`.”
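The `parameters` block follows JSON Schema conventions, so you can get more expressive as your tools grow. As a sketch, here’s how you might add a hypothetical optional `unit` parameter constrained to two values with an `enum` (this parameter is not part of the tutorial’s function; it’s purely illustrative):

```python
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                # enum restricts the model to exactly these two values
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit to report",
                },
            },
            # "unit" is deliberately left out of "required",
            # so the model is free to omit it
            "required": ["location"],
        },
    },
}
```

The more precisely the schema pins down each argument, the less guessing the model does when it fills them in.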

Step 4: Make the First API Call

Now we simulate a user asking a question. We send their question, along with the list of available tools, to the OpenAI API.

messages = [{"role": "user", "content": "What's the weather like in Boston?"}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    tools=tools,
    tool_choice="auto",  # This tells the model it can pick a tool
)

response_message = response.choices[0].message

Step 5: Check if the LLM Wants to Use a Tool

The API response will tell us if the LLM decided to call a function. Let’s inspect it. If it found a tool to use, `response_message.tool_calls` will exist.

tool_calls = response_message.tool_calls

if tool_calls:
    print("LLM wants to call a function!")
    print(tool_calls)

If you run this, you’ll see something like this printed to your screen:

LLM wants to call a function!
[ChatCompletionMessageToolCall(id='call_abc123', function=Function(arguments='{"location": "Boston"}', name='get_current_weather'), type='function')]

See? It didn’t answer the question. It came back and said, “I need to run the `get_current_weather` function with the location set to ‘Boston’.”

Step 6: Execute the Function and Get the Result

Now, our Python code needs to fulfill the request. We’ll loop through the `tool_calls`, run our *actual* Python function, and store the result.

    # This is the part where your code actually runs the function
    available_functions = {
        "get_current_weather": get_current_weather,
    }
    messages.append(response_message) # Append the AI's response to the conversation history

    for tool_call in tool_calls:
        function_name = tool_call.function.name
        function_to_call = available_functions[function_name]
        function_args = json.loads(tool_call.function.arguments)
        
        # Call your function with the arguments provided by the model
        function_response = function_to_call(
            location=function_args.get("location")
        )

        # Add the function's result to the conversation history
        messages.append(
            {
                "tool_call_id": tool_call.id,
                "role": "tool",
                "name": function_name,
                "content": function_response,
            }
        )
    
    # Now we have the result. Time for the final step...

Step 7: Send the Result Back to the LLM for a Final Answer

We’ve done the work. Now we go back to the LLM one more time, show it the result of our function call, and ask it to summarize everything for the user.

    second_response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages
    )
    final_answer = second_response.choices[0].message.content
    print("Final Answer from LLM:")
    print(final_answer)

And the output will be a beautiful, natural language answer: `The weather in Boston is 72 degrees and sunny.`

Complete Automation Example

Let’s put it all together into one, clean, copy-pasteable script for a customer support bot that can check an order status.

import openai
import json

# 1. Setup
client = openai.OpenAI(api_key="YOUR_API_KEY")

# 2. Define your real-world function
def get_order_status(order_id):
    """Gets the status of a specific order from the database."""
    print(f"--- Checking database for order: {order_id} ---")
    # In a real scenario, this would query a database.
    if order_id == "ORD12345":
        return json.dumps({"order_id": "ORD12345", "status": "Shipped", "tracking_number": "1Z999AA10123456784"})
    elif order_id == "ORD67890":
        return json.dumps({"order_id": "ORD67890", "status": "Processing"})
    else:
        return json.dumps({"error": "Order not found"})

# 3. The main automation loop
def run_conversation(user_query):
    messages = [{"role": "user", "content": user_query}]
    
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_order_status",
                "description": "Get the current status of an e-commerce order using its ID.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "order_id": {
                            "type": "string",
                            "description": "The unique identifier for the order, e.g., ORD12345",
                        },
                    },
                    "required": ["order_id"],
                },
            }
        }
    ]
    
    # First call to see if a tool is needed
    response = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )
    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls

    # 4. Check if the model wants to call the tool
    if tool_calls:
        available_functions = {
            "get_order_status": get_order_status,
        }
        messages.append(response_message)
        
        # 5. Execute the tool call
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(
                order_id=function_args.get("order_id")
            )
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )
        
        # 6. Second call to get the final, natural language response
        second_response = client.chat.completions.create(
            model="gpt-4-turbo-preview",
            messages=messages,
        )
        return second_response.choices[0].message.content
    else:
        # If no tool is needed, just return the model's direct response
        return response_message.content

# --- Let's run it! ---
print("Bot: How can I help you today?")
user_input = "Hi, can you please tell me the status of my order ORD12345?"
print(f"You: {user_input}")
final_response = run_conversation(user_input)
print(f"Bot: {final_response}")

user_input = "Thanks! What about order ORD67890?"
print(f"You: {user_input}")
final_response = run_conversation(user_input)
print(f"Bot: {final_response}")

Real Business Use Cases

This isn’t just for weather bots. This one pattern can automate huge chunks of your business:

  1. SaaS Onboarding: A user signs up and a bot asks for their company name and domain. It then uses a function `create_new_account(company_name, domain)` that provisions a new instance in your backend.
  2. Internal IT Support: An employee Slacks the IT bot, “I can’t log in, can you reset my password?” The bot uses `get_employee_id_from_slack(user)` and then `initiate_password_reset(employee_id)`.
  3. Sales Lead Qualification: A chatbot on your website asks a prospect questions. When it has enough info, it calls `create_lead_in_salesforce(name, email, company, budget)` to create a new lead for the sales team.
  4. E-commerce Returns: A customer says, “I want to return order XYZ.” The bot calls `lookup_order(order_id)` to see if it’s eligible for return, and if so, calls `generate_return_label(order_id, email)` and emails them the shipping label.
  5. Content Management: A marketing manager tells a bot, “Draft a blog post about Q3 earnings and save it to WordPress.” The bot uses `generate_blog_draft(topic)` and then `upload_draft_to_wordpress(title, content)`.

Common Mistakes & Gotchas

  • Lazy Descriptions: The function’s `description` is EVERYTHING. If your description is `"gets order"`, the AI will have no idea what to do. Be specific: `"Retrieves the complete order details, including status and tracking number, using a unique order ID."`
  • Assuming Perfect Arguments: The LLM is good, but not perfect. Always validate the arguments it sends you before running a function that touches a database or performs a critical action.
  • Ignoring Errors: What if your function call fails? The database is down, the API is offline. You must catch that error in your code and send a message back to the LLM like `{"error": "Database connection failed"}`. The LLM can then gracefully tell the user, “I’m sorry, our systems are currently down, please try again in a few minutes.”
  • Security Is Your Job: I’ll say it again. The AI is a brilliant but sometimes naive intern. Don’t give it a function called `delete_entire_database()`. Treat every function call as an untrusted request that needs validation and permissions checks on your end.
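Those last three points can be folded into one defensive wrapper around the tool call. Here’s a minimal sketch (the `get_order_status` stand-in and the specific validation rules are illustrative, not prescriptive):

```python
import json

def get_order_status(order_id):
    # Stand-in for a real database lookup
    if order_id == "ORD12345":
        return {"order_id": order_id, "status": "Shipped"}
    raise KeyError(order_id)

def safe_tool_call(raw_arguments):
    """Validate model-supplied arguments, run the tool, and
    always return a JSON string the LLM can reason about."""
    try:
        args = json.loads(raw_arguments)
    except json.JSONDecodeError:
        return json.dumps({"error": "Arguments were not valid JSON"})

    order_id = args.get("order_id")
    # Validate BEFORE touching the database: right type, plausible shape
    if not isinstance(order_id, str) or not order_id.startswith("ORD"):
        return json.dumps({"error": f"Invalid order_id: {order_id!r}"})

    try:
        return json.dumps(get_order_status(order_id))
    except Exception:
        # Never let a backend failure crash the loop; tell the model instead
        return json.dumps({"error": "Order lookup failed, please try again later"})

print(safe_tool_call('{"order_id": "ORD12345"}'))  # happy path
print(safe_tool_call('{"order_id": 42}'))          # caught by validation
```

Because every branch returns JSON, the second API call always has something coherent to relay to the user, even when the backend misbehaves.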

How This Fits Into a Bigger Automation System

Function calling is the fundamental building block for almost all advanced AI automation. It’s the Lego brick you’ll use to build everything else.

  • CRM Automation: Your AI agent can now directly manipulate your CRM. It can create contacts, update deals, and log calls, all triggered by natural language commands.
  • RAG Systems: Function calling is the “R” (Retrieval) in RAG. The user asks a question, and the LLM decides to call a function `search_internal_documents(query)` to find relevant information *before* generating the answer.
  • Multi-Agent Workflows: Imagine one AI agent whose only tool is `search_the_web()`. Another agent’s only tool is `summarize_text()`. A master agent can give the first agent a task, get the results, and then hand them to the second agent for summarization. This is how complex problem-solving is orchestrated.
  • Voice Agents: When you talk to a modern voice assistant, it converts your speech to text, sends that text to an LLM which performs a function call (e.g., `set_timer(duration=10)`), gets a confirmation, and converts the text response back into speech. You just learned the core logic.
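Whatever the bigger system looks like, the dispatch pattern from Step 6 scales the same way: keep a registry mapping tool names to callables and route every model request through it. A sketch with two made-up tools (`search_the_web` and `summarize_text` here are placeholders, not real APIs):

```python
import json

# Hypothetical tools; in a real agent these would hit search APIs or databases
def search_the_web(query):
    return json.dumps({"results": [f"Top hit for {query}"]})

def summarize_text(text):
    return json.dumps({"summary": text[:40]})

# One registry, any number of tools; the model only ever picks a *name*
available_functions = {
    "search_the_web": search_the_web,
    "summarize_text": summarize_text,
}

def dispatch(name, raw_arguments):
    """Route a model-requested tool call to the matching Python function."""
    fn = available_functions.get(name)
    if fn is None:
        # Unknown names go back to the model as an error it can recover from
        return json.dumps({"error": f"Unknown tool: {name}"})
    return fn(**json.loads(raw_arguments))

print(dispatch("search_the_web", '{"query": "competitor pricing"}'))
```

Adding a new capability then means writing one function, one schema, and one registry entry; the loop itself never changes.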

What to Learn Next

Congratulations. You have officially graduated from building chatbots to building proto-agents. Your AI can now take action. It has hands.

But right now, it can only do one thing at a time. It waits for you to tell it what to do, it uses one tool, and it stops. What if it could use multiple tools, in sequence, to achieve a more complex goal? What if you could just tell it, “Research our top three competitors and write a summary of their pricing models,” and it could figure out the steps on its own?

That, my friends, is the world of true autonomous agents. And in our next lesson in the Academy, we are going to build one. We’ll take what we learned today and bolt on a reasoning engine using a framework like LangChain, giving our AI a goal, a toolbox, and the freedom to think.

Stay sharp. The factory is just getting started.
