
Give Your AI Hands: The Ultimate Guide to Function Calling

The Smartest Intern Who Can’t Do Anything

Imagine hiring a new intern. This intern is a genius. They’ve read every book ever written. You can ask them anything, and they’ll give you a brilliant, well-reasoned answer.

“What’s the best way to respond to an angry customer asking for a refund?” you ask. They draft a perfect, empathetic, brand-aligned email in seconds.

“Incredible!” you say. “Now, go look up that customer’s order in our system and send them that email.”

The intern just stares at you. Blankly. “I do not have hands,” it says in a monotone voice. “I cannot access your customer database. I cannot operate your email client. I am a brain in a jar.”

That useless intern? That’s every basic Large Language Model. It’s a miracle of intellect with no connection to the real world. It can talk the talk, but it can’t, for the life of it, walk the walk.

Today, we’re performing surgery. We’re giving the brain hands.

Why This Matters

This isn’t just a cool party trick. This is the single most important concept that separates a simple “chatbot” from a powerful “AI agent.” Without this, your AI is a glorified search engine. With it, your AI becomes a worker—an autonomous entity that can interact with your business systems.

  • Business Impact: Automate multi-step processes that involve looking up data, making decisions, and taking actions across different software (your CRM, your email, your project management tool).
  • Replaces: The human employee who acts as the “glue” between systems, copy-pasting information from a spreadsheet into an email, or looking up a customer in one tab to answer a question in another. This is the end of swivel-chair work.

When you master this, you stop building toys and start building engines. You build automations that don’t just answer questions, but get jobs done.

What This Tool / Workflow Actually Is

We’re talking about Function Calling (or as some vendors call it, “Tool Use”).

Let’s be dead simple about this. You are NOT giving the AI the ability to write and run its own code. That would be like giving your intern the keys to the server room on their first day. It’s a security nightmare.

Instead, function calling works like this:

  1. You write a set of tools (functions) in your own code, like getOrderStatus(orderId) or sendEmail(recipient, subject, body).
  2. You describe these tools to the AI in a specific format, basically giving it a menu of available actions.
  3. When a user gives the AI a command, the AI doesn’t answer directly. Instead, it thinks, “Hmm, to answer this, I need to use one of my tools.”
  4. The AI then sends you back a structured message, like: “Please run the tool getOrderStatus with the parameter orderId = '12345'.”
  5. Your code receives this message, runs your *own* trusted function, gets the result (e.g., “Shipped”), and sends that result back to the AI.
  6. Finally, the AI uses that result to form a natural language answer for the user: “Your order #12345 has been shipped!”

The AI is the manager who decides *what* to do. Your code is the trusted employee who actually *does* it. This gives you all the power of AI decision-making with all the security of running your own code.
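
To make step 4 concrete, here is roughly what that structured message looks like when it comes back from OpenAI's Chat Completions API. The id value is made up for illustration, and note that the arguments arrive as a JSON string your code has to parse:

{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "getOrderStatus",
        "arguments": "{\"orderId\": \"12345\"}"
      }
    }
  ]
}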

Prerequisites

This is a small step up, but you’ve got this. We’re building on what we’ve learned.

  1. An OpenAI Account: While other models have this feature, OpenAI’s implementation is the industry standard and the easiest to learn with. You’ll need an API key from platform.openai.com.
  2. Python 3: You should have this from our last lesson. If not, get it installed.
  3. The OpenAI Library: We’ll need to install it. It’s one simple command.

That’s it. No advanced degrees in computer science required.

Step-by-Step Tutorial

Let’s build the simplest possible example: an AI that can check the weather. We’ll give it one tool, and one tool only.

Step 1: Setup and Installation

In your terminal, install the OpenAI library:

pip install openai

Now, create a new Python file called agent_with_hands.py.

Step 2: Define Your Tool

First, let’s write a simple Python function. This is our “tool.” In the real world, this function might call a weather API, but for this lesson, we’ll just fake it.

import json

def get_current_weather(location):
    """Get the current weather in a given location"""
    print(f"--- Pretending to fetch weather for {location} ---")
    if "tokyo" in location.lower():
        return json.dumps({"location": "Tokyo", "temperature": "15", "unit": "celsius"})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "72", "unit": "fahrenheit"})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})

Step 3: The Main Logic (The Two-Step Conversation)

This is the core of it. We will make two calls to the AI: the first to ask it what to do, and the second to give it the result of what we did.

Add this code to your file. Replace "YOUR_OPENAI_API_KEY" with your actual key.
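
Hardcoding the key is fine for a quick local experiment, but if you'd rather keep it out of the file, one common alternative is to read it from an environment variable. A minimal, optional sketch:

import os
from openai import OpenAI

# Reads the key from the OPENAI_API_KEY environment variable instead of hardcoding it
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])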

from openai import OpenAI

# --- CONFIGURATION ---
client = OpenAI(api_key="YOUR_OPENAI_API_KEY")
model = "gpt-4o" # Or any model that supports function calling

# --- STEP 1: DEFINE TOOLS AND SEND THE FIRST REQUEST ---
def run_conversation(user_prompt):
    print(f"User: {user_prompt}")

    # The first message from the user
    messages = [{"role": "user", "content": user_prompt}]

    # Describe our available tools to the model
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        }
                    },
                    "required": ["location"],
                },
            },
        }
    ]

    # First API call: Send the prompt and tools to the model
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        tools=tools,
        tool_choice="auto",  # The model decides whether to call a function
    )

    response_message = response.choices[0].message

    # --- STEP 2: CHECK IF THE MODEL WANTS TO CALL A TOOL ---
    tool_calls = response_message.tool_calls
    if tool_calls:
        print("\
Model wants to call a tool!")
        # Our simple example only handles one tool call; a loop version is sketched right after this example
        tool_call = tool_calls[0]
        function_name = tool_call.function.name
        function_args = json.loads(tool_call.function.arguments)

        print(f"Function to call: {function_name}")
        print(f"Arguments: {function_args}")

        # --- STEP 3: CALL YOUR FUNCTION AND GET THE RESULT ---
        if function_name == "get_current_weather":
            function_response = get_current_weather(location=function_args.get("location"))

            # --- STEP 4: SEND THE RESULT BACK TO THE MODEL ---
            print(f"Result of function: {function_response}\
")
            messages.append(response_message) # Add the assistant's turn
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )

            # Second API call: Get a natural language response
            second_response = client.chat.completions.create(
                model=model,
                messages=messages,
            )
            final_answer = second_response.choices[0].message.content
            print(f"Final AI Response: {final_answer}")
        else:
            print(f"Error: Unknown function {function_name}")
    else:
        # The model didn't want to call a function, it just answered directly
        final_answer = response_message.content
        print(f"Final AI Response: {final_answer}")

# --- EXECUTION ---
run_conversation("What is the weather like in Tokyo?")

Run this script. Watch the output carefully. You will see it print the model’s decision, the arguments, your function’s output, and finally, the AI’s user-friendly summary. You just gave a brain hands!
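
One more note before we move on: the script above handles a single tool call, but the model can ask for several tools in one turn. Here is a minimal sketch of how that section could be generalized into a loop with a small dispatch table. It assumes the same client, model, messages, tools, and get_current_weather from the script above; treat it as a pattern, not a drop-in replacement:

# Sketch: handle any number of tool calls in one model turn
AVAILABLE_FUNCTIONS = {"get_current_weather": get_current_weather}

response = client.chat.completions.create(model=model, messages=messages, tools=tools, tool_choice="auto")
response_message = response.choices[0].message

if response_message.tool_calls:
    messages.append(response_message)  # Add the assistant's turn, including its tool calls
    for tool_call in response_message.tool_calls:
        function_to_run = AVAILABLE_FUNCTIONS.get(tool_call.function.name)
        if function_to_run is None:
            result = json.dumps({"error": f"Unknown function: {tool_call.function.name}"})
        else:
            args = json.loads(tool_call.function.arguments)
            result = function_to_run(**args)  # Run our own trusted code
        messages.append({"tool_call_id": tool_call.id, "role": "tool",
                         "name": tool_call.function.name, "content": result})
    # Second API call: let the model turn all the results into a natural language answer
    second_response = client.chat.completions.create(model=model, messages=messages)
    print(second_response.choices[0].message.content)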

Complete Automation Example

Let’s do something more useful. A simple customer service bot that can check an order status.

Here’s the full, copy-paste-ready code. We’ll define a fake customer database and a tool to look up orders.

import json
from openai import OpenAI

# --- FAKE DATABASE & TOOL ---
FAKE_ORDER_DB = {
    "123-456": {"status": "Shipped", "tracking_number": "XYZ987"},
    "789-012": {"status": "Processing", "expected_delivery": "Tomorrow"}
}

def get_order_details(order_id):
    """Retrieves the status and details for a given order ID."""
    print(f"--- Searching for order {order_id} in database ---")
    return json.dumps(FAKE_ORDER_DB.get(order_id, {"status": "Not Found"}))

# --- Main Logic (very similar to the weather example) ---

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")
model = "gpt-4o"

def create_customer_service_response(user_prompt):
    print(f"Customer says: {user_prompt}")
    messages = [{"role": "user", "content": user_prompt}]

    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_order_details",
                "description": "Get the status and details of a customer's order using the order ID",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "order_id": {
                            "type": "string",
                            "description": "The customer's order ID, e.g., '123-456'",
                        }
                    },
                    "required": ["order_id"],
                },
            },
        }
    ]

    # 1. First call to see if a tool is needed
    response = client.chat.completions.create(model=model, messages=messages, tools=tools, tool_choice="auto")
    response_message = response.choices[0].message

    # 2. Check for tool calls
    if response_message.tool_calls:
        tool_call = response_message.tool_calls[0]
        if tool_call.function.name == "get_order_details":
            # 3. Call our function
            args = json.loads(tool_call.function.arguments)
            tool_response = get_order_details(order_id=args.get("order_id"))

            # 4. Send the result back for a final answer
            messages.append(response_message)
            messages.append({"tool_call_id": tool_call.id, "role": "tool", "name": "get_order_details", "content": tool_response})
            final_response = client.chat.completions.create(model=model, messages=messages)
            print(f"Agent says: {final_response.choices[0].message.content}")
    else:
        print(f"Agent says: {response_message.content}")

# --- EXECUTION ---
create_customer_service_response("Hi, can you check on my order #123-456 please?")
print("\
------\
")
create_customer_service_response("I can't find my order ID, it was for a red shirt.")

Notice how for the first query, it correctly finds and uses the tool. For the second, it intelligently responds that it *needs* the order ID. This is the beginning of a truly useful agent.

Real Business Use Cases

The pattern is always the same: define a tool, let the AI call it.

  1. Sales Team: A tool called add_contact_to_crm(name, email, company). The sales rep can just type “Add Jane Doe from Acme Inc, jane@acme.com, to the CRM.” The AI figures out the arguments and calls the tool (a sketch of this tool’s definition follows the list).
  2. Project Management: Tools like create_task(project, title, assignee) and get_project_status(project). You can build a chatbot that lets you manage your entire project board using natural language.
  3. IT Support: A tool called reset_password(employee_email) that integrates with your Active Directory. An employee can ask a bot, “I’m locked out, can you reset my password?” and it kicks off the secure workflow.
  4. E-commerce Backend: A tool called issue_refund(order_id, amount). A customer service agent can tell the system, “Issue a full refund for order 789-012.” The AI confirms the details and calls the function that talks to your payment processor.
  5. Calendar Management: A tool called create_event(title, attendees, time, duration). You can say, “Schedule a 30-minute meeting with Bob tomorrow at 2 PM to discuss the Q3 report,” and the agent will parse it all and call your Google Calendar API.
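
To show how little changes between use cases, here is roughly what the tool definition for the sales example could look like. The name add_contact_to_crm and its parameters are hypothetical; you would implement the function yourself against your CRM's API:

# Hypothetical tool definition for the sales use case (add_contact_to_crm is your own code)
tools = [
    {
        "type": "function",
        "function": {
            "name": "add_contact_to_crm",
            "description": "Add a new contact to the CRM using their name, email address, and company",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {"type": "string", "description": "Full name, e.g. 'Jane Doe'"},
                    "email": {"type": "string", "description": "Email address, e.g. 'jane@acme.com'"},
                    "company": {"type": "string", "description": "Company name, e.g. 'Acme Inc'"},
                },
                "required": ["name", "email", "company"],
            },
        },
    }
]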

Common Mistakes & Gotchas
  • Bad Descriptions: The function’s description is EVERYTHING. If you write a lazy or unclear description, the AI will have no idea when to use your tool. Be specific. Give examples.
  • Not Validating AI Output: Never blindly trust the arguments the AI gives you. Before you run issue_refund(order_id, amount), your code should validate that `amount` is a valid number and `order_id` looks like a real ID. Sanitize your inputs (a small validation sketch follows this list).
  • Assuming a Tool Will Be Called: Sometimes, the AI will (and should) just answer a question directly. Your code must handle the case where `tool_calls` is empty.
  • Confusing the AI: Don’t give the AI two different tools that do almost the same thing. It won’t know which one to pick. Make your tools distinct and purposeful.
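
Here is a minimal sketch of that kind of validation for the hypothetical issue_refund example from above. The exact rules are yours to choose; the point is that your code, not the model, gets the final say before anything irreversible happens:

import json
import re

def handle_issue_refund_call(function_args):
    """Validate model-supplied arguments before calling your own (hypothetical) issue_refund tool."""
    order_id = function_args.get("order_id", "")
    amount = function_args.get("amount")

    # Reject anything that doesn't look like one of our order IDs, e.g. '789-012'
    if not re.fullmatch(r"\d{3}-\d{3}", str(order_id)):
        return json.dumps({"error": "Invalid order ID"})

    # Reject missing, non-numeric, non-positive, or implausibly large amounts
    if not isinstance(amount, (int, float)) or amount <= 0 or amount > 10_000:
        return json.dumps({"error": "Invalid refund amount"})

    return issue_refund(order_id=order_id, amount=amount)  # Your own trusted function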

How This Fits Into a Bigger Automation System

Function calling is the nervous system of your entire automation factory. It’s how the central AI brain connects to all the other machines.

  • CRM Integration: Your get_contact and update_contact tools are function calls that hit your CRM’s API.
  • Multi-Agent Workflows: You can create a “dispatcher” agent. Its only tool is route_to_specialist(task_description). Based on the user’s query, it routes the task to another, more specialized AI agent (a small sketch of this tool follows the list).
  • RAG Systems: You can have a tool called search_internal_documents(query). When the AI doesn’t know an answer, its first step is to call that function to retrieve relevant information before synthesizing a final response.
  • Voice Agents: Remember our real-time agent from the Groq lesson? Now you can give it hands. A user can say, “Hey, what’s on my calendar today?” The voice is transcribed, the AI decides to call the get_calendar_events() tool, your code runs it, and the final result is read back to the user in seconds.
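
As a taste of the dispatcher idea, here is a hypothetical tool definition for route_to_specialist. The specialist names and parameters are made up for illustration; the useful trick is the enum, which forces the model to pick from a fixed list of specialists instead of inventing one:

# Hypothetical dispatcher tool: the model's only job is to pick a specialist and summarize the task
tools = [
    {
        "type": "function",
        "function": {
            "name": "route_to_specialist",
            "description": "Route the user's request to the specialist agent best suited to handle it",
            "parameters": {
                "type": "object",
                "properties": {
                    "specialist": {
                        "type": "string",
                        "enum": ["billing", "tech_support", "sales"],
                        "description": "Which specialist agent should handle this request",
                    },
                    "task_description": {"type": "string", "description": "A short summary of the task"},
                },
                "required": ["specialist", "task_description"],
            },
        },
    }
]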

What to Learn Next

Okay, let’s recap. We have a brain (the LLM). We made it fast (Groq). And now we’ve given it hands (Function Calling).

Our agent can think, and it can act. But it has the memory of a goldfish. Every conversation starts from scratch. It can’t remember who you are or what you talked about five minutes ago, and it doesn’t know anything about your business that isn’t publicly available on the internet.

That’s the final piece of the puzzle: giving our agent a long-term memory.

In the next lesson, we’re diving headfirst into the world of **Vector Databases and Retrieval-Augmented Generation (RAG)**. We’re going to teach our AI how to read our company’s private documents, PDFs, and internal websites, so it can answer questions with perfect context and knowledge. We’re going to build it a real, permanent brain.

The intern is about to become your most knowledgeable employee.
