Claude 3 Tool Use: Give Your AI Hands

The Brain in the Jar

Imagine you have the world’s smartest assistant. Her name is Brenda. Brenda knows everything. She can answer any question, write perfect emails, and diagnose complex problems in seconds. But there’s a catch. Brenda is a brain in a jar.

You ask, “Brenda, what’s the status of order #451?”

She replies, flawlessly, “To find the status of order #451, one would typically access the company’s order management system, query the database using the unique order identifier, and retrieve the value from the ‘status’ column.”

Thanks, Brenda. Super helpful. She can tell you *how* to do the job, but she can’t *do* the job. She has no hands. She can’t click, she can’t type, she can’t log into your Shopify store.

Most AI chatbots are Brenda. They are brains in jars. Today, we’re giving Brenda hands. We’re connecting the AI’s brain to the real world, so it can stop talking about work and start doing it. Welcome back to the Academy.

Why This Matters

An AI that only processes text is a novelty. An AI that can interact with your other software is a workforce. The difference is astronomical.

The workflow we’re building today—called “Tool Use” or “Function Calling”—is the bridge between the AI’s intelligence and your business’s reality. It lets the AI operate your other tools.

  • Instead of telling you *how* to check a customer’s subscription, it checks Stripe and gives you the answer.
  • Instead of explaining *how* to create a support ticket, it connects to Jira and creates the ticket.
  • Instead of describing *how* to book a meeting, it accesses your calendar and books the meeting.

This isn’t about saving a few seconds on a Google search. This is about building autonomous agents that can execute entire business processes. This is how you build a true AI employee, not just a smarter search bar.

What This Tool / Workflow Actually Is

“Tool Use” sounds complicated, but it’s a simple, elegant conversation between your code and the AI. Think of it like a manager talking to a very smart, very literal employee.

  1. You (The Manager) give the AI a “menu” of available tools. You say, “Listen, AI, if you need to, you have access to these three tools: `get_order_status`, `check_inventory`, and `issue_refund`. Here’s exactly what each tool does and what information it needs.”
  2. A user asks the AI a question. E.g., “Where’s my stuff? My order is #ABC-123.”
  3. The AI looks at its menu. It thinks, “Aha, the user’s question matches the description for the `get_order_status` tool. That tool needs an `order_id`. I see one in the user’s message: ‘ABC-123’.”
  4. The AI does NOT run the tool. Instead, it turns back to your code and says, “Hey, Manager! I need you to run the `get_order_status` tool with the order ID `ABC-123`.”
  5. Your code (The Manager) runs the actual tool. It makes a real API call to your e-commerce database, gets the status (“shipped”), and brings the result back.
  6. Your code gives the result to the AI. “Okay, AI, I ran that for you. The status is ‘shipped’.”
  7. The AI formulates a human-friendly response. It takes the raw data and turns it into a helpful message: “Great news! Your order #ABC-123 has shipped and is on its way.”

It’s a two-step dance: first, the AI figures out *what* to do, then your code *does it* and reports back. This is a critical security feature. The AI never gets your secret API keys or direct access to your database.
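
Concretely, steps 4 and 6 are just structured messages passed back and forth. As a rough sketch, the AI’s “run this tool for me” request and your “here’s the result” reply look like this (the id value here is a made-up placeholder):

# The tool_use block the AI returns in step 4 (the id is a made-up placeholder)
tool_use_request = {"type": "tool_use", "id": "toolu_abc123", "name": "get_order_status", "input": {"order_id": "ABC-123"}}

# The tool_result block your code sends back in step 6, wrapped inside a "user" message
tool_result_reply = {"type": "tool_result", "tool_use_id": "toolu_abc123", "content": '{"status": "shipped"}'}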

Prerequisites

If you can follow a recipe, you can do this. I promise.

  1. An Anthropic Account: Go to Anthropic’s website, sign up, and create an API key. You get some free credits to start. Keep that key handy.
  2. Python 3: If you don’t have it, a quick search for “install python” will guide you. Don’t be intimidated.
  3. The Anthropic Python Library: Open your terminal or Command Prompt and run this simple command:
pip install anthropic

That’s it. You are now more qualified to build an AI agent than 99% of the world.

Step-by-Step Tutorial

Let’s build a simple AI assistant that can look up the status of a shipping order. We’ll simulate the database for now.

Step 1: Setup and Client

Create a file named agent.py. The first few lines are just for setup. Replace "YOUR_CLAUDE_API_KEY" with your actual key.

import anthropic
import json

client = anthropic.Anthropic(
    # You can omit api_key entirely if you set the ANTHROPIC_API_KEY environment variable
    api_key="YOUR_CLAUDE_API_KEY",
)
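
If you’d rather not paste your key into the file (a good habit), set it as an environment variable instead. As a quick sketch, assuming you’ve already run export ANTHROPIC_API_KEY="YOUR_CLAUDE_API_KEY" in your terminal, the client picks it up automatically:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment by default
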
Step 2: Define Your Tool Menu

This is where we describe the tools available to our AI. The descriptions must be crystal clear. The AI uses these descriptions to decide which tool to use. Be specific!

tools = [
    {
        "name": "get_order_status",
        "description": "Get the current status of a shipping order. Use this for any questions about order tracking, shipping, or delivery.",
        "input_schema": {
            "type": "object",
            "properties": {
                "order_id": {
                    "type": "string",
                    "description": "The unique identifier for the order, e.g., ORD-12345."
                }
            },
            "required": ["order_id"]
        }
    }
]

Step 3: Define The Real Tool Logic

This is the Python function that does the *actual* work. In the real world, this would call the Shopify API. For our lesson, it’s a simple function that pretends to be a database.

def get_order_status(order_id):
    """This is our real tool. It looks up the order status."""
    print(f"--- Calling the real get_order_status tool for order: {order_id} ---")
    
    # A fake database
    fake_database = {
        "ORD-12345": {"status": "shipped", "tracking_number": "1Z999AA10123456784"},
        "ORD-67890": {"status": "processing", "tracking_number": None}
    }
    
    status_info = fake_database.get(order_id, {"status": "not_found", "tracking_number": None})
    return json.dumps(status_info)
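
Before involving the AI at all, it’s worth calling the function by hand to confirm it behaves:

# Quick sanity check of the tool on its own
print(get_order_status("ORD-12345"))  # {"status": "shipped", "tracking_number": "1Z999AA10123456784"}
print(get_order_status("ORD-99999"))  # {"status": "not_found", "tracking_number": null}
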
Step 4: The First Conversation Turn – User to AI

Now, let’s start the conversation. We send the user’s message and our tool menu to Claude.

user_message = "Hey, where is my order? The ID is ORD-12345."

print(f"User: {user_message}")

# This is the first call to the API
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[{"role": "user", "content": user_message}],
    tools=tools,
    tool_choice={"type": "auto"}
)

# The reply can contain a text block plus a tool_use block, so check the stop
# reason rather than assuming the first content block is the tool request.
print(f"AI Stop Reason: {message.stop_reason}")

If you run this now, you’ll see the AI doesn’t answer with text. It will say AI Stop Reason: tool_use. The AI is asking us to run a tool!
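
If you want to see exactly what the AI is asking for, you can pull the tool_use block out of the response content (the reply is a list of blocks, and a short text block may come before the tool request):

# Find the tool_use block and inspect the AI's request
tool_use_block = next(b for b in message.content if b.type == "tool_use")
print(tool_use_block.name)   # get_order_status
print(tool_use_block.input)  # {'order_id': 'ORD-12345'}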

Complete Automation Example

Okay, let’s put it all together into a single script that completes the entire two-step dance. This script will handle the AI’s request, run our real tool, and then send the result back to get a final, friendly answer.

Copy and paste this entire block into your agent.py file.

import anthropic
import json

# --- 1. SETUP ---
client = anthropic.Anthropic(
    api_key="YOUR_CLAUDE_API_KEY",
)

# --- 2. DEFINE YOUR TOOL "MENU" ---
tools = [
    {
        "name": "get_order_status",
        "description": "Get the current status of a shipping order. Use this for any questions about order tracking, shipping, or delivery.",
        "input_schema": {
            "type": "object",
            "properties": {
                "order_id": {
                    "type": "string",
                    "description": "The unique identifier for the order, e.g., ORD-12345."
                }
            },
            "required": ["order_id"]
        }
    }
]

# --- 3. DEFINE YOUR REAL TOOL LOGIC ---
def get_order_status(order_id):
    """This is our real tool. In a real app, this would call a database or a shipping API."""
    print(f"--- Calling the 'get_order_status' tool for order: {order_id} ---")
    fake_database = {
        "ORD-12345": {"status": "shipped", "tracking_number": "1Z999AA10123456784"},
        "ORD-67890": {"status": "processing", "tracking_number": None}
    }
    return json.dumps(fake_database.get(order_id, {"status": "not_found"}))

# --- 4. START THE CONVERSATION ---
user_message = "Hey, where is my order? The ID is ORD-12345."
print(f"\
User: {user_message}")

# Keep track of the conversation
conversation_history = [{"role": "user", "content": user_message}]

# --- 5. FIRST API CALL (AI DECIDES WHICH TOOL TO USE) ---
initial_response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=conversation_history,
    tools=tools,
)

# Keep the assistant's full reply; it may contain a text block AND a tool_use block
conversation_history.append({"role": "assistant", "content": initial_response.content})

# --- 6. CHECK IF AI WANTS TO USE A TOOL AND EXECUTE IT ---
if initial_response.stop_reason == "tool_use":
    # Find the tool_use block in the reply (a short text block may come before it)
    tool_use_block = next(b for b in initial_response.content if b.type == "tool_use")
    tool_name = tool_use_block.name
    tool_input = tool_use_block.input
    tool_use_id = tool_use_block.id

    print(f"AI wants to use the '{tool_name}' tool with input: {tool_input}")

    # Run the actual tool function
    tool_result = get_order_status(tool_input['order_id'])
    print(f"Tool Result: {tool_result}")

    # --- 7. SECOND API CALL (GIVE THE RESULT BACK TO THE AI) ---
    # Add the tool result to the conversation history
    conversation_history.append({
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": tool_use_id,
                "content": tool_result
            }
        ]
    })
    
    # The AI now generates its final, natural language response
    final_response = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        messages=conversation_history,
        tools=tools,
    )
    final_text = final_response.content[0].text
else:
    # No tool was needed; the AI answered directly with text
    final_text = initial_response.content[0].text

# --- 8. PRINT THE FINAL, HUMAN-READABLE ANSWER ---
print(f"\
AI: {final_text}")

When you run this, you will see the entire process play out, ending with a perfect, context-aware answer like: “It looks like your order ORD-12345 has shipped! The tracking number is 1Z999AA10123456784.”
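
Your terminal output will look roughly like this (the wording of the final answer varies from run to run):

User: Hey, where is my order? The ID is ORD-12345.
AI wants to use the 'get_order_status' tool with input: {'order_id': 'ORD-12345'}
--- Calling the 'get_order_status' tool for order: ORD-12345 ---
Tool Result: {"status": "shipped", "tracking_number": "1Z999AA10123456784"}

AI: It looks like your order ORD-12345 has shipped! The tracking number is 1Z999AA10123456784.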

Real Business Use Cases

This exact two-step pattern can automate thousands of tasks. Just swap out the tool definition and the tool logic.

  1. SaaS Helpdesk: Tool: lookup_user_subscription(email). Connects to Stripe to check a user’s plan, billing cycle, and status. It can answer “Why was my card just charged?” automatically (see the sketch after this list).
  2. Internal HR Bot: Tool: get_remaining_pto(employee_id). Connects to your HR system (like Workday or BambooHR) to let employees ask a Slackbot “How many vacation days do I have left?”
  3. Sales Automation: Tool: create_crm_lead(name, email, company). Parse an email from a potential client and automatically create a new lead in Salesforce or HubSpot, assigning it to the right salesperson.
  4. E-commerce Inventory Management: Tool: check_stock_level(sku). An internal tool for the sales team to ask “Do we have any more of the blue widgets in the main warehouse?” without logging into the inventory system.
  5. Smart Home / IoT Control: Tool: set_thermostat(temperature, room). A voice assistant that can actually control your home devices by calling their specific APIs. “Hey Claude, make the living room warmer.”
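
As a concrete sketch of the first use case, the helpdesk tool’s “menu entry” might look like this. The name, the email field, and the Stripe lookup are hypothetical, purely for illustration:

subscription_tool = {
    "name": "lookup_user_subscription",
    "description": "Look up a customer's subscription in Stripe by their account email. Use this for any questions about plans, billing cycles, or charges.",
    "input_schema": {
        "type": "object",
        "properties": {
            "email": {
                "type": "string",
                "description": "The customer's account email address, e.g. jane@example.com."
            }
        },
        "required": ["email"]
    }
}
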
Common Mistakes & Gotchas

  • Ambiguous Tool Descriptions: The AI is only as good as its menu. A tool named `getData` with the description “Gets data” is useless. Be hyper-specific: `get_user_data_from_crm_using_email`.
  • Not Handling Tool Failures: What if your Shopify API is down? Your Python function will fail. You need to catch that error and return a clear error message to the AI (e.g., `{"error": "The order system is currently unavailable."}`); there’s a sketch of this after the list. The AI is smart enough to then tell the user, “I’m sorry, I can’t access the order system right now, please try again later.”
  • Complex JSON Inputs: Start with simple tool inputs (like a single string). If you need to pass complex data, make sure your `input_schema` is perfectly structured, as the AI will follow it to the letter.
  • Forgetting the Conversation History: On the second call, you MUST include the entire history: the original user message, the AI’s first `tool_use` response, and your `tool_result` message. The AI is stateless; you have to remind it of the whole story.
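
Here is a minimal sketch of that second gotcha applied to our get_order_status tool. It drops into the script above (json is already imported there); the try/except wrapper is the point, and in a real app the code inside it would be your Shopify or database call, which is what can actually fail:

def get_order_status(order_id):
    """Same tool as before, but failures become a readable message instead of a crash."""
    try:
        # In a real app, this is where an API call could time out or raise an exception
        fake_database = {
            "ORD-12345": {"status": "shipped", "tracking_number": "1Z999AA10123456784"}
        }
        return json.dumps(fake_database.get(order_id, {"status": "not_found"}))
    except Exception:
        # The AI reads this and apologizes to the user; your script keeps running
        return json.dumps({"error": "The order system is currently unavailable."})
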
How This Fits Into a Bigger Automation System

This isn’t just for chatbots. This is a core component for building any kind of autonomous agent.

  • The “User” could be another AI. Imagine a “Manager AI” that decides on a high-level plan, then delegates tasks to “Worker AIs” by telling them which tools to use.
  • This is the heart of RAG (Retrieval-Augmented Generation). One of the most powerful tools you can give an AI is one called `search_internal_documents`. When the AI doesn’t know an answer, it can use that tool to search your company’s knowledge base, get the relevant info, and then formulate an answer.
  • Chain multiple tools. A user might ask, “Who is the main contact for our biggest client, and what was their last order?” The AI would first need to call a `get_biggest_client` tool, then use that result to call the `get_last_order` tool. A rough sketch of that loop follows below.
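
To give you a taste of that chaining, here is a rough sketch of the pattern, picking up the client, tools, conversation_history, and get_order_status from the script above. The run_tool dispatcher is a hypothetical helper, and the loop simply keeps going as long as the AI keeps asking for tools:

def run_tool(name, tool_input):
    """Hypothetical dispatcher: route the AI's request to the matching Python function."""
    if name == "get_order_status":
        return get_order_status(tool_input["order_id"])
    return json.dumps({"error": f"Unknown tool: {name}"})

response = client.messages.create(model="claude-3-opus-20240229", max_tokens=1024,
                                  messages=conversation_history, tools=tools)
while response.stop_reason == "tool_use":
    # Remember what the AI asked for, run it, and hand the result back
    conversation_history.append({"role": "assistant", "content": response.content})
    tool_block = next(b for b in response.content if b.type == "tool_use")
    result = run_tool(tool_block.name, tool_block.input)
    conversation_history.append({
        "role": "user",
        "content": [{"type": "tool_result", "tool_use_id": tool_block.id, "content": result}],
    })
    response = client.messages.create(model="claude-3-opus-20240229", max_tokens=1024,
                                      messages=conversation_history, tools=tools)

print(response.content[0].text)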

You’ve just learned the fundamental pattern for building AI systems that don’t just talk—they *do*.

What to Learn Next

You gave the brain hands. It can pick up one tool at a time and use it perfectly. You’ve built an assistant.

But what if a job requires more than one tool? What if a process has three steps? Or five? How do you build an AI that can function not just as an assistant, but as a project manager?

In our next lesson, we’re going to explore multi-step tool use. We’ll build an agent that can reason through a complex request, create a plan, call multiple tools in sequence, and even handle errors along the way. We’re moving from single tasks to complete workflows.

You’ve built the assembly line worker. Next, we build the factory foreman.

Don’t miss it.
