AI Instructor Live Labs Included

OpenAI: Tool Calling & Structured Outputs

Build production-grade tool-calling systems with JSON Schema definitions, parallel execution, Pydantic structured outputs, and role-based permission enforcement.

Intermediate
9h 55m
10 Lessons
OPENAI-201
OpenAI Tool Calling Developer Badge


About This Course

Build reliable tool-calling systems using the OpenAI function calling API with parallel execution, permission boundaries, and production-grade error handling. Learn to define JSON Schema tool definitions, orchestrate parallel tool calls, enforce structured outputs with Pydantic, implement role-based tool permissions, and build a production tool orchestration runtime with retry logic and execution logging.

Course Curriculum

10 Lessons
01
AI Lesson

JSON Schema Tool Definitions

30m

Learn how function calling works in the OpenAI Responses API. Covers defining tool schemas with JSON Schema, strict mode, tool_choice parameter, and designing reliable tool interfaces with clear names, descriptions, and parameter definitions.
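To give a flavor of what this lesson covers, here is a minimal sketch of a strict-mode tool definition in the flat tool format the Responses API uses; the tool name, description, and parameters are illustrative, not the lab's exact schema:

```python
# A hypothetical get_weather tool schema (names and wording are illustrative).
get_weather_tool = {
    "type": "function",
    "name": "get_weather",
    "description": "Get the current weather for a location.",
    "strict": True,  # strict mode: arguments must match the schema exactly
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and country, e.g. 'Paris, France'",
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "Temperature unit",
            },
        },
        "required": ["location", "unit"],
        "additionalProperties": False,  # strict mode requires this
    },
}
```

Clear names and descriptions like these are what the model reads when deciding whether and how to call the tool.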

02
Lab Exercise

Weather Lookup Tool - Lab Exercises

1h 25m · 1 Exercise

Build a weather assistant with a get_weather(location, unit) tool. Implements the complete tool invocation loop: model decides to call the tool, arguments are parsed and executed, the result is fed back, and the model generates a natural language response. Demonstrates JSON Schema tool definition, tool_choice=auto, and non-tool fallback.

Complete the Tool Invocation Loop: Implement run_weather_assistant() to call the OpenAI Responses API with the get_weather tool schema, detect function calls in response.output, execute the tool, feed the result back, and return the final response. ~40 min
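The middle of that invocation loop can be sketched without any network calls. Assuming a hypothetical stub tool and a local registry, this shows how function_call items from response.output are parsed, executed, and turned into function_call_output items to feed back:

```python
import json

def fake_get_weather(location: str, unit: str) -> dict:
    # Stand-in for a real weather lookup.
    return {"location": location, "temperature": 21, "unit": unit}

TOOL_REGISTRY = {"get_weather": fake_get_weather}

def handle_function_calls(output_items: list[dict]) -> list[dict]:
    results = []
    for item in output_items:
        if item.get("type") != "function_call":
            continue  # skip text or reasoning items
        fn = TOOL_REGISTRY[item["name"]]
        args = json.loads(item["arguments"])  # arguments arrive as a JSON string
        results.append({
            "type": "function_call_output",
            "call_id": item["call_id"],
            "output": json.dumps(fn(**args)),
        })
    return results

# A mocked model turn that requested one tool call:
mock_output = [{
    "type": "function_call",
    "call_id": "call_1",
    "name": "get_weather",
    "arguments": '{"location": "Paris, France", "unit": "celsius"}',
}]
print(handle_function_calls(mock_output))
```

In the lab, the returned function_call_output items go into the follow-up request so the model can write its natural-language answer.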
03
AI Lesson

Parallel Tool Calls & Tool Execution Loops

30m

Learn how to handle multiple simultaneous tool calls from the model, build retry-safe execution loops, handle tool errors gracefully, map tool call IDs to results, and understand the difference between deterministic and non-deterministic tools.
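One pattern from this lesson, sketched with hypothetical helper names: execute every tool call inside a try/except so a failing tool becomes an error payload keyed by its call_id, rather than crashing the loop:

```python
import json

def divide(a: float, b: float) -> float:
    return a / b

REGISTRY = {"divide": divide}

def execute_calls(calls: list[dict]) -> dict[str, str]:
    results = {}
    for call in calls:
        try:
            fn = REGISTRY[call["name"]]
            value = fn(**json.loads(call["arguments"]))
            results[call["call_id"]] = json.dumps({"ok": True, "result": value})
        except Exception as exc:
            # Feed the error back to the model instead of raising: it can
            # often recover by retrying with corrected arguments.
            results[call["call_id"]] = json.dumps({"ok": False, "error": str(exc)})
    return results

calls = [
    {"call_id": "c1", "name": "divide", "arguments": '{"a": 6, "b": 2}'},
    {"call_id": "c2", "name": "divide", "arguments": '{"a": 1, "b": 0}'},
]
print(execute_calls(calls))
```

Keeping results mapped by call_id is what lets each tool output be matched back to the exact call the model made.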

04
Lab Exercise

Multi-Source Aggregation Pipeline - Lab Exercises

1h 25m · 1 Exercise

Build a research aggregator that calls weather, news, and stock price tools in parallel — assembling a combined briefing from concurrent results. Demonstrates parallel tool execution, collecting multiple tool call IDs, executing all tools, and feeding all results back in a single follow-up request.

Implement Parallel Tool Execution: Implement run_briefing_assistant() to send a compound query to the model with all three tools, collect all parallel function calls from response.output, execute each tool, build a follow-up request with all tool results, and return the assembled briefing. ~40 min
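Since the three lookups are independent, the fan-out can be sketched with a thread pool; the stub tools below stand in for the lab's real weather, news, and stock tools:

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stub tools (the lab's implementations will differ).
def get_weather(city): return {"city": city, "temp_c": 18}
def get_news(topic): return {"topic": topic, "headline": "Markets steady"}
def get_stock(symbol): return {"symbol": symbol, "price": 101.5}

REGISTRY = {"get_weather": get_weather, "get_news": get_news, "get_stock": get_stock}

def run_parallel(calls: list[dict]) -> dict:
    # Fan the independent tool calls out to a thread pool, then zip the
    # results back to their call_ids in the original order.
    with ThreadPoolExecutor() as pool:
        futures = [
            pool.submit(REGISTRY[c["name"]], **json.loads(c["arguments"]))
            for c in calls
        ]
        return {c["call_id"]: f.result() for c, f in zip(calls, futures)}

calls = [
    {"call_id": "w1", "name": "get_weather", "arguments": '{"city": "Tokyo"}'},
    {"call_id": "n1", "name": "get_news", "arguments": '{"topic": "tech"}'},
    {"call_id": "s1", "name": "get_stock", "arguments": '{"symbol": "ACME"}'},
]
print(run_parallel(calls))
```

All three results then go back to the model in a single follow-up request, exactly as the exercise describes.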
05
AI Lesson

Structured Outputs with Pydantic

30m

Learn to use response_format with JSON Schema to guarantee output structure, use Pydantic models as the schema source, enforce strict mode, handle refusals, and build data extraction pipelines that turn unstructured text into typed objects.

06
Lab Exercise

Data Extraction Pipeline - Lab Exercises

1h 30m · 2 Exercises

Extract structured contact information from unstructured email text and line items from invoice text into typed Pydantic models. Uses client.responses.parse() with ContactInfo and InvoiceData Pydantic schemas. Covers handling Optional fields, nested models, and refusal detection.

Contact Information Extraction: Implement extract_contact() using client.responses.parse() with the ContactInfo Pydantic model. Handle Optional fields (email, phone, company, role) that may not appear in the text, and detect refusals via output_parsed is None. ~20 min
Invoice Line Item Extraction: Implement extract_invoice() using client.responses.parse() with the InvoiceData Pydantic model (which contains a nested list[LineItem]). Extract all line items, subtotal, tax, and total_due exactly as written. Handle the refusal case. ~25 min
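The nested-model part of the invoice exercise can be sketched like this; the class layout mirrors the described InvoiceData / LineItem shape but the exact lab fields may differ:

```python
from pydantic import BaseModel

# Hypothetical nested schema: an invoice owns a list of line items.
class LineItem(BaseModel):
    description: str
    quantity: int
    unit_price: float

class InvoiceData(BaseModel):
    line_items: list[LineItem]
    subtotal: float
    tax: float
    total_due: float

raw = """{
  "line_items": [
    {"description": "Widget", "quantity": 3, "unit_price": 9.5},
    {"description": "Gadget", "quantity": 1, "unit_price": 42.0}
  ],
  "subtotal": 70.5, "tax": 5.64, "total_due": 76.14
}"""
invoice = InvoiceData.model_validate_json(raw)
print(len(invoice.line_items), invoice.total_due)
```

Validation recurses into the nested list, so a malformed line item fails the whole parse rather than producing a half-typed object.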
07
AI Lesson

Tool Permission Boundaries & Output Verification

30m

Learn to restrict tool access by context, build tool access sandboxes with allowlists and denylists, enforce schema validation on tool outputs, guard against tool abuse with rate limits, and design least-privilege tool sets.

08
Lab Exercise

Tool Permission Sandbox - Lab Exercises

1h 30m · 2 Exercises

Build a role-based tool permission system where admin, editor, and viewer roles each have different tool access. The model only receives the tool schemas allowed for the current role, and all tool calls are verified against role permissions before execution.

Role-Based Tool Filtering: Implement get_tools_for_role() to look up allowed tool names from ROLE_PERMISSIONS and filter ALL_TOOLS to only return schemas whose name is in the allowed set. Viewer gets read_document only; editor gets read + write; admin gets all tools. ~15 min
Permission-Aware Assistant: Implement run_with_role() to call get_tools_for_role(role), send the user query with only the allowed tools, handle any tool calls using execute_tool(name, args, role) which enforces a second permission check at execution time, and return the final response. ~30 min
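The filtering half of this lab can be sketched as follows; the real lab's ROLE_PERMISSIONS and ALL_TOOLS will differ in detail, and the schemas here are pared down to names only:

```python
# Hypothetical data standing in for the lab's registries.
ALL_TOOLS = [
    {"type": "function", "name": "read_document", "parameters": {}},
    {"type": "function", "name": "write_document", "parameters": {}},
    {"type": "function", "name": "delete_document", "parameters": {}},
]

ROLE_PERMISSIONS = {
    "viewer": {"read_document"},
    "editor": {"read_document", "write_document"},
    "admin": {"read_document", "write_document", "delete_document"},
}

def get_tools_for_role(role: str) -> list[dict]:
    # Only the schemas the role may use are ever shown to the model;
    # an unknown role gets no tools at all.
    allowed = ROLE_PERMISSIONS.get(role, set())
    return [t for t in ALL_TOOLS if t["name"] in allowed]

print([t["name"] for t in get_tools_for_role("editor")])
```

Note the defense in depth: filtering the schemas keeps disallowed tools out of the model's view, and the second check inside execute_tool() catches anything that slips through.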
09
AI Lesson

Capstone Briefing: Production Tool Orchestration Runtime

20m

Reviews all Course 201 concepts: tool schemas, parallel calling, execution loops, structured outputs, and permissions. Previews the capstone project architecture — a production-grade tool orchestration runtime with retry logic, permission enforcement, schema validation, and structured logging.

10
Lab Exercise

Capstone Project: Tool Orchestration Runtime - Lab Exercises

1h 45m · 3 Exercises

Build a production-grade tool orchestration runtime with a tool registry, role-based access control, parallel tool execution, retry logic with exponential backoff, JSON Schema output validation, and a structured execution log tracking tool name, arguments, result, latency, and success/failure per invocation.

Output Schema Validation: Implement validate_output() to check that all required fields from the tool's output_schema are present in the result dict, and that each field's Python type matches the schema type string ("string" -> str, "integer" -> int, "number" -> float). Return True only if all checks pass. ~15 min
Tool Execution with Retry: Implement execute_tool_with_retry() with a retry loop that calls the tool implementation, sleeps for 2**attempt seconds on failure, validates output on success using validate_output(), and returns a ToolExecution log entry with success/failure status, latency_ms, and attempt count. ~20 min
Full Orchestration Loop: Implement run_orchestration() to filter tools by role, call the Responses API, execute each tool call using execute_tool_with_retry(), build tool result messages, make the follow-up call, and return (output_text, execution_log). Test all four scenarios including permission denial and retry recovery. ~25 min
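The validation exercise's contract can be sketched under the stated type mapping; the flat field-to-type schema shape below is an assumption, and the lab's actual signature may differ:

```python
# Assumed mapping from the exercise description.
TYPE_MAP = {"string": str, "integer": int, "number": float}

def validate_output(result: dict, output_schema: dict) -> bool:
    """Return True only if every schema field is present in the result
    and its value has the expected Python type."""
    for field, type_name in output_schema.items():
        if field not in result:
            return False  # required field missing
        if not isinstance(result[field], TYPE_MAP[type_name]):
            return False  # wrong Python type
    return True

schema = {"city": "string", "temp_c": "number"}
print(validate_output({"city": "Oslo", "temp_c": 4.5}, schema))   # matches schema
print(validate_output({"city": "Oslo", "temp_c": "4.5"}, schema)) # string where number expected
```

In the capstone, a validation failure like the second case would be logged in the ToolExecution entry rather than silently passed back to the model.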

This course includes:

  • 24/7 AI Instructor Support
  • Live Lab Environments
  • 5 Hands-on Labs
  • 6 Months Access
  • Completion Badge
  • Certificate of Completion
OpenAI Tool Calling Developer Badge

Earn Your Badge

Complete all lessons to unlock the OpenAI Tool Calling Developer achievement badge.

Category
Skill Level: Intermediate
Total Duration: 9h 55m
Achievement Badge

OpenAI Tool Calling Developer

Awarded for completing Tool Calling and Structured Outputs. Demonstrates ability to define JSON Schema tool definitions, orchestrate parallel tool calls, enforce structured outputs with Pydantic, implement RBAC tool permissions, and build production orchestration runtimes.

Course OpenAI: Tool Calling & Structured Outputs
Criteria Complete all lessons and exercises in OPENAI-201: Tool Calling and Structured Outputs
Valid For 730 days

Skills You'll Earn

Tool Calling · JSON Schema · Parallel Execution · Pydantic Structured Outputs · RBAC · Retry Logic

Complete all lessons in this course to earn this badge