Working with LLMs in Python

Description

This hands-on lab develops the practical skills required to integrate and control large language models within applications. Participants learn how to structure effective prompts, enforce reliable outputs, implement validation and retry strategies, and build robust abstraction layers around LLM providers. The focus is on transforming experimental AI calls into secure, maintainable, and production-ready components.

Indicative Duration: 6 training hours
*Duration is adjusted based on the final scope and the target audience.


Scope

1. Setup & First LLM Call
  • Project setup
  • First API call
  • Prompt-response loop (see the API-call sketch after this list)
2. Prompt Engineering Essentials
  • Zero-shot vs few-shot
  • PCTF structure (Persona/Context/Task/Format), illustrated in the prompt-template sketch below
  • Quick prompt iterations
  • Prompt injection awareness
  • Rubric for prompt quality
3. Structured Outputs & Validation
  • Returning valid JSON
  • Output validation
  • Retry strategies
  • Strict schemas to reduce drift
  • Repair-and-retry loops (see the validation sketch after this list)
4. Building LLM Wrappers
  • Request/Response schemas
  • Timeouts and retries
  • Logging and error handling
  • Provider abstraction layer (see the wrapper sketch after this list)
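
API-call sketch: a minimal first call and prompt-response loop for module 1. It assumes the openai Python package (v1.x), an OPENAI_API_KEY environment variable, and an illustrative model name; any other provider SDK or HTTP client can be substituted.

```python
# Minimal first LLM call and interactive prompt-response loop.
# Assumes the openai package (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; use whichever is available
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Simple prompt-response loop for interactive experimentation.
    while True:
        user_input = input("You: ")
        if user_input.lower() in {"quit", "exit"}:
            break
        print("LLM:", ask(user_input))
```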
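
Prompt-template sketch: one possible way to assemble a PCTF (Persona/Context/Task/Format) prompt for module 2. The helper name and example values are illustrative; the resulting string would be passed to a call helper such as ask() from the previous sketch.

```python
# Assemble a prompt with explicit Persona, Context, Task and Format sections.
def build_pctf_prompt(persona: str, context: str, task: str, output_format: str) -> str:
    return (
        f"Persona: {persona}\n\n"
        f"Context: {context}\n\n"
        f"Task: {task}\n\n"
        f"Format: {output_format}"
    )


prompt = build_pctf_prompt(
    persona="You are a senior Python code reviewer.",
    context="The team is migrating a Flask service to async workers.",
    task="List the three highest-risk changes to review first.",
    output_format="A numbered list, one sentence per item.",
)
print(prompt)  # pass this to ask() from the previous sketch to get a reply
```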
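
Validation sketch: the structured-output pattern from module 3, requesting JSON against a strict schema and feeding validation errors back to the model for a repair retry. It assumes pydantic v2; the LLM call is passed in as a plain callable (for example the ask() helper above), and the TicketSummary schema is a made-up example.

```python
# Schema-enforced output with a repair-and-retry loop (assumes pydantic v2).
import json
from typing import Callable

from pydantic import BaseModel, ValidationError


class TicketSummary(BaseModel):
    title: str
    priority: str
    tags: list[str]


def ask_structured(prompt: str, call_llm: Callable[[str], str], max_attempts: int = 3) -> TicketSummary:
    """Request JSON matching TicketSummary; on failure, feed the error back and retry."""
    schema_hint = json.dumps(TicketSummary.model_json_schema())
    current_prompt = f"{prompt}\n\nReturn ONLY valid JSON matching this schema:\n{schema_hint}"
    last_error = None
    for _ in range(max_attempts):
        raw = call_llm(current_prompt)
        try:
            return TicketSummary.model_validate_json(raw)
        except ValidationError as exc:
            last_error = exc
            # Repair retry: show the model its previous output and the validation error.
            current_prompt = (
                f"{prompt}\n\nYour previous answer was:\n{raw}\n\n"
                f"It failed validation with:\n{exc}\n\n"
                f"Return ONLY corrected JSON matching this schema:\n{schema_hint}"
            )
    raise RuntimeError(f"No valid response after {max_attempts} attempts: {last_error}")


# Example usage: ask_structured("Summarise this support ticket: ...", call_llm=ask)
```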
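
Wrapper sketch: the module 4 abstraction layer with typed request/response schemas, retries with backoff, and logging. Class and field names are assumptions rather than a prescribed design; timeout enforcement is delegated to each concrete provider adapter.

```python
# Provider-agnostic wrapper with typed schemas, retries, backoff and logging.
import logging
import time
from abc import ABC, abstractmethod
from dataclasses import dataclass

logger = logging.getLogger("llm_wrapper")


@dataclass
class LLMRequest:
    prompt: str
    model: str
    temperature: float = 0.0
    timeout_s: float = 30.0


@dataclass
class LLMResponse:
    text: str
    provider: str


class LLMProvider(ABC):
    """Abstraction layer: each concrete subclass adapts one vendor SDK."""

    @abstractmethod
    def complete(self, request: LLMRequest) -> LLMResponse: ...


class ResilientLLMClient:
    """Wraps any LLMProvider with retries, exponential backoff, and logging."""

    def __init__(self, provider: LLMProvider, max_retries: int = 3):
        self.provider = provider
        self.max_retries = max_retries

    def complete(self, request: LLMRequest) -> LLMResponse:
        for attempt in range(1, self.max_retries + 1):
            try:
                logger.info("LLM call attempt %d (model=%s)", attempt, request.model)
                return self.provider.complete(request)  # provider enforces request.timeout_s
            except Exception:
                logger.exception("LLM call failed on attempt %d", attempt)
                if attempt == self.max_retries:
                    raise
                time.sleep(2 ** attempt)  # exponential backoff before the next retry
```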

Learning Objectives

Upon completion of the course, participants will be able to:

  1. Configure an application environment and successfully integrate an external LLM API
  2. Apply structured prompt engineering techniques to improve output quality and reliability
  3. Enforce strict output schemas with validation, retry, and repair strategies
  4. Design resilient LLM wrapper components with proper logging, timeouts, and provider abstraction
  5. Implement safe prompt iteration practices while mitigating prompt injection risks

Target Audience

  • Roles: Software Engineers, Software Architects, Technical Leads
  • Seniority: Junior (with backend experience), Mid-Level to Senior Professionals

Prerequisite Knowledge

  • Basic Python (functions, modules, virtual environments)
  • Basic HTTP concepts (request/response)
  • Basic terminal commands

Delivery Method

Sessions can be delivered via the following formats:

  • Live Online – Interactive virtual sessions via video conferencing
  • On-Site – At your organization's premises
  • In-Person – At Code.Hub's training center
  • Hybrid – A combination of online and in-person sessions

 

The training methodology combines presentations, live demonstrations, hands-on exercises, and interactive discussions to ensure participants actively practice working with LLMs in realistic work scenarios.

Date

On Demand

Organizer

Code.Hub
Email
[email protected]