
Working with LLMs in Python
Description
This hands-on lab develops the practical skills required to integrate and control large language models within applications. Participants learn how to structure effective prompts, enforce reliable outputs, implement validation and retry strategies, and build robust abstraction layers around LLM providers. The focus is on transforming experimental AI calls into secure, maintainable, and production-ready components.
Indicative Duration: 6 training hours
*Duration is adjusted based on the final scope and the target audience.
Scope
1. Setup & First LLM Call
2. Prompt Engineering Essentials
3. Structured Outputs & Validation
4. Building LLM Wrappers
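To give a flavor of the first module, the sketch below shows the shape of an initial chat-completion request over plain HTTP using only the standard library. The endpoint URL, model name, and payload fields are illustrative placeholders, not any specific provider's API; the lab covers real provider details.

```python
import json
import urllib.request

# Placeholder endpoint -- real provider URLs are introduced in the lab.
API_URL = "https://api.example.com/v1/chat/completions"

def build_request(prompt: str, api_key: str,
                  model: str = "example-model") -> urllib.request.Request:
    """Assemble a POST request carrying a minimal chat payload."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request would then be:
#   with urllib.request.urlopen(build_request("Hello", key)) as resp:
#       reply = json.load(resp)
```

In practice a provider SDK replaces the raw HTTP layer, but seeing the request assembled by hand makes the later wrapper-building module easier to follow.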
Learning Objectives
Upon completion of the course, participants will be able to:
- Configure an application environment and successfully integrate an external LLM API
- Apply structured prompt engineering techniques to improve output quality and reliability
- Enforce strict output schemas with validation, retry, and repair strategies
- Design resilient LLM wrapper components with proper logging, timeouts, and provider abstraction
- Implement safe prompt iteration practices while mitigating prompt injection risks
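The validation, retry, and repair objective above can be sketched in a few lines of standard-library Python. This is a minimal illustration, not the course's reference implementation: `call_llm` stands in for any provider call, and the required keys are a made-up schema.

```python
import json

# Hypothetical schema: the model must return JSON with these keys.
REQUIRED_KEYS = {"sentiment", "confidence"}

def validate(raw: str) -> dict:
    """Parse the model's reply and enforce the expected schema."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

def call_with_retries(call_llm, prompt: str, max_attempts: int = 3) -> dict:
    """Retry an LLM call, feeding the validation error back as a repair hint."""
    last_error = None
    for attempt in range(max_attempts):
        if attempt == 0:
            raw = call_llm(prompt)
        else:
            # Repair strategy: tell the model what was wrong last time.
            raw = call_llm(
                f"{prompt}\n\nYour previous reply was invalid "
                f"({last_error}). Return only valid JSON."
            )
        try:
            return validate(raw)
        except ValueError as exc:
            last_error = exc
    raise RuntimeError(
        f"no valid output after {max_attempts} attempts: {last_error}"
    )
```

Production code would typically use a schema library such as Pydantic instead of hand-rolled key checks, but the loop structure (validate, feed the error back, cap the attempts) is the same.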
Target Audience
- Roles: Software Engineers, Software Architects, Technical Leads
- Seniority: Junior (with backend experience), Mid-Level to Senior Professionals
Prerequisite Knowledge
- Basic Python (functions, modules, virtual environments)
- Basic HTTP concepts (request/response)
- Basic terminal commands
Delivery Method
Sessions can be delivered via the following formats:
- Live Online – Interactive virtual sessions via video conferencing
- On-Site – At your organization's premises
- In-Person – At Code.Hub's training center
- Hybrid – A combination of online and in-person sessions
The training methodology combines presentations, live demonstrations, hands-on exercises, and interactive discussions to ensure participants actively apply AI techniques in realistic work scenarios.

