AI Engineering with Python & LangChain

Description

This comprehensive program equips engineers with the skills to design, build, evaluate, and deploy production-grade AI applications. Covering LLM orchestration, RAG architectures, agentic systems, model evaluation, fine-tuning, and cloud deployment, the course bridges experimentation and enterprise implementation. Participants gain practical expertise in transforming large language models into reliable, scalable, and maintainable AI systems.

Indicative Duration: 88 training hours
*Duration is adjusted based on the final scope and the target audience.


Scope

1. Python Foundations for AI Engineering
1.1 Environment Setup
  • Setting up Virtual Environments (venvs)
  • Creating Python projects with uv
  • Managing dependencies
  • Exploring libraries
1.2 Structuring Code
  • Functions and Classes
  • Static typing with type hints
  • Pydantic models & @dataclass
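To illustrate the structuring topics above, here is a minimal sketch using the standard-library @dataclass with type hints (Pydantic builds on the same idea and adds runtime validation; the ChatMessage model is a hypothetical example):

```python
from dataclasses import dataclass, field

@dataclass
class ChatMessage:
    """A typed record for one message in a conversation."""
    role: str                  # e.g. "user" or "assistant"
    content: str
    tags: list[str] = field(default_factory=list)

msg = ChatMessage(role="user", content="Hello!")
print(msg.role)  # type hints document intent; tools like mypy can check them
```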
1.3 Logging and Testing
  • Implementing basic and configured logging
  • Unit and Integration testing
2. LLM Application Development Foundations
2.1 Introduction to LangChain
  • Exploring different models & clients (OpenAI, AzureOpenAI, Groq, Local)
  • Connecting to models (local & cloud)
  • .env file configuration
  • Introduction to LangChain Expression Language (LCEL)
  • Building a simple chain with Prompts and Models
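The "simple chain" idea behind LCEL can be sketched without LangChain itself: each step is a callable, and the | operator composes them left to right, mirroring the prompt | model | parser pattern. This is a conceptual stand-in, not LangChain's real classes, and the fake model simply echoes its input:

```python
class Runnable:
    """Toy illustration of LCEL-style pipe composition."""
    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        # Chain two steps: feed this step's output into the next one.
        return Runnable(lambda x: other.func(self.func(x)))

    def invoke(self, x):
        return self.func(x)

prompt = Runnable(lambda topic: f"Tell me a fact about {topic}.")
fake_model = Runnable(lambda text: {"content": f"ECHO: {text}"})  # stands in for an LLM call
parser = Runnable(lambda msg: msg["content"])

chain = prompt | fake_model | parser
print(chain.invoke("Python"))  # ECHO: Tell me a fact about Python.
```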
2.2 Working with Documents: Loaders & Splitters
  • Loading content from Text/PDF sources
  • Understanding LangChain Document structure
  • Text splitters and splitting strategies
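The core splitting strategy can be shown in a few lines: fixed-size chunks with overlap so that context is not lost at chunk boundaries. This is a deliberately simplified sketch; LangChain's splitters additionally respect separators such as paragraphs and sentences:

```python
def split_text(text: str, chunk_size: int = 20, overlap: int = 5) -> list[str]:
    """Split text into fixed-size chunks, each sharing `overlap` characters
    with the previous chunk (illustrative only)."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

chunks = split_text("abcdefghij" * 5, chunk_size=20, overlap=5)
print(len(chunks))  # 4 chunks for 50 characters with step 15
```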
2.3 Embeddings & Vector Stores
  • Deep dive into embeddings
  • Creating embeddings
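At the heart of every vector store is a similarity measure over embeddings, most commonly cosine similarity. A minimal sketch with toy 3-dimensional vectors (real embedding models produce hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0, 0.0], [1.0, 0.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 0.0 (orthogonal)
```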
2.4 Building a RAG System with Chroma
  • Applying embeddings for document retrieval
  • Using ChromaDB as local vector store
  • Creating and storing embeddings from documents
2.5 Advanced Retrieval Strategies
  • Beyond similarity search
  • MMR & Self-Query retrievers
  • Building advanced RAG chains
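One strategy "beyond similarity search" is Maximal Marginal Relevance (MMR), which trades off relevance to the query against redundancy among already-selected documents. A simplified sketch over precomputed similarity scores (the input values below are made up for illustration):

```python
def mmr(query_sim: list[float], doc_sims: list[list[float]],
        k: int = 2, lam: float = 0.5) -> list[int]:
    """Select k document indices by Maximal Marginal Relevance.
    query_sim[i]   = similarity of document i to the query
    doc_sims[i][j] = similarity between documents i and j"""
    selected: list[int] = []
    candidates = list(range(len(query_sim)))
    while candidates and len(selected) < k:
        def score(i: int) -> float:
            # Penalize documents similar to ones we already picked.
            redundancy = max((doc_sims[i][j] for j in selected), default=0.0)
            return lam * query_sim[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Docs 0 and 1 are near-duplicates; MMR picks 0, then prefers the distinct doc 2.
picked = mmr(query_sim=[0.9, 0.85, 0.6],
             doc_sims=[[1.0, 0.95, 0.1], [0.95, 1.0, 0.1], [0.1, 0.1, 1.0]])
print(picked)  # [0, 2]
```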
3. Agentic Systems with LangGraph
3.1 Introduction to LangGraph
  • State, Nodes, Edges
  • Building a linear graph
  • Conditional logic
  • Graph with AI integration
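The State/Nodes/Edges model above can be previewed in plain Python: each node reads and updates a shared state dict, and edges (including conditional ones) decide which node runs next. This is a conceptual stand-in, not the LangGraph API, and the routing rule is a toy example:

```python
def classify(state: dict) -> dict:
    # Toy router: questions starting with a digit go to the math node.
    state["route"] = "math" if state["question"][0].isdigit() else "chat"
    return state

def math_node(state: dict) -> dict:
    state["answer"] = "I can do arithmetic."
    return state

def chat_node(state: dict) -> dict:
    state["answer"] = "Let's talk."
    return state

def run_graph(state: dict) -> dict:
    state = classify(state)           # linear edge: START -> classify
    if state["route"] == "math":      # conditional edge
        return math_node(state)
    return chat_node(state)

result = run_graph({"question": "2 + 2 = ?"})
print(result["answer"])  # I can do arithmetic.
```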
3.2 Implementing Agentic Tools
  • Defining tools with @tool decorator
  • LLM tool calling mechanics
  • Testing tools in isolation
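What a @tool decorator essentially does is register a function together with the metadata (name, docstring) an LLM needs to decide when and how to call it. A simplified stdlib sketch of that mechanism, with a made-up get_weather tool:

```python
TOOL_REGISTRY: dict = {}

def tool(func):
    """Register a function as an agent tool (simplified sketch)."""
    TOOL_REGISTRY[func.__name__] = {"func": func, "description": func.__doc__}
    return func

@tool
def get_weather(city: str) -> str:
    """Return a (fake) weather report for a city."""
    return f"Sunny in {city}"

# Testing a tool in isolation: call it directly, no LLM involved.
print(get_weather("Athens"))           # Sunny in Athens
print("get_weather" in TOOL_REGISTRY)  # True
```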
3.3 Managing State & Memory
  • Conversational loops
  • Using graph state
  • Re-entrancy concepts
3.4 Persistence with Checkpointers
  • Persistent state importance
  • LangGraph Checkpointers
  • Using MemorySaver
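The checkpointer idea can be sketched in a few lines: after each step, serialize the graph state under a thread id so a conversation can resume later. LangGraph's MemorySaver keeps this in memory, while other checkpointers persist to a database; the class below is a hypothetical stand-in:

```python
import json

class DictCheckpointer:
    """Toy checkpointer: one serialized state per conversation thread."""
    def __init__(self):
        self._store: dict[str, str] = {}

    def save(self, thread_id: str, state: dict) -> None:
        self._store[thread_id] = json.dumps(state)

    def load(self, thread_id: str) -> dict:
        return json.loads(self._store.get(thread_id, "{}"))

saver = DictCheckpointer()
saver.save("thread-1", {"messages": ["hi"]})
restored = saver.load("thread-1")
print(restored)  # {'messages': ['hi']}
```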
3.5 Composable Agents
  • Agents as tools concept
  • Designing worker agents
  • Packaging agents as tools
3.6 Recursive & Hierarchical Agents
  • Manager agent delegation
  • Recursive workflows
  • Problem decomposition
3.7 Asynchronous Flow for Performance
  • Sync vs Async concepts
  • Async methods in LangChain
  • Async LangGraph execution
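The sync-vs-async payoff in one example: three simulated model calls of 0.1 s each finish in roughly 0.1 s total when run concurrently with asyncio.gather, instead of 0.3 s sequentially. The fake_llm_call coroutine is a placeholder for a real network-bound LLM call:

```python
import asyncio

async def fake_llm_call(prompt: str) -> str:
    await asyncio.sleep(0.1)  # stands in for network latency
    return f"answer to: {prompt}"

async def main() -> list[str]:
    # Launch all three calls concurrently and wait for all results.
    return await asyncio.gather(*(fake_llm_call(p) for p in ["a", "b", "c"]))

results = asyncio.run(main())
print(results)  # ['answer to: a', 'answer to: b', 'answer to: c']
```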
4. Observability & Knowledge Systems
4.1 Introduction to LangSmith
  • Overview of LangSmith platform
4.2 Introduction to Knowledge Graphs
  • KG concepts (Nodes, Relationships)
  • Neo4j & Cypher
  • Populating a simple KG
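The node-and-relationship model can be previewed as (subject, relationship, object) triples in plain Python. Neo4j stores the same shape as labeled nodes and relationships and queries it with Cypher, e.g. MATCH (p:Person)-[:WORKS_AT]->(c) RETURN c; the toy query function below is only a conceptual analogue:

```python
triples = [
    ("Alice", "WORKS_AT", "Acme"),
    ("Bob", "WORKS_AT", "Acme"),
    ("Alice", "KNOWS", "Bob"),
]

def query(subject=None, relation=None, obj=None):
    """Return triples matching the non-None fields (a toy MATCH)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]

print(query(relation="WORKS_AT"))  # both employment triples
```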
4.3 LangChain & Knowledge Graph Integration
  • Text-to-Cypher
  • KG-RAG vs Vector-RAG
  • KG-based QA system
5. Application Interfaces
5.1 Streamlit Core Concepts
  • Building UI for LangChain
  • Interactive chat layouts
5.2 Conversational Interfaces with Chainlit
  • Chat-first framework
  • Layout & element management
  • Chainlit vs Streamlit comparison
5.3 Advanced UI & Feedback Loop
  • User feedback mechanisms
  • Capturing feedback
  • Handling file uploads for RAG
5.4 Interactive Chat Layout
  • Persistent chat with Gradio's gr.ChatInterface
  • Connecting UI to LangGraph backend
  • Managing state in UI
6. Protocols, Advanced Tooling & Model Control
6.1 Introduction to MCP
  • MCP architecture
  • Consuming MCP in LangChain
  • Building first MCP server
6.2 Advanced Agent Tools
  • Structured tool wrappers
  • Async tools
  • Error handling
6.3 Tool-Based Modality Switching
  • Multi-modal agent design
  • Router node in LangGraph
  • Modality switching logic
6.4 Advanced Prompt Engineering
  • OpenAI parameters (temperature, top_p)
  • System roles
  • Chain-of-Thought patterns
6.5 Managing Complex Model I/O
  • Consistent prompt formats
  • Output-to-input chaining
  • Output parsing & validation
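The parsing-and-validation step can be sketched with the standard library: LLMs return text, downstream code needs structured data, so parse the text as JSON and validate required fields before using it. The field names below are made up for illustration (Pydantic or LangChain output parsers do this more robustly):

```python
import json

def parse_llm_output(raw: str) -> dict:
    """Parse model output as JSON and validate the expected schema."""
    data = json.loads(raw)  # raises an error on malformed JSON
    for key in ("title", "score"):
        if key not in data:
            raise ValueError(f"missing field: {key}")
    if not isinstance(data["score"], (int, float)):
        raise ValueError("score must be a number")
    return data

good = parse_llm_output('{"title": "RAG", "score": 0.9}')
print(good["score"])  # 0.9
```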
6.6 Handling Multimodal Inputs
  • Vision, audio models
  • Processing image, PDF, audio inputs
7. Evaluation & Model Optimization
7.1 Evaluation Theory & Dataset Creation
  • LLM evaluation metrics
  • Designing evaluation datasets
7.2 Practical Evaluation with LangChain
  • Evaluation chains
  • Automated evaluation pipeline
  • Result analysis
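The shape of an automated evaluation pipeline: run each test case through the system, score the output against the expected answer, and aggregate. The dataset and system below are toy stand-ins; real pipelines often use an LLM as the judge instead of exact match:

```python
dataset = [
    {"question": "capital of France?", "expected": "Paris"},
    {"question": "2 + 2?", "expected": "4"},
]

def system_under_test(question: str) -> str:
    # Hypothetical system that gets the second answer wrong on purpose.
    return {"capital of France?": "Paris", "2 + 2?": "5"}[question]

def evaluate(dataset) -> float:
    """Return exact-match accuracy over the evaluation dataset."""
    correct = sum(system_under_test(row["question"]) == row["expected"]
                  for row in dataset)
    return correct / len(dataset)

print(evaluate(dataset))  # 0.5
```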
7.3 Additional Evaluation Strategies
  • Hallucination testing
  • Anti-hallucination strategies
8. Model Ecosystem & Fine-Tuning
8.1 The World of Hugging Face
  • HF Hub & libraries
  • Speech-to-Text use cases
8.2 Fine-Tuning
  • Why & when to fine-tune
  • Preparing datasets
8.3 Model Training Loop
  • Trainer API setup
  • Training configuration
  • Saving model artifacts
8.4 Evaluating Fine-Tuned Models
  • Qualitative vs Quantitative evaluation
  • BLEU/ROUGE metrics
  • Human-in-the-loop evaluation
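The intuition behind overlap metrics like ROUGE can be shown with a simplified unigram-recall score: what fraction of the reference's words appear in the model's output. Real BLEU/ROUGE implementations add n-grams, clipping, and brevity handling; this sketch is illustrative only:

```python
def unigram_recall(reference: str, candidate: str) -> float:
    """Fraction of reference words that also appear in the candidate."""
    ref_words = reference.lower().split()
    cand_words = set(candidate.lower().split())
    overlap = sum(1 for w in ref_words if w in cand_words)
    return overlap / len(ref_words)

score = unigram_recall("the cat sat on the mat", "the cat lay on a mat")
print(score)  # 5 of 6 reference words matched
```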
8.5 Integrating Small & Fine-Tuned Models
  • HF Inference API
  • LangChain wrapper
  • SLM in LangGraph workflow
9. Cloud Deployment & MLOps in Azure
9.1 Azure Functions
  • Serverless concepts
  • HTTP Trigger endpoint
  • Additional triggers & data binding
  • Deploying LangChain as web API
9.2 Azure AI Search
  • From keywords to vectors
  • Search architecture
  • Index, indexer, pipeline
9.3 Professional Project Setup & MLOps
  • Project goal tracking
  • Git workflow
  • CI/CD with GitHub Actions
9.4 Introduction to MCP & Azure ML
  • AML Workspace setup
  • Managing data & model assets
9.5 Production Pipelines & Compliance
  • Azure-ready ML pipelines with MCP
  • Responsible AI compliance
  • Monitoring deployed pipelines
9.6 Azure AI Studio & Prompt Flow
  • Platform overview
  • Building visual Prompt Flows
  • Integrating LLMs & tools
9.7 Advanced Prompt Flow
  • Python tools in flows
  • A/B testing variants
  • Dynamic inputs
9.8 Deployment & Evaluation with Prompt Flow
  • Built-in evaluation tools
  • Model hosting options
  • Managed endpoint deployment

Learning Objectives

Upon completion of the course, participants will be able to:

  1. Design and implement end-to-end LLM-powered applications using LangChain and LangGraph
  2. Build advanced Retrieval-Augmented Generation (RAG) systems with vector databases and knowledge graphs
  3. Engineer stateful, tool-enabled, multi-agent architectures with orchestration logic
  4. Apply advanced prompt engineering, model control, and multimodal processing techniques
  5. Evaluate, benchmark, and improve model reliability using structured evaluation pipelines
  6. Fine-tune and integrate open-source models into production workflows
  7. Deploy AI applications using cloud-native architectures and MLOps best practices

Target Audience

  • Roles: AI Engineers, Software Engineers, Solution Architects, Technical Leads
  • Seniority: Junior to Senior Professionals

Prerequisite Knowledge

  • Solid Python foundations
  • Beneficial but not mandatory:
    • Understanding of APIs and RESTful services
    • Familiarity with JSON and structured data handling
    • Basic knowledge of machine learning and LLM concepts
    • Experience with Git and software development workflows
    • Cloud fundamentals

Delivery Method

Sessions can be delivered via the following formats:

  • Live Online – Interactive virtual sessions via video conferencing
  • On-Site – At your organization's premises
  • In-Person – At Code.Hub's training center
  • Hybrid – A combination of online and in-person sessions

 

The training methodology combines presentations, live demonstrations, hands-on exercises, and interactive discussions to ensure participants actively practice AI engineering in realistic work scenarios.

Date

On Demand

Organizer

Code.Hub
Email
[email protected]