Machine Learning into Practice: Deep Dive into MLOps
22, 23 & 24 September 2025
This foundational course offers a comprehensive journey through the stages of deploying machine learning models to applications and maintaining them, following the MLOps paradigm.
MLOps is a paradigm that aims to deploy and maintain machine learning models in production reliably and efficiently. The word is a compound of "machine learning" and the continuous delivery practice (CI/CD) of DevOps in the software field.
Through hands-on workshops, participants will gain insights into the core steps of MLOps: data preparation and versioning, model deployment, monitoring, scaling, and continuous training. They will also learn what to expect in real-world scenarios when deploying a machine learning model.
Additionally, the course covers the specific challenges of deploying LLMs and RAG solutions. We conclude with techniques for downscaling models to edge devices for real-time processing.
Throughout the course, participants will acquire practical skills and knowledge essential for navigating ML deployment smoothly, empowering them to face various real-world challenges.
This course is a collaboration between UGain and VAIA.
This course is aimed at persons with a strong interest in data science who have knowledge of machine learning and Python, and at data scientists working at companies who want to acquire the skills needed to deploy their models as robust and scalable applications.
You will be working on your own laptop. It must be sufficiently powerful (minimum 8 GB RAM) and you must have administrative rights to install the necessary programs.
Participants can obtain a certificate of attendance.
Lecturers
- Sander Borny, Ghent University
- Rushil Daya, Dataminded
- Cedric De Boom, Lighthouse
- Robbe De Sutter, Superlinear
- Tom Goethals, Ghent University
- Jens Krijgsman, Howest
- Sam Leroux, Ghent University
- Nathan Segers, Howest
- Thomas Van den Bossche, Odisee
- Bruno Volckaert, Ghent University
Programme
Workshop: Docker & Kubernetes (4 x 1,5h)
This workshop will give you practical hands-on experience with Docker and Kubernetes for MLOps, along with the theoretical foundations to rely on them with confidence.
- Docker basics.
- Using Docker for ML training.
- How to migrate from Jupyter notebooks to production-ready Docker containers.
- GPU acceleration in Docker containers.
- How to deploy production ML models on Kubernetes (see the serving sketch after this list).
- Upgrading production ML models in Kubernetes.
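To make this concrete, here is a minimal sketch of an inference service that could be packaged into a Docker image and deployed on Kubernetes. The use of FastAPI and joblib, and the model file name, are illustrative assumptions rather than prescribed course material.

```python
# Minimal sketch of an inference service that could be containerised with
# Docker and served on Kubernetes. FastAPI, joblib and "model.joblib" are
# illustrative assumptions, not part of the course material.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained model artefact


class PredictionRequest(BaseModel):
    features: list[float]  # one flat feature vector per request


@app.post("/predict")
def predict(request: PredictionRequest) -> dict:
    # scikit-learn style models expect a 2D array: one row per sample
    prediction = model.predict([request.features])
    return {"prediction": prediction.tolist()}
```

Such a script is typically started with uvicorn inside the container and exposed through a Kubernetes Service.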
Data and AI Pipelines and Version Control (1h)
AI projects require a lot of data, and this data needs to be processed specifically for the requirements of the AI model architecture. During development this often changes considerably, as requirements grow and problems are fixed. Developers therefore want reusable pipelines that are version-tracked with the rest of their code. We will look at some of the interesting orchestration tools for these preprocessing pipelines, and at how to keep track of the artefacts generated between steps.
What are the best practices? What hurdles need to be overcome? How do you set up a pipeline that is easy for developers to work with, yet delivers good quality for all stakeholders?
Using Kubeflow as a pipeline orchestration tool on Kubernetes, we will show how to set up version control for data and AI pipelines properly.
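As a minimal sketch of what such a pipeline can look like with the Kubeflow Pipelines SDK (kfp v2 assumed), the example below chains a preprocessing and a training component and compiles the pipeline to a YAML file that can be committed to version control alongside the code. Component bodies, names, and paths are placeholders.

```python
# Minimal sketch of a versionable preprocessing/training pipeline using the
# Kubeflow Pipelines SDK (kfp v2 assumed). Component bodies are placeholders.
from kfp import compiler, dsl


@dsl.component
def preprocess(raw_data_uri: str) -> str:
    # A real component would clean/transform the data and
    # return the URI of the generated artefact.
    return raw_data_uri + "/preprocessed"


@dsl.component
def train(dataset_uri: str) -> str:
    # Placeholder for the actual training step.
    return dataset_uri + "/model"


@dsl.pipeline(name="demo-preprocess-train")
def demo_pipeline(raw_data_uri: str):
    preprocessed = preprocess(raw_data_uri=raw_data_uri)
    train(dataset_uri=preprocessed.output)


if __name__ == "__main__":
    # The compiled YAML can be committed to version control with the code.
    compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")
```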
Teachers: Nathan Segers & Jens Krijgsman
Applying DevOps to Machine Learning (4h)
In this hands-on workshop we offer an introduction to MLOps, focusing on how DevOps principles can be applied to machine learning. Together we will work on a mini-project in which we bring a machine learning model into production via a public cloud provider (AWS).
We will also cover important topics such as continuous integration & continuous deployment (CI/CD), backend-frontend communication, containerization, etc. We will briefly discuss how to deal with security, scaling, A/B testing, and concept drift. Throughout the session, recommended tools and best practices for MLOps will be discussed.
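As an illustration of the kind of step a CI/CD pipeline automates, the sketch below publishes a trained model artefact to AWS S3 with boto3. The bucket and file names are hypothetical.

```python
# Minimal sketch of publishing a trained model artefact to AWS S3 with boto3,
# the kind of step a CI/CD pipeline would automate. Names are hypothetical.
import boto3


def publish_model(local_path: str, bucket: str, key: str) -> None:
    # Credentials are resolved by boto3 from the environment or an IAM role.
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, key)


if __name__ == "__main__":
    publish_model("model.joblib", "my-ml-artifacts", "models/v1/model.joblib")
```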
Teacher: Cedric De Boom
Workshop: RAG Models (3h)
This workshop focuses on both the theory and practice of Retrieval-Augmented Generation (RAG) and deploying AI applications. We begin with a theoretical introduction to how RAG works, followed by a hands-on workshop to apply this knowledge. Next, we explore how such systems are deployed, with attention to observability, monitoring, and quality assurance. The training provides insight into how to evaluate models and systems. Finally, we address key challenges such as monitoring and the security of LLMs, including aspects of red teaming. This balanced mix of theory and practice makes the training suitable for a broad audience interested in AI applications.
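As a taste of what the hands-on part covers, the sketch below shows the retrieval step of a RAG system in plain Python: documents and a query are embedded, ranked by cosine similarity, and the best matches are placed in the prompt. The embedding function is a toy stand-in; a real system would use an embedding model and a vector database.

```python
# Minimal sketch of the retrieval step in a RAG system. The embed() function
# is a toy stand-in for a real embedding model; the documents are examples.
import numpy as np

documents = [
    "MLOps combines machine learning with DevOps practices.",
    "Kubernetes orchestrates containerised workloads.",
    "RAG grounds LLM answers in retrieved documents.",
]


def embed(text: str) -> np.ndarray:
    # Toy stand-in: hash characters into a fixed-size, normalised vector.
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)


def retrieve(query: str, k: int = 2) -> list[str]:
    query_vec = embed(query)
    scores = [float(embed(doc) @ query_vec) for doc in documents]
    ranked = sorted(zip(scores, documents), reverse=True)
    return [doc for _, doc in ranked[:k]]


query = "What is RAG?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this augmented prompt would then be sent to an LLM
```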
Teacher: Rushil Daya
ML Model Optimizations for Efficient Edge AI Deployment (1h)
State-of-the-art machine learning models for image recognition and natural language processing require a huge amount of computational resources that are not available on resource-constrained edge devices. In this lesson we will discuss the options available to optimize these models, reducing their computational cost and memory footprint and making it possible to use them on mobile and embedded devices.
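One example of such an optimization is post-training quantization. The sketch below applies dynamic quantization in PyTorch, converting linear layers to int8 to shrink the model and speed up CPU inference; the model itself is a toy placeholder.

```python
# Minimal sketch of post-training dynamic quantization in PyTorch: linear
# layers are converted to int8, reducing model size and speeding up CPU
# inference. The model is a toy placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Replace nn.Linear weights with int8 versions; activations stay in float.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized_model(x).shape)  # same interface, smaller memory footprint
```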
Teacher: Sam Leroux
Fine-Tuning Small LLMs for PII Detection and Secure Edge Deployment (3h)
In this session, you’ll learn how to fine-tune small language models to detect Personally Identifiable Information (PII), with a strong emphasis on data privacy and secure edge deployment. We'll guide you through generating synthetic training data to avoid exposing real sensitive information and demonstrate how to perform local, on-device fine-tuning.
You’ll also explore best practices for deploying models on edge devices while maintaining stringent security standards, including techniques for implementing robust encryption.
By the end, you’ll be equipped to develop and deploy a privacy-preserving AI application that runs entirely offline on an edge device, ensuring sensitive information remains protected throughout its lifecycle.
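As an illustration of the synthetic-data step, the sketch below uses the Faker library to generate labelled training examples without touching real personal data. The templates and entity types are illustrative assumptions.

```python
# Minimal sketch of generating synthetic, labelled training data for PII
# detection with Faker, so no real personal data is used. Templates and
# entity types are illustrative.
import random

from faker import Faker

fake = Faker()

TEMPLATES = [
    "Please contact {name} at {email}.",
    "Invoice sent to {name}, phone {phone}.",
    "{name} lives at {address}.",
]


def make_example() -> dict:
    entities = {
        "name": fake.name(),
        "email": fake.email(),
        "phone": fake.phone_number(),
        "address": fake.address().replace("\n", ", "),
    }
    template = random.choice(TEMPLATES)
    text = template.format(**entities)  # unused placeholders are ignored
    pii_types = [key for key in entities if "{" + key + "}" in template]
    return {"text": text, "pii_types": pii_types}


dataset = [make_example() for _ in range(1000)]
print(dataset[0])  # e.g. {'text': '...', 'pii_types': ['name', 'email']}
```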
Teacher: Robbe De Sutter
Introduction to Federated Learning with Python & Flower (2h)
This workshop offers a practical introduction to Federated Learning, a technique that allows you to train machine learning models without storing the data centrally. We will cover the basic principles, look at available frameworks, and discuss the challenges of this technology. In the second, hands-on part you will get started with the Flower framework and Python to apply this knowledge in practice.
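As a preview of the hands-on part, the sketch below shows what a Flower client can look like (the flwr package with the Flower 1.x API is assumed). The "model" is a plain NumPy vector so the federated mechanics stay visible; a real client would wrap an actual training loop.

```python
# Minimal sketch of a Flower client (flwr, Flower 1.x API assumed). The
# "model" is just a NumPy weight vector so the federated mechanics stay
# visible; a real client would wrap an actual training loop.
import flwr as fl
import numpy as np

weights = np.zeros(10)  # toy local model parameters


class ToyClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        return [weights]

    def fit(self, parameters, config):
        global weights
        weights = parameters[0] + 0.1  # pretend local training nudges weights
        return [weights], len(weights), {}

    def evaluate(self, parameters, config):
        loss = float(np.mean(np.abs(parameters[0])))
        return loss, len(weights), {"loss": loss}


if __name__ == "__main__":
    # Connects to a Flower server started separately, e.g. with
    # fl.server.start_server(server_address="0.0.0.0:8080").
    fl.client.start_numpy_client(
        server_address="127.0.0.1:8080", client=ToyClient()
    )
```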
Teacher: Thomas Van den Bossche
Practical info
Fee
- 22 September 2025 . . . . . . . . . . . . . . . € 465,-
- 23 September 2025 . . . . . . . . . . . . . . . € 465,-
- 24 September 2025 . . . . . . . . . . . . . . . € 400,-
- Complete course . . . . . . . . . . . . . . . . . € 1.200,-
Payment is due after receipt of the invoice.
All invoices are due in thirty days. All fees are exempt from VAT.
Reduction
When a participant of a company registers for the complete course, a reduction of 20% is given on all additional registrations from the same company. In that case, only one invoice is issued per company. Special prices are available for PhD students; for further information, please send us an email.
Cancellation policy
Cancellation must be done in writing. Our cancellation conditions can be consulted at www.ugain.ugent.be/cancellation
Training vouchers
Ghent University accepts payments by KMO-portefeuille (www.kmo-portefeuille.be; authorisation ID: DV.O103194).
Flemish training leave (opleidingsverlof, VOV)
This course has too few contact hours to qualify for VOV.
