Mastering Generative AI: Fine-Tuning Transformers
edX
Course
Intermediate
Free to Audit
Certificate


IBM

Gain job-ready AI engineering skills for working with transformers, LLM frameworks, and GPT, and learn how to fine-tune models using Hugging Face and PyTorch.

3 hrs/week · 2 weeks · English · 791 enrolled

About this Course

The demand for technical generative AI skills is exploding, and AI engineers who know how to fine-tune transformers for gen AI applications are in hot demand. This Generative AI Engineering Fine-Tuning with Transformers course is designed for AI engineers and other AI specialists looking to add highly sought-after skills to their resume.

In this course, you'll explore the differences between PyTorch and Hugging Face, use pre-trained transformers for language tasks, and fine-tune them for specialized tasks. You'll fine-tune generative AI models using PyTorch and Hugging Face, and explore concepts such as parameter-efficient fine-tuning (PEFT), low-rank adaptation (LoRA), quantized low-rank adaptation (QLoRA), model quantization, natural language processing (NLP), and prompting. Through valuable hands-on labs, you'll build experience loading models and running inference, training models with Hugging Face, pre-training LLMs, fine-tuning models, and building PyTorch adapters.

If you're looking to gain the job-ready skills employers need for fine-tuning transformers for gen AI, ENROLL TODAY and power up your resume for career success!

Prerequisites: This course requires basic knowledge of Python, PyTorch, and transformer architecture. You should also be familiar with machine learning and neural network concepts.
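To give a flavor of the LoRA technique the course covers, here is a minimal, hypothetical PyTorch sketch (the course itself works with Hugging Face tooling; names like `LoRALinear` are illustrative, not from the course). The core idea: freeze the pre-trained weight W and learn only a small low-rank update, so the effective weight is W + (alpha/r) · B·A.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank (LoRA) adapter.

    Illustrative sketch only; in practice libraries like Hugging Face's
    `peft` wrap model layers this way automatically.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # A is small random, B starts at zero, so training starts from the
        # unmodified pre-trained behavior
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # base output plus the scaled low-rank update
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(64, 32), r=4)
x = torch.randn(2, 64)
out = layer(x)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(out.shape, trainable, total)
```

Only the adapter's 384 parameters are trainable here, versus 2,080 in the frozen base layer, which is what makes PEFT methods cheap to fine-tune.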

What You'll Learn

  • Job-ready skills employers need for working with transformer-based LLMs in generative AI engineering, in just 2 weeks
  • A good understanding of parameter-efficient fine-tuning (PEFT) using LoRA and QLoRA
  • How to use pre-trained transformers for language tasks and fine-tune them for specific tasks
  • How to load models, run inference, and train models with Hugging Face
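The training workflow the bullets above describe is usually driven by Hugging Face's Trainer API; a plain PyTorch loop (with a hypothetical toy model and fake data, not course material) shows what a fine-tuning step does underneath: freeze the backbone, update only the head.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy stand-in for a pre-trained model: frozen backbone + trainable head
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
for p in model[0].parameters():
    p.requires_grad = False  # freeze the "pre-trained" backbone

# Optimize only the parameters that are still trainable
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))  # fake labeled batch
for _ in range(5):  # a few fine-tuning steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(round(loss.item(), 4))
```

Because the backbone is frozen, no gradients are ever computed for it; only the classification head moves, which is the essence of lightweight fine-tuning.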

Prerequisites

  • Basic knowledge of Python, PyTorch, and transformer architecture. You should also be familiar with machine learning and neural network concepts.

Instructors


Joseph Santarcangelo

PhD, Data Scientist

Course Info

Platform: edX
Level: Intermediate
Pacing: Unknown
Certificate: Available
Price: Free to Audit
