Coursera Guided Project

Evaluating and Debugging Generative AI

DeepLearning.AI

Explore MLOps tools for managing experiments, data versioning, and collaboration in generative AI projects using the Weights & Biases platform.

1 week · English

About this Course

Machine learning and AI projects require managing diverse data sources, large data volumes, model and parameter development, and numerous test and evaluation experiments. Overseeing and tracking all of these aspects of a program can quickly become overwhelming. This course introduces Machine Learning Operations (MLOps) tools that manage this workload. You will learn to use the Weights & Biases platform, which makes it easy to track your experiments, version your data, and collaborate with your team.
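As a toy illustration of the bookkeeping such a platform automates, here is a stdlib-only sketch (a stand-in for this example, not the Weights & Biases API; the `ExperimentRun` class and its file layout are invented): it records a run's config and per-step metrics to a JSON file, and "versions" a dataset by its content hash so a rerun can detect changed inputs.

```python
import hashlib
import json
import time
from pathlib import Path

class ExperimentRun:
    """Toy experiment tracker: records config, metrics, and a dataset
    fingerprint to a JSON file. A minimal stand-in for what a real
    MLOps platform such as Weights & Biases handles automatically."""

    def __init__(self, name: str, config: dict, log_dir: str = "runs"):
        self.record = {"name": name, "config": config,
                       "started": time.time(), "metrics": []}
        self.path = Path(log_dir) / f"{name}.json"
        self.path.parent.mkdir(parents=True, exist_ok=True)

    def log(self, step: int, **metrics: float) -> None:
        # Append one row of metrics, tagged with the step number.
        self.record["metrics"].append({"step": step, **metrics})

    def version_data(self, data: bytes) -> str:
        # "Version" the dataset by its content hash, so later runs can
        # tell whether the training data changed between experiments.
        digest = hashlib.sha256(data).hexdigest()[:12]
        self.record["data_version"] = digest
        return digest

    def finish(self) -> None:
        # Persist the whole run so experiments can be compared later.
        self.path.write_text(json.dumps(self.record, indent=2))

# Example: track a fake two-step training run.
run = ExperimentRun("demo", {"lr": 0.01, "epochs": 2})
run.version_data(b"training data v1")
run.log(step=1, loss=0.9)
run.log(step=2, loss=0.5)
run.finish()
```

A real tracker adds much more (web dashboards, artifact lineage, team sharing), but the core loop is the same: initialize a run with its config, log metrics as training proceeds, and persist everything for later comparison.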

What You'll Learn

  • Evaluate programs using LLMs and generative image models with platform-independent tools
  • Instrument training notebooks and add tracking, versioning, and logging
  • Implement monitoring and tracing of LLMs across complex, multi-step interactions over time

Prerequisites

  • Basic familiarity with the software or workflow used
  • Ability to follow step-by-step instructions in English

Instructors

Carey Phelps

Topics

Data Analysis
Data Science
Version Control
MLOps (Machine Learning Operations)
Large Language Modeling
Metadata Management
Jupyter
Data Store
AI Workflows

Course Info

Platform: Coursera
Level: Unknown
Pacing: Unknown
Price: Free

Skills

Data Analysis
Data Science
Version Control
Machine Learning Operations (MLOps)
Large Language Modeling
Metadata Management
Jupyter
Data Storage
AI Workflows
