Parameter-efficient Fine-tuning of Large Language Models

4,973.00

  • This is a live workshop, to be held online
  • You will get the workshop joining link after your purchase
  • ADaSci members can register for free

Description

Date: 10th Feb 2024

Time: 10 AM to 1 PM

Unleash the power of LLMs without breaking the bank! In this workshop, dive into Parameter-efficient Fine-tuning (PEFT), the key to adapting giants like GPT for your tasks, without their monstrous resource demands. Learn cutting-edge techniques like LoRA, adapters, and prompt tuning to achieve impressive results using just a fraction of the parameters. Master efficient training, resource optimization, and model selection for real-world applications. Leave equipped to unlock the true potential of LLMs, even on limited budgets and hardware. So, join us and tame the computational beast for precise, efficient, and accessible LLM fine-tuning!
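To give a flavour of what "a fraction of the parameters" means, here is a minimal, illustrative sketch of the core LoRA idea in plain NumPy (not the workshop's code, and the layer sizes and rank are hypothetical): instead of updating a full weight matrix W, LoRA trains only a low-rank update B @ A, leaving W frozen.

```python
import numpy as np

# Hypothetical layer dimensions and LoRA rank for illustration.
d, k, r = 768, 768, 8            # rank r is much smaller than d and k
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor (r x k)
B = np.zeros((d, r))                     # trainable factor, zero-initialized
                                         # so training starts from the base model

def lora_forward(x):
    # Effective weight is W + B @ A; only A and B receive gradient updates.
    return x @ (W + B @ A).T

full_params = W.size             # what a full fine-tune would update: 589,824
lora_params = A.size + B.size    # what LoRA trains: 12,288 (about 2% here)
print(full_params, lora_params)
```

In practice the workshop uses real LLMs rather than toy matrices, but the arithmetic is the same: the trainable parameter count scales with the rank r, not with the full weight dimensions.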

Major Outline

  1. The LLM Conundrum: Power vs. Resources
  2. Demystifying PEFT Techniques
  3. Efficient Training Strategies for PEFT
  4. Resource Optimization in PEFT
  5. Selecting the Right LLM & PEFT Technique for Your Task
  6. Advanced PEFT Techniques and Ongoing Research
  7. Hands-on Project: Build Your PEFT Model

Learning Outcomes

  1. Gain a comprehensive understanding of PEFT techniques and their benefits for LLM adaptation.
  2. Master resource-efficient training strategies and deployment options for PEFT models.
  3. Develop skills in selecting the right LLM and PEFT approach for specific tasks.
  4. Get hands-on experience building and evaluating your own PEFT model on provided datasets.

Requirements

  1. Google Colab / Jupyter notebook
  2. A high-speed internet connection

Instructor

Dr. Vaibhav Kumar, Sr. Director at ADaSci