Description
Date: 10th Feb 2024
Time: 10 AM to 1 PM
Unleash the power of LLMs without breaking the bank! In this workshop, dive into Parameter-Efficient Fine-Tuning (PEFT), the key to adapting giants like GPT to your tasks without their monstrous resource demands. Learn cutting-edge techniques like LoRA, adapters, and prompt tuning to achieve impressive results while training just a fraction of the parameters. Master efficient training, resource optimization, and model selection for real-world applications. Leave equipped to unlock the true potential of LLMs, even on limited budgets and hardware. So, join us and tame the computational beast for precise, efficient, and accessible LLM fine-tuning!
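To give a flavor of what "a fraction of the parameters" means, here is a minimal NumPy sketch of the LoRA idea: the pretrained weight matrix stays frozen, and only a small low-rank update is trained. The dimensions, rank, and scaling value below are illustrative choices, not prescribed by the workshop.

```python
import numpy as np

# LoRA: instead of updating a full weight matrix W (d_out x d_in),
# train a low-rank update B @ A with rank r << min(d_out, d_in).
d_in, d_out, r = 768, 768, 8       # illustrative sizes (e.g. one attention projection)
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable rank-r factor
B = np.zeros((d_out, r))                    # trainable, zero-init so training starts at W
alpha = 16                                  # LoRA scaling hyperparameter

def lora_forward(x):
    # Frozen path plus scaled low-rank update: (W + (alpha / r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = d_out * d_in                  # parameters updated by full fine-tuning
lora_params = r * d_in + d_out * r          # parameters updated by LoRA
print(f"full fine-tuning params: {full_params:,}")   # 589,824
print(f"LoRA trainable params:   {lora_params:,}")   # 12,288
print(f"reduction: {full_params / lora_params:.0f}x")  # 48x
```

Even in this toy setting, the trainable parameter count drops by roughly 48x; in the workshop this idea is applied to real LLMs, where the savings make fine-tuning feasible on modest hardware.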
Major Outline
- The LLM Conundrum: Power vs. Resources
- Demystifying PEFT Techniques
- Efficient Training Strategies for PEFT
- Resource Optimization in PEFT
- Selecting the Right LLM & PEFT Technique for Your Task
- Advanced PEFT Techniques and Ongoing Research
- Hands-on Project: Build Your PEFT Model
Learning Outcomes
- Gain a comprehensive understanding of PEFT techniques and their benefits for LLM adaptation.
- Master resource-efficient training strategies and deployment options for PEFT models.
- Develop skills in selecting the right LLM and PEFT approach for specific tasks.
- Get hands-on experience building and evaluating your own PEFT model on provided datasets.
Requirements
- Google Colab or a local Jupyter Notebook environment
- Stable, high-speed internet connection
Instructor
Dr. Vaibhav Kumar, Sr. Director at ADaSci