Parameter-Efficient Tuning of Large Language Models (LLMs) with Novel Ensemble Knowledge Distillation Framework – Rohit Sroch

This talk focused on parameter-efficient tuning and knowledge distillation techniques for optimizing large language models (LLMs), combining the two so that a small student model can be adapted with only a fraction of its parameters while learning from a larger teacher.
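The talk summary does not spell out the specific framework, so the following is only a minimal sketch of how the two ideas are commonly combined: a student model wrapped with LoRA adapters (parameter-efficient tuning via the Hugging Face `peft` library) trained with a standard soft-label distillation loss. The model names, LoRA rank, temperature `T`, and mixing weight `alpha` are illustrative assumptions, not values from the talk.

```python
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model, TaskType

# Teacher: a larger fine-tuned model, kept frozen during distillation.
# (Model choices here are placeholders, not from the talk.)
teacher = AutoModelForSequenceClassification.from_pretrained(
    "bert-large-uncased", num_labels=2
).eval()

# Student: a smaller model wrapped with LoRA adapters so that only a
# small fraction of its parameters are trainable.
student_base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                       # assumed LoRA rank
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections in BERT
)
student = get_peft_model(student_base, lora_config)
student.print_trainable_parameters()  # adapters are a tiny share of all weights


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-label KL distillation with the usual hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

An ensemble variant of distillation, as the talk's title suggests, would typically average or otherwise combine the logits of several teacher models before computing the soft-label term; the exact combination strategy used in the presented framework is not described in this summary.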