Personally Identifiable Information (PII) detection is critical due to the increasing exploitation of individual data, particularly in the text analytics domain. With the rise in the application of large language models (LLMs) for Natural Language Processing (NLP) solutions, data security concerns call for effective on-premises solutions and privacy-centric methods.
This paper explores the use of LLMs fine-tuned on limited domain-specific datasets for detecting and masking PII, and benchmarks this solution against existing NLP methods such as BERT and GPT-3.5. Our approach fine-tunes the Vicuna-7B LLM using the Quantized Low-Rank Adaptation (QLoRA) technique, enabling cost-effective fine-tuning and deployment on consumer GPUs. The proposed approach offers several advantages: improved performance and reliability compared to GPT-3.5, enhanced data security by keeping data within the company's cloud, domain adaptability through model fine-tuning, and on-premises usage benefits such as reduced dependence on proprietary models, freedom from quota limitations, and flexible scaling of model-hosting infrastructure.
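The low-rank update at the heart of QLoRA can be sketched in plain NumPy. The dimensions and hyperparameters below are illustrative only, not those of Vicuna-7B, and the sketch omits the 4-bit quantization QLoRA applies to the frozen base weights:

```python
import numpy as np

# Hypothetical dimensions for illustration; real Vicuna-7B layers are far larger.
d_out, d_in, r = 64, 64, 8   # r is the LoRA rank
alpha = 16                   # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight (4-bit quantized in real QLoRA)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # trainable low-rank factor, zero-initialized

def adapted_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B receive gradients.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapter starts as a no-op on the frozen model.
assert np.allclose(adapted_forward(x), W @ x)

# Trainable parameters: r * (d_in + d_out) versus d_in * d_out for full fine-tuning.
print(A.size + B.size, "vs", W.size)  # 1024 vs 4096
```

Because only the small factors A and B are trained while W stays frozen (and quantized), optimizer state and gradient memory shrink dramatically, which is what makes fine-tuning a 7B-parameter model feasible on a single consumer GPU.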
Overall, this paper presents an efficient and secure solution for domain-specific PII detection tasks using LLMs.
Lattice | Vol 4 Issue 3