In this paper, we present a methodology for converting unstructured text into a structured question-and-answer format, specifically targeting 11 Indian languages. The scarcity of question-and-answer datasets for these languages poses a significant challenge for fine-tuning Large Language Models (LLMs) for specific tasks. To perform this conversion efficiently, we employed a ternary quantized model based on the LLaMA-2 architecture. Our model, BitNet 1.58, leverages a unique computation paradigm that reduces memory consumption and enhances computational efficiency. The dataset was created in the Alpaca format, and the model was trained on 47 million tokens over 3 epochs. Evaluation challenges were addressed using Grice's Maxims [3] and AI-assisted evaluation techniques. This research demonstrates significant potential for improving data quality and expanding usability across more languages.