- Optimizing LLM Inference for Faster Results Using Quantization – A Hands-On Guide. Optimizing LLM inference through quantization is a powerful strategy that can dramatically improve performance with only a slight reduction in accuracy.
- Multilingual Tokenization Efficiency in Large Language Models: A Study on Indian Languages. Authors: Mohamed Azharudeen M, Balaji Dhamodharan
- Harnessing LLMs for Time Series Forecasting: Developing a Swap-Based Hedging Strategy for Commodity Trading. Authors: Sriram Gudimella, Rohit Zajaria, Jagmeet Sarna
- Elevating Fairness in Consumer Credit Assessments: A Large Language Model (LLM) Driven Approach. Authors: Shubhradeep Nandi, Kalpita Roy
- Enhancing Large Language Models: Integrating Human Preferences and Conditional Reinforcement Learning. Authors: Suvojit Hore, Gayathri Nadella, Sanmathi Vaman Parvatikar
- Generative AI in Financial Crime Investigation: Enhancing Suspicious Activity Analysis. Authors: Varun Aggarwal, Charchit Bahl, Rahav Manoharan, Pushkar Raj
- Assessing the Effectiveness of Generative Adversarial Networks for Time Series Data Augmentation. Author: Srinivas Babu Ratnam
- How to Evaluate the RAG Pipeline Cost? This article details the key factors influencing RAG pipeline cost, covering implementation, operation, and data expenses.
- HybridRAG: Merging Structured and Unstructured Data for Cutting-Edge Information Extraction. HybridRAG integrates Knowledge Graphs and Vector Retrieval to enhance accuracy and speed in complex data extraction tasks.
- Adversarial Prompts in LLMs – A Comprehensive Guide. Adversarial prompts exploit LLM vulnerabilities, causing harmful outputs. This article covers their types, impacts, and defenses.