The rapid evolution of artificial intelligence (AI) and machine learning (ML) has significantly impacted various domains, with code intelligence being a notable area of advancement. Large language models (LLMs) like OpenAI’s GPT series, Google’s BERT, and others have revolutionized natural language processing (NLP) tasks. However, leveraging these models for code-related tasks introduces unique challenges due to the intricacies of programming languages and the need for precise, context-aware outputs. The paper “Granite Code Models: A Family of Open Foundation Models for Code Intelligence” addresses these challenges by introducing the Granite Code Models. These models are designed as open foundation models specifically tailored for code intelligence tasks, aiming to push the boundaries of what current LLMs can achieve in programming contexts. In this article, we will go through what Granite Code Models are, their key features, architecture, and their implementation using HuggingFace.
Table of Contents
- Understanding the Granite Code Models
- Key Features and Innovations
- Performance Highlights
- Practical Implications
- Architecture and Design
- Training Approach
- Comparative Performance
- Implementation of Granite Code Models to Generate Code
- Conclusion
First, we will look at what the Granite Code Models are; then we will implement one using HuggingFace.
Understanding the Granite Code Models
Granite models handle a range of programming tasks, including code completion, generation, editing, and translation between languages. They rely on extensive pretraining and subsequent fine-tuning to perform well across many use cases. The models are available in sizes from 3 billion to 34 billion parameters to fit different needs and resource budgets, and are characterized by a robust architecture, comprehensive training on diverse code datasets, and advanced instruction tuning.
A key innovation of the Granite models lies in their extensive benchmarking and evaluation against established standards like the CanItEdit and CodeLingua benchmarks. These benchmarks assess functional correctness, precision, and the ability to follow specific programming instructions, demonstrating the Granite models’ superior performance compared to other leading models in the field, such as CodeGemma and CodeLlama.
By offering advanced capabilities in multi-language support and instruction-following, Granite models provide a substantial step forward in enhancing developer productivity and accuracy. They not only address the current limitations of existing LLMs in handling code but also set a new standard for open, accessible AI resources in the code intelligence domain.
Source: Research Paper
Key Features and Innovations
- Granite models use transformer architectures tailored for coding tasks, improving training efficiency and output quality through techniques such as mixed precision training and layer normalization.
- The models are evaluated on numerous benchmarks, including CanItEdit for code editing and CodeLingua for code translation. These benchmarks measure whether generated code works on the first attempt (Pass@1) and how precise code edits are (ExcessCode). Granite models outperform CodeGemma and CodeLlama on these measures.
- Granite models are instruction tuned to understand and follow coding commands, which improves their performance when responding to user instructions.
- Granite models support many programming languages, making them useful across developer workflows. They have been tested on languages such as C, C++, Java, Python, and Go, demonstrating broad applicability.
Performance Highlights
Granite models perform strongly across multiple benchmarks:
- CanItEdit benchmark: The Granite-34B-Code-Instruct model achieved a 50.28 Pass@1 rate with an ExcessCode score of just 0.25, indicating precise edits without unnecessary code changes.
- CodeLingua benchmark: Granite models translate code between languages with high correctness, producing accurate and detailed translations.
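The Pass@1 figures reported above come from the standard unbiased pass@k estimator used in code-generation evaluation. The sketch below implements it in Python; the function name and the example numbers are illustrative, not taken from the paper.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    sampled completions passes, given n generated samples of which
    c are correct."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws: guaranteed pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples and 3 correct, pass@1 is simply 3/10
print(pass_at_k(10, 3, 1))
```

For k = 1 this reduces to the fraction of samples that pass, which is the Pass@1 rate quoted for the CanItEdit results.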
Practical Implications
Granite models aim to improve developer productivity and correctness in software development. They provide strong tools for generating, editing, and translating code, cutting down on the time and effort coding takes. They are especially helpful for large projects and complex codebases where productivity and precision matter most.
Architecture and Design
Transformer Architecture for Coding
Granite models leverage advanced transformer architectures optimized for coding tasks. This involves using techniques like mixed precision training, which improves computational efficiency and speeds up training times, and layer normalization to stabilize and enhance performance. These architectural choices are designed to handle the complexities of programming languages effectively.
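As a rough illustration of the layer normalization step mentioned here, the toy sketch below normalizes a vector of activations to zero mean and unit variance. This is a pure-Python teaching example, not the models' actual implementation (which also applies learned scale and shift parameters).

```python
def layer_norm(x, eps=1e-5):
    # normalize activations to zero mean and unit variance;
    # eps guards against division by zero for constant inputs
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / (var + eps) ** 0.5 for v in x]

normalized = layer_norm([1.0, 2.0, 3.0, 4.0])
print(normalized)
```

Keeping each layer's activations in a stable range like this is what makes very deep transformer stacks trainable in practice.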
Model Sizes and Options
The Granite family includes several models with different parameter counts:
- Granite-3B-Code-Instruct: 3 billion parameters
- Granite-8B-Code-Instruct: 8 billion parameters
- Granite-20B-Code-Instruct: 20 billion parameters
- Granite-34B-Code-Instruct: 34 billion parameters
These options let you choose the right model for anything from lightweight applications to heavy-duty workloads that demand substantial compute.
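As a quick sketch of choosing among these sizes, the helper below maps a rough GPU-memory budget to the largest checkpoint whose half-precision weights would fit. The function, and the two-bytes-per-parameter rule of thumb, are illustrative assumptions; the HuggingFace model IDs follow the naming used later in this article and should be verified on the hub.

```python
# Hypothetical helper: pick a Granite checkpoint by GPU-memory budget.
GRANITE_CHECKPOINTS = {
    3:  "ibm-granite/granite-3b-code-instruct",
    8:  "ibm-granite/granite-8b-code-instruct",
    20: "ibm-granite/granite-20b-code-instruct",
    34: "ibm-granite/granite-34b-code-instruct",
}

def pick_checkpoint(gpu_memory_gb: float, bytes_per_param: int = 2) -> str:
    """Return the largest checkpoint whose fp16 weights fit in memory."""
    fitting = [
        (size, name) for size, name in GRANITE_CHECKPOINTS.items()
        if size * bytes_per_param <= gpu_memory_gb
    ]
    if not fitting:
        raise ValueError("No checkpoint fits the given memory budget")
    return max(fitting)[1]  # max by parameter count

print(pick_checkpoint(24))  # a 24 GB GPU fits the 8B weights in fp16
```

Note this only accounts for the weights themselves; activations and the KV cache need additional headroom in practice.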
Source: Research Paper
Training Approach
Gathering and Preparing Data
Granite models are trained on large, diverse datasets spanning many programming languages. This broad mix of data ensures they work well across many kinds of coding tasks and languages. The training data includes extensive code samples and full projects drawn from sources such as GitHub.
Instruction Tuning
A key part of how the Granite models are trained is instruction tuning: fine-tuning on data paired with explicit instructions for each task. This makes the models good at following detailed commands and well suited to real-world coding work.
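To make this concrete, here is a toy sketch of what an instruction-tuning record might look like. The field names and the prompt template are hypothetical illustrations, not the actual schema used to train Granite.

```python
# Illustrative shape of an instruction-tuning record (hypothetical schema).
record = {
    "instruction": "Write a Python function that reverses a string.",
    "response": "def reverse(s):\n    return s[::-1]",
}

# During fine-tuning, instruction and response are concatenated into one
# training sequence; the loss is typically computed only on response tokens,
# so the model learns to produce answers rather than restate questions.
prompt = f"Question:\n{record['instruction']}\n\nAnswer:\n{record['response']}"
print(prompt)
```

Training on many such pairs is what lets an instruct model respond to a natural-language request instead of merely continuing raw code.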
Comparative Performance
Granite models consistently outperform other leading models in key metrics. For instance, the Granite-34B-Code-Instruct model achieves a Pass@1 score of 50.28 on the CanItEdit benchmark, indicating high accuracy in functional correctness. Similarly, in the CodeLingua benchmark, Granite models show strong translation capabilities across languages like C, C++, Java, Python, and Go.
Source: Research Paper
Implementation of Granite Code Models to Generate Code
Granite Code Models are available on HuggingFace, which makes them easy to use. In this section, we will see how to use a Granite Code Model to generate code.
First, load the tokenizer and model into the environment.
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "ibm-granite/granite-3b-code-base"
# load the tokenizer that matches the model
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
Now assign input text (the task you want the model to complete) and tokenize the text. Pass these tokens to the device. And generate output tokens.
# change input text as desired
input_text = """def largest(arr, n):

    # Initialize maximum element
    max = arr[0]

    # Traverse array elements from second
    # and compare every element with
    # current max
    for i in range(1, n):"""

# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt")

# transfer tokenized inputs to the device
for i in input_tokens:
    input_tokens[i] = input_tokens[i].to(device)

# generate output tokens
output = model.generate(**input_tokens)

# decode output tokens into text
output = tokenizer.batch_decode(output)

# loop over the batch to print, in this example the batch size is 1
for i in output:
    print(i)
The output will be something like this:
def largest(arr, n):

    # Initialize maximum element
    max = arr[0]

    # Traverse array elements from second
    # and compare every element with
    # current max
    for i in range(1, n):
        if arr[i] > max:
            max = arr[i]

    return max

# Driver code
arr = [10, 324, 45, 90, 9808]
n = len(arr)
print("Largest element is", largest(arr, n))<|endoftext|>
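The call to model.generate above uses default generation settings. To control the completion, the standard generation arguments of the transformers library can be passed; the values below are illustrative defaults for code generation, not settings recommended by the paper, and this fragment assumes model, tokenizer, and input_tokens are already defined as in the earlier steps.

```python
# Configuration fragment: tuning generation (assumes model, tokenizer,
# and input_tokens from the steps above).
output = model.generate(
    **input_tokens,
    max_new_tokens=128,   # cap the length of the completion
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.2,      # low temperature keeps code output focused
    top_p=0.95,           # nucleus sampling
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Lower temperatures generally suit code generation, where determinism matters more than variety; raising temperature or top_p yields more diverse completions.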
Thus, Granite Code Models, as open foundation models, also contribute to the broader AI community by providing accessible, high-quality resources for further innovation and application in code-related tasks.
Conclusion
The “Granite Code Models” set a new standard in the field of code intelligence. Their advanced architecture, comprehensive training, and impressive performance on key benchmarks demonstrate their potential to significantly improve code-related tasks in software development. In essence, the Granite Code Models not only enhance current capabilities in code-related tasks but also set the stage for future advancements in AI-driven software development. Their impact is poised to be significant, offering developers powerful tools to improve productivity, accuracy, and overall code quality in an increasingly complex digital landscape.
References
- Granite Code Models: A Family of Open Foundation Models for Code Intelligence
- Granite Code Models: Github
- Link to Code