Inference Unlimited

Code Generation Using Local LLM Models – Practical Examples

Large language models (LLMs) are increasingly used for code generation. Because these models can now be run locally, developers can take advantage of them without relying on cloud services. In this article, we discuss how to use local LLM models for code generation, with practical examples.

Why Local LLM Models?

Using local LLM models has several advantages:

  1. Privacy: prompts and generated code never leave your machine.
  2. Control: you choose the model, its version, and how it is configured.
  3. No cloud dependency: everything runs on your own hardware, without relying on external services.

Setting Up the Environment

To get started, you need:

  1. LLM Model: for example Mistral-7B or Llama-2.
  2. Libraries: the Hugging Face transformers and accelerate packages, plus torch.
  3. GPU Support: optional, but strongly recommended; running a 7B model on CPU alone is very slow.

Installing Required Libraries

pip install transformers accelerate torch
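
Before loading a model, it is worth confirming that PyTorch can actually see your GPU. A minimal check, using nothing beyond the packages installed above:

import torch

# Report whether a CUDA-capable GPU is visible to PyTorch
if torch.cuda.is_available():
    print(f"GPU available: {torch.cuda.get_device_name(0)}")
else:
    print("No GPU detected - generation will fall back to CPU and be much slower")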

Example 1: Generating Simple Python Code

Below is an example of generating simple Python code using a local LLM model.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Loading the model and tokenizer (use the full Hugging Face Hub identifier)
model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Half precision and automatic device placement (requires accelerate; practical only with a GPU)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# Preparing the prompt
prompt = "Write a Python function that calculates the sum of two numbers:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# Generating code
output = model.generate(input_ids, max_new_tokens=100, num_return_sequences=1)
generated_code = tokenizer.decode(output[0], skip_special_tokens=True)

print(generated_code)

Result

def sum_two_numbers(a, b):
    return a + b
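
Note that decoding the full output also reproduces the prompt text, since causal models return the prompt tokens together with the completion. A minimal sketch of keeping only the newly generated part, reusing the variables from the example above:

# Decode only the tokens generated after the prompt
new_tokens = output[0][input_ids.shape[-1]:]
completion = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(completion)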

Example 2: Generating Code Using the requests Library

In this example, we will show how the model can generate code using the requests library to perform an HTTP request.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Loading the model and tokenizer (use the full Hugging Face Hub identifier)
model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# Preparing the prompt
prompt = "Write Python code that performs a GET request to the JSONPlaceholder API:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# Generating code
output = model.generate(input_ids, max_new_tokens=150, num_return_sequences=1)
generated_code = tokenizer.decode(output[0], skip_special_tokens=True)

print(generated_code)

Result

import requests

response = requests.get("https://jsonplaceholder.typicode.com/posts/1")
print(response.json())
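
Each example in this article reloads the model from scratch. In a real workflow you would load it once and reuse it; a small helper along these lines works well (the function name is illustrative, not part of any library):

def generate_code(prompt, max_new_tokens=200):
    # Reuses the tokenizer and model already loaded above
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Return only the completion, without echoing the prompt
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

print(generate_code("Write Python code that performs a GET request to the JSONPlaceholder API:"))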

Example 3: Generating Code Using a Python Class

In this example, we will show how the model can generate code defining a class in Python.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Loading the model and tokenizer (use the full Hugging Face Hub identifier)
model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# Preparing the prompt
prompt = "Create a Python class that represents a rectangle:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# Generating code
output = model.generate(input_ids, max_new_tokens=200, num_return_sequences=1)
generated_code = tokenizer.decode(output[0], skip_special_tokens=True)

print(generated_code)

Result

class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

    def perimeter(self):
        return 2 * (self.width + self.height)
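
A quick way to sanity-check output like this is to run it directly:

# Exercise the generated class to confirm it behaves as expected
rect = Rectangle(3, 4)
print(rect.area())       # 12
print(rect.perimeter())  # 14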

Optimizing Code Generation

To improve the quality of the generated code, you can apply several techniques:

  1. Precise Prompts: describe exactly what you expect, including the language, inputs, and desired behavior.
  2. Length Control: set the generation length so the model has room to finish, but does not ramble.
  3. Multiple Generations: sample several candidate versions and choose the best one (see the sketch below).
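
All three techniques map directly onto parameters of generate(). A minimal sketch, reusing the model and tokenizer loaded earlier (the sampling values are reasonable starting points, not tuned recommendations):

prompt = "Write a Python function that checks whether a string is a palindrome:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# Sample three candidate completions with moderate randomness
outputs = model.generate(
    input_ids,
    max_new_tokens=120,        # length control
    do_sample=True,            # enable sampling instead of greedy decoding
    temperature=0.7,           # lower = more deterministic, higher = more varied
    top_p=0.95,                # nucleus sampling
    num_return_sequences=3,    # multiple candidates to compare
)

for i, out in enumerate(outputs, start=1):
    candidate = tokenizer.decode(out[input_ids.shape[-1]:], skip_special_tokens=True)
    print(f"--- Candidate {i} ---\n{candidate}\n")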

Summary

Local LLM models give developers a powerful code-generation tool. Because the models run on your own hardware, you keep full privacy and control over your code. In this article, we have presented several practical examples that show how to use these technologies in daily work.

Remember that the quality of the generated code depends on the quality of the model and the precision of the prompts. Experiment with different models and techniques to achieve the best results.
