Build a translation app with LangChain

Important

  • Foundry Local is available in preview. Public preview releases provide early access to features that are in active deployment.
  • Features, approaches, and processes can change or have limited capabilities before general availability (GA).

This tutorial shows you how to build a translation app with the Foundry Local SDK and LangChain using a local model to translate text between languages.

Prerequisites

Before starting this tutorial, you need:

  • Foundry Local installed on your device.
  • Python installed on your device.

Install Python packages

You need to install the following Python packages:

pip install langchain[openai]
pip install foundry-local-sdk

Tip

We recommend using a virtual environment to avoid package conflicts. You can create a virtual environment using either venv or conda.
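
For example, a minimal way to create and activate a virtual environment with venv (the activation command depends on your operating system and shell):

python -m venv .venv
# On Windows
.venv\Scripts\activate
# On macOS or Linux
source .venv/bin/activate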

Create a translation application

Create a new Python file named translation_app.py in your favorite IDE and add the following code:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from foundry_local import FoundryLocalManager

# By using an alias, the most suitable model will be downloaded
# to your end-user's device.
# TIP: You can find a list of available models by running the
# following command: `foundry model list`.
alias = "qwen2.5-0.5b"

# Create a FoundryLocalManager instance. This will start the Foundry
# Local service if it is not already running and load the specified model.
manager = FoundryLocalManager(alias)

# Configure ChatOpenAI to use your locally-running model
llm = ChatOpenAI(
    model=manager.get_model_info(alias).id,
    base_url=manager.endpoint,
    api_key=manager.api_key,
    temperature=0.6,
    streaming=False
)

# Create a translation prompt template
prompt = ChatPromptTemplate.from_messages([
    (
        "system",
        "You are a helpful assistant that translates {input_language} to {output_language}."
    ),
    ("human", "{input}")
])

# Build a simple chain by connecting the prompt to the language model
chain = prompt | llm

input = "I love to code."
print(f"Translating '{input}' to French...")

# Run the chain with your inputs
ai_msg = chain.invoke({
    "input_language": "English",
    "output_language": "French",
    "input": input
})

# print the result content
print(f"Response: {ai_msg.content}")

Note

One of the key benefits of Foundry Local is that it automatically selects the most suitable model variant for the user's hardware. For example, if the user has a GPU, it downloads the GPU version of the model. If the user has an NPU (Neural Processing Unit), it downloads the NPU version. If the user doesn't have either a GPU or NPU, it downloads the CPU version of the model.
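
To check which variant was selected on your machine, you can inspect the resolved model information. This is a minimal sketch that reuses the manager from the script above; the exact fields available may vary between SDK versions.

# Print the model that Foundry Local resolved for this device
info = manager.get_model_info(alias)
print(f"Resolved model ID: {info.id}")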

Run the application

To run the application, open a terminal and navigate to the directory where you saved the translation_app.py file. Then, run the following command:

python translation_app.py

Prerequisites

Before starting the JavaScript version of this tutorial, you need:

  • Foundry Local installed on your device.
  • Node.js installed on your device.

Install Node.js packages

You need to install the following Node.js packages:

npm install @langchain/openai @langchain/core
npm install foundry-local-sdk

Create a translation application

Create a new JavaScript file named translation_app.js in your favorite IDE and add the following code:

import { FoundryLocalManager } from "foundry-local-sdk";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// By using an alias, the most suitable model will be downloaded 
// to your end-user's device.
// TIP: You can find a list of available models by running the 
// following command in your terminal: `foundry model list`.
const alias = "phi-3-mini-4k";

// Create a FoundryLocalManager instance. This will start the Foundry 
// Local service if it is not already running.
const foundryLocalManager = new FoundryLocalManager();

// Initialize the manager with a model. This will download the model 
// if it is not already present on the user's device.
const modelInfo = await foundryLocalManager.init(alias);
console.log("Model Info:", modelInfo);

// Configure ChatOpenAI to use your locally-running model
const llm = new ChatOpenAI({
    model: modelInfo.id,
    configuration: {
        baseURL: foundryLocalManager.endpoint,
        apiKey: foundryLocalManager.apiKey
    },
    temperature: 0.6,
    streaming: false
});

// Create a translation prompt template
const prompt = ChatPromptTemplate.fromMessages([
    {
        role: "system",
        content: "You are a helpful assistant that translates {input_language} to {output_language}."
    },
    {
        role: "user",
        content: "{input}"
    }
]);

// Build a simple chain by connecting the prompt to the language model
const chain = prompt.pipe(llm);

const input = "I love to code.";
console.log(`Translating '${input}' to French...`);

// Run the chain with your inputs
chain.invoke({
    input_language: "English",
    output_language: "French",
    input: input
}).then(aiMsg => {
    // Print the result content
    console.log(`Response: ${aiMsg.content}`);
}).catch(err => {
    console.error("Error:", err);
});
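
If you prefer to display the translation as it's generated, you can use chain.stream in place of chain.invoke. The following is a minimal sketch that reuses the chain defined above; adjust the output handling to suit your application.

// Stream the translation as it is generated (minimal sketch)
const stream = await chain.stream({
    input_language: "English",
    output_language: "French",
    input: "I love to code."
});

for await (const chunk of stream) {
    // Each chunk carries a partial piece of the response text
    process.stdout.write(chunk.content);
}
process.stdout.write("\n");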

Note

One of the key benefits of Foundry Local is that it automatically selects the most suitable model variant for the user's hardware. For example, if the user has a GPU, it downloads the GPU version of the model. If the user has an NPU (Neural Processing Unit), it downloads the NPU version. If the user doesn't have either a GPU or NPU, it downloads the CPU version of the model.

Run the application

To run the application, open a terminal and navigate to the directory where you saved the translation_app.js file. Then, run the following command:

node translation_app.js