Build Your AI Translator With This Next.js Tutorial

Using Next.js to code an AI translation tool.

In this Next.js tutorial, we will teach you how to easily build your very own AI translator powered by OpenAI’s GPT-4. Of course, this app will be compatible with other ChatGPT alternatives if you are not a fan of OpenAI’s tools.

The Next.js tutorial's preparation phase

First, let’s quickly discuss who this tutorial is for and the basic requirements to complete it successfully.

  • We wrote this tutorial with beginners in mind. Intermediate and advanced coders can skip most explanations and push through the code quickly.
  • You will need a basic understanding of JavaScript, React, and Next.js.
  • You should be familiar with Chakra UI or a similar UI-building component library.
  • You will need a GPT-4 API key (or any other model’s key).

Now that you understand your requirements, let’s define our translator (I promise, it will be quick).

What is an AI translator?

Our definition of an AI translator is an application developed using Next.js that helps you translate languages using Large Language Models (LLMs). By combining artificial intelligence and translation you can create an incredibly efficient workflow.
Once you follow all the steps, your GPT-4 application should look like this:

Start building your AI translator

First, set up a new Next.js project by running the following commands in your terminal:
				
					npx create-next-app@latest gpt-translate
				
			
On installation, you’ll see the following prompts:
				
					Would you like to use TypeScript? No / Yes
Would you like to use ESLint? No / Yes
Would you like to use Tailwind CSS? No / Yes
Would you like to use `src/` directory? No / Yes
Would you like to use App Router? (recommended) No / Yes
Would you like to customize the default import alias? No / Yes
				
			
Navigate to the app’s directory and install Chakra UI and the other packages by running the following commands:

				
					cd gpt-translate
npm i @chakra-ui/react @chakra-ui/next-js @emotion/react @emotion/styled framer-motion react-icons react-loader-spinner xlsx dotenv eventsource-parser
				
			
In this Next.js tutorial, we are using the app directory setup for Next.js 13. To use Chakra UI in the app directory, create a providers.js file inside the app directory.

📣 Refer to the project structure below if you get lost in the process ➡️
Project structure of the Next.js tutorial

Once created, paste the code below: 

				
					// app/providers.js
'use client'

import { CacheProvider } from '@chakra-ui/next-js'
import { ChakraProvider } from '@chakra-ui/react'

export function Providers({ children }) {
  return (
    <CacheProvider>
      <ChakraProvider>
        {children}
      </ChakraProvider>
    </CacheProvider>
  )
}
				
			
Next, edit layout.js by importing <Providers /> and wrapping the children with it.
As a side note, from this section onward we’ve been dealing with a delightful Elementor issue that makes it difficult to copy our code. If it gets too annoying, feel free to grab the code directly from our repository.
The first batch of code for your AI Translator.
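If it helps to double-check your edit, here is a minimal sketch of what layout.js could look like after wrapping the children with <Providers /> (the metadata values are placeholders):

// app/layout.js (minimal sketch)
import { Providers } from './providers'

export const metadata = {
  title: 'GPT Translate',
}

export default function RootLayout({ children }) {
  return (
    <html lang="en">
      <body>
        <Providers>{children}</Providers>
      </body>
    </html>
  )
}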
Run npm run dev to start the application. You should be able to access the app at http://localhost:3000 and see the page below. This shows you are on track so far!

How the GPT-4 application should look so far in the Next.js tutorial

Building the UI of your AI translator

Chakra UI is a powerful library designed to build clean UIs quickly. As a developer, you need to learn how to use UI libraries for rapid development. The UI will consist of a simple table with a source and target column.

Drag-and-drop feature

We will put in place the drag-and-drop feature to upload the source terms from an Excel file. If you haven’t already, now would be a good time to put your strings in an Excel file.
Done? Great, we can proceed with the drag-and-drop feature:
  • Create a Components folder in the src directory and inside it create DragFile.js (Drag & drop component)

2nd batch of code in the Next.js tutorial
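If you cannot copy the code from the screenshot, here is a minimal sketch of the kind of component we mean: a styled Chakra UI drop zone that simply forwards the drag events to handlers passed in as props (the prop names are illustrative):

// Components/DragFile.js (minimal sketch)
'use client'

import { Box, Text } from '@chakra-ui/react'

// Renders a drop zone and forwards the drag events to the parent component.
export default function DragFile({ onDrop, onDragOver }) {
  return (
    <Box
      border="2px dashed"
      borderColor="gray.300"
      borderRadius="md"
      p={10}
      textAlign="center"
      onDrop={onDrop}
      onDragOver={onDragOver}
    >
      <Text color="gray.500">Drag and drop your .xlsx file here</Text>
    </Box>
  )
}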
  • Import DragFile into src/page.js (Home component) and replace the HTML with the <DragFile /> component.
3rd batch of code to build your AI Translator
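As a reference, a minimal version of the Home component at this stage could be as simple as this (adjust the import path to wherever your Components folder lives):

// src/page.js (minimal sketch)
'use client'

import DragFile from './Components/DragFile'

export default function Home() {
  return (
    <main>
      <DragFile />
    </main>
  )
}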

    You should end up with this ⬇️

Now, we need to create a drag-and-drop function for handling file drops. It is called when a file is dropped onto the Home component. Below is the code for the handleFileDrop event handler.

    4th batch of code designed to create a drag & drop function for handling file drops.
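In case the screenshot is hard to copy, here is a minimal sketch of such a handler. It assumes XLSX is imported from the xlsx package, toast comes from Chakra UI’s useToast() hook, and the rows are kept in a fileData state created with useState([]); the state and column names (Source, Target) are the ones used in the rest of the tutorial.

// Inside the Home component (minimal sketch of the file-drop handler)
const handleFileDrop = (event) => {
  event.preventDefault()
  const file = event.dataTransfer.files[0]
  if (!file) return

  const reader = new FileReader()
  reader.onload = (e) => {
    // Parse the workbook and grab the first sheet
    const workbook = XLSX.read(e.target.result, { type: 'array' })
    const sheet = workbook.Sheets[workbook.SheetNames[0]]
    // Each row becomes an object keyed by the header row, e.g. { Source: '...', Target: '...' }
    const rows = XLSX.utils.sheet_to_json(sheet)

    if (!rows.length || !('Source' in rows[0])) {
      // Visual feedback when the expected header is missing
      toast({
        title: 'Missing header',
        description: 'The file needs a "Source" column.',
        status: 'error',
        isClosable: true,
      })
      return
    }

    setFileData(rows.map((row) => ({ source: row.Source, target: row.Target || '' })))
  }
  reader.readAsArrayBuffer(file)
}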

    This function does the following:

    • It reads the contents of the dropped Excel file.
    • It extracts the data from specific headers (Source and Target)
    • It updates the state with the extracted data for further use in the AI translator. 
    • It provides you with feedback in case there is a missing header through a visual toast ➡️

    Table component

    Since many translators use Excel to translate, our UI shouldn’t be too different. We don’t want our translators to re-learn everything from scratch. As such, we will stick to a tabular UI.
     
Let’s start by creating a TextBox row component. It will be mapped in the Table and will hold the source and target strings.
5th batch of code to create a table component in the Next.js tutorial
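A minimal sketch of that row component, assuming one row per source string, local state for the translation, and an illustrative source-${id} convention for the editable source cell (we will rely on it again when we write the translate function):

// Components/TextBox.js (minimal sketch)
'use client'

import { useState } from 'react'
import { Tr, Td, Button } from '@chakra-ui/react'

// One table row: the source string arrives as a prop, the translation lives in local state.
export default function TextBox({ source, id }) {
  const [translation, setTranslation] = useState('')

  return (
    <Tr>
      <Td>
        {/* Editable div so the source can be corrected in place */}
        <div id={`source-${id}`} contentEditable suppressContentEditableWarning>
          {source}
        </div>
      </Td>
      <Td>
        <div contentEditable suppressContentEditableWarning>
          {translation}
        </div>
      </Td>
      <Td>
        <Button size="sm">Translate</Button>
      </Td>
    </Tr>
  )
}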
The source and translated texts sit inside editable divs, so editing them is easy. The translation state is updated with the translated string, while the source string is passed down from the Table component as a prop.
     
The next step will be to create a table in the Home component.
• Create the Table and map the fileData into the TextBox component.
• Pass the handleFileDrop function to the onDrop attribute event listener and call event.preventDefault() to prevent the default behavior of the onDragOver event (see the sketch below).
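A minimal sketch of that return block, assuming the Chakra table primitives (TableContainer, Table, Thead, Tbody, Tr, Th) are imported from @chakra-ui/react and TextBox is the component from the earlier sketch:

// Inside the Home component's return (minimal sketch)
<main>
  <DragFile
    onDrop={handleFileDrop}
    onDragOver={(event) => event.preventDefault()}
  />
  <TableContainer mt={8}>
    <Table variant="simple">
      <Thead>
        <Tr>
          <Th>Source</Th>
          <Th>Target</Th>
          <Th></Th>
        </Tr>
      </Thead>
      <Tbody>
        {fileData.map((row, index) => (
          <TextBox key={index} id={index} source={row.source} />
        ))}
      </Tbody>
    </Table>
  </TableContainer>
</main>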
    You can test the GPT-4 application with an .xlsx file and you should get this output.

How to integrate ChatGPT into your AI translator?

To integrate ChatGPT, you need to create a secret API key and use it to authenticate your requests to OpenAI’s API endpoint.
Keep in mind the GPT-4 API costs. This part of the tutorial is the only one that isn’t free. That being said, if you create a new OpenAI account, you will receive $5 of tokens for free. That amount is more than enough to thoroughly test our AI Translator. Visit the documentation for more information.

    Location of OpenAI's API key
Now that you have created it, integrate ChatGPT by creating a .env file in your root directory and adding your API key as follows:
    OPENAI_API_KEY = *******

📣 Make sure to save your OpenAI API key in an environment variable.

    Translation API endpoint

Create an API endpoint in your Next.js app that will call the OpenAI chat completions API. To do this, create an api folder with the following structure.
    The Next.js tutorial path for your AI Translator project.
In your route.js, add the code below. Have a quick read of the request body documentation here.
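If the screenshot gives you trouble, here is a minimal, non-streaming sketch of the endpoint. The payload values mirror the ones we use in the streaming version later on; how you shape the response is up to you, here we simply return the translated text.

// app/api/generate/route.js (minimal non-streaming sketch)
export async function POST(req) {
  const { prompt } = await req.json()

  const payload = {
    model: 'gpt-4',
    messages: prompt, // e.g. [{ role: 'system', content: ... }, { role: 'user', content: ... }]
    max_tokens: 100,
    temperature: 0.7,
    n: 1,
  }

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(payload),
  })

  const json = await response.json()
  // Return only the translated text to the client
  return new Response(json.choices[0].message.content)
}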

    You can now change the payload values as you see fit.

    Translation function handler

    Finally, let’s create the translation function.
     
You can go through the snippet to see if you can get the gist of it, or you can follow the explanation below it.
     
    1. Create the function in the TextBox component.
    The 8th batch of code in your AI Translator
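Here is a minimal sketch of that function, matching the explanation below. The source-${id} lookup and the exact wording of systemMsg are illustrative; the English-to-French instruction in particular is just an example, so swap in whatever language pair and instructions you need.

// Inside the TextBox component (minimal non-streaming sketch)
const translate = async (id) => {
  // Grab the current text from the editable source cell
  const sourceText = document.getElementById(`source-${id}`).innerText

  // System message: instructions that guide GPT's behaviour (wording is up to you)
  const systemMsg =
    'You are a professional translator. Translate the text from English to French and return only the translation.'

  // Prompt: the chat messages sent to our /api/generate endpoint
  const prompt = [
    { role: 'system', content: systemMsg },
    { role: 'user', content: sourceText },
  ]

  const response = await fetch('/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  })

  const translated = await response.text()
  setTranslation(translated)
}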

    The function takes an id as its argument and uses it to get the innerText of the source table data. Then, it uses it as part of the prompt message that it sends to the /generate endpoint. 

    Before we move on, let’s look into the systemMsg and prompt variables.

    systemMsg – This variable stores GPT’s system message. It is a special type of input used to provide instructions, context, or more information to guide GPT’s response. It helps with the following:

  • It improves the quality of the output.
  • It provides context.
  • It facilitates continuity.
  • It lets you define style and tone.
  • It conveys task-specific instructions.
  • It helps avoid undesirable output (specific terms, tone, style, etc.).
  • It enhances the user experience.

    Prompt – This variable stores the ChatGPT prompts. It provides context or instructions for the model’s response.

    2. Add the translate function to the onClick event handler on the translate Button in the TextBox component.

    The 10th batch of code for your AI Translator. You should add the translate function to the onClick event handler on the translate Button in the TextBox component.
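In the sketch from earlier, that boils down to one attribute on the Button:

// In the TextBox component's JSX (minimal sketch)
<Button size="sm" onClick={() => translate(id)}>
  Translate
</Button>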

    With that said, the core application is finally ready to be tested! Go ahead and try it.

    If you’ve made it this far, you should pat yourself on the back. Creating an AI translator is no easy feat. That being said, you probably noticed some flaws. For instance, what’s with the loading time?

Unlike machine translation software such as Google Translate, language models are slower.
    The thing is, when you send a translation request to OpenAI, it is generated fully before being sent back in a single response.
The AI translator sends a translation request to the OpenAI API, which then passes it to GPT-4; when the model is done translating, it sends the result back to the API, which then returns it to the AI translation application.
    And you guessed it, as the source text grows, so does the wait time. That’s a big problem from a UX perspective. Whether you are building this tool for yourself or your translators, as people, we do not like to wait. Even if LLMs could produce better quality than a regular MT, it won’t matter if it’s at the expense of speed. Remember, convenience always wins.

    Luckily, we found a workaround that we’ll share with you.

    We can ensure that our GPT application generates a continuous stream of text. In simple terms, GPT types the response in a human-like tempo. If you’re familiar with the ChatGPT website, you should have an idea of what we are talking about. It’s not an accidental feature. It gives the translated strings word by word, and more importantly, it gives the user a sense of progress.

    GPT text streaming

    For the current app to stream translations, we need to refactor some parts of the code and create an OpenAIStream helper function.
    • Create OpenAIStream.js in the lib folder containing the code below.

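If you cannot copy the screenshot, here is a minimal sketch of such a helper. It follows the common pattern built around the eventsource-parser package we installed at the start (the v1 createParser(onParse) API); treat it as a starting point rather than a copy of our exact code.

// lib/OpenAIStream.js (minimal sketch)
import { createParser } from 'eventsource-parser'

export async function OpenAIStream(payload) {
  const encoder = new TextEncoder()
  const decoder = new TextDecoder()

  // Same POST request as before, but the payload now has stream: true
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(payload),
  })

  const stream = new ReadableStream({
    async start(controller) {
      // Called once per server-sent event coming back from OpenAI
      const onParse = (event) => {
        if (event.type !== 'event') return
        // [DONE] marks the end of the response, so we close the stream
        if (event.data === '[DONE]') {
          controller.close()
          return
        }
        try {
          const json = JSON.parse(event.data)
          const text = json.choices[0].delta?.content || ''
          controller.enqueue(encoder.encode(text))
        } catch (err) {
          controller.error(err)
        }
      }

      const parser = createParser(onParse)
      for await (const chunk of res.body) {
        parser.feed(decoder.decode(chunk))
      }
    },
  })

  return stream
}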
This helper function sends a POST request to OpenAI with the payload, like the previous version, but the similarities stop there. It creates a stream that continuously parses the data received from the OpenAI API while watching for the [DONE] token, which marks the end of the response.
When that happens, we close the stream.
Now, set stream: true in the payload object used in our generate endpoint. This will return an object that streams back the response as data-only server-sent events. Define a config variable and set the runtime to “edge” as shown below. This is all you need to define this API route as an Edge Function.
You can think of Edge Functions as serverless functions with a lightweight runtime. They have a smaller code size limit and less memory, and they don’t support all Node.js libraries. They are useful when you need to interact with data over the network as fast as possible.

// api/generate/route.js
import { OpenAIStream } from "../../lib/OpenAIStream";

export const config = {
  runtime: "edge",
};

export async function POST(req) {
  const { prompt } = await req.json();

  const payload = {
    messages: prompt,
    max_tokens: 100,
    temperature: 0.7,
    n: 1,
    model: "gpt-4",
    stream: true,
  };

  const stream = await OpenAIStream(payload);
  const res = new Response(stream);
  return res;
}
In our TextBox component, the only code that changes is our translate function. Basically, we define a reader using the native web API getReader() and progressively add data to our translation state as it’s streamed in.

The last bit of code in our Next.js tutorial. Now your AI translator should be working amazingly.
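A minimal sketch of the streaming version, reusing the same illustrative names as before:

// Inside the TextBox component (minimal streaming sketch)
const translate = async (id) => {
  const sourceText = document.getElementById(`source-${id}`).innerText

  const systemMsg =
    'You are a professional translator. Translate the text from English to French and return only the translation.'
  const prompt = [
    { role: 'system', content: systemMsg },
    { role: 'user', content: sourceText },
  ]

  const response = await fetch('/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  })

  // Read the streamed body chunk by chunk and append it to the translation state
  const reader = response.body.getReader()
  const decoder = new TextDecoder()
  setTranslation('')

  while (true) {
    const { value, done } = await reader.read()
    if (done) break
    setTranslation((prev) => prev + decoder.decode(value))
  }
}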

We finally refactored our GPT-4 application to use Edge Functions with streaming. It makes the app faster and really improves the user experience, especially for the impatient translators out there.

    Try it!