
Next.js: How to use the Vercel AI SDK in Next.js

Athar Naveed · Published in Dev Genius · 5 min read · Oct 18, 2024

Hello there! Recently I came across the Vercel AI SDK, and it looks like we no longer have to do the hard work of integrating LLMs into our web apps ourselves. So, let’s take a deep dive 🤿 into it.

Before starting, you should have a Next.js app up and running. If you don’t have one, you can create it from here.
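If you need a fresh project, the official create-next-app scaffold is the quickest route; the project name below is just a placeholder:

# scaffold a new Next.js app (my-chatbot is a placeholder name)
npx create-next-app@latest my-chatbot
cd my-chatbot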

Choosing an LLM

After building a Next.js app, you have to choose an LLM. Vercel maintains a list of available LLM providers here:

https://sdk.vercel.ai/providers/ai-sdk-providers

Groq API

For this blog, I’ll be using Groq’s LLMs. To get an API key, you just have to sign up at Groq and generate one, like this:

[Image: Creating an API key in Groq Cloud]
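Once you have a key, store it in a .env.local file at the root of your Next.js project. Next.js loads this file automatically on the server, and the variable name below matches what we’ll read in the route handler later:

# .env.local
GROQ_API_KEY=your_key_goes_here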

Next.js

Let’s get back to our frontend. As we are using Groq, we have to install some dependencies to use it in our Next.js app. These dependencies are:

# pnpm
pnpm add @ai-sdk/groq ai

# npm
npm i @ai-sdk/groq ai

# yarn
yarn add @ai-sdk/groq ai

# Note: If you are using some other LLM, such as Mistral,
# the syntax looks like this:

# pnpm
pnpm add @ai-sdk/mistral ai

# npm
npm i @ai-sdk/mistral ai

# yarn
yarn add @ai-sdk/mistral ai

Let’s go through them one by one: what is each dependency for?

@ai-sdk/groq: This dependency lets us use Groq’s LLMs.

ai: If you have ever connected an LLM to your app, you may have noticed that all of the content comes back at once, when what you really want is a ChatGPT-style response: the text appears on the frontend as the LLM generates it.
Vercel has you covered with this dependency. It can do more than just streaming, but I’ll cover that in future blogs.

Frontend

Now we’ll build the interface of our app. Technically, we don’t have to do much of anything; Vercel has done it for us. How?

'use client';
import { useChat } from "ai/react";

export default function Chat() {
  // useChat wires the whole chat loop together: state, input handling, and the API call
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {/* 1: render the conversation (user messages and LLM responses) */}
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}
      {/* 2: the prompt input */}
      <form onSubmit={handleSubmit}>
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
Don’t worry, I’ll explain it.

use client: The form is filled in by the user in the browser, and useChat is a React hook, so this has to be a Client Component. That’s why we need the “use client” directive.

useChat: A hook provided by Vercel that returns four things:

1. messages: The list of chat messages, both the user’s prompts and the responses the LLM returns. It is an array of objects in roughly this format (just an illustration):

[
  { "role": "user", "content": "Hello, how are you?" },
  { "role": "assistant", "content": "I am an AI, I don't have feelings, but I am here to assist you." },
]

2. input: The current prompt text being sent to the LLM.

3. handleInputChange: Captures the prompt as the user types it.

4. handleSubmit: When the form is submitted, this method calls the “api/chat” endpoint by default (you can point it elsewhere, as the sketch below shows).
What is “api/chat”? Don’t worry, it’s coming up ahead.
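By the way, that default endpoint isn’t fixed: useChat accepts an options object, and its api option points the hook at a different route. A minimal sketch, where the route path is just an example:

const { messages, input, handleInputChange, handleSubmit } = useChat({
  // hypothetical custom route; the default is "/api/chat"
  api: "/api/my-chat",
});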

So, 1 is where the user’s messages and the LLM’s responses are displayed.

And 2 is where the user enters the prompt.

Backend

Your frontend sends a request to the “api/chat” endpoint, but that endpoint doesn’t exist yet. So, let’s make it.
Make a folder named api inside the app folder, then make a chat folder inside the api folder. Inside the chat folder, make a file named route.ts. Your folder structure will look like this:
“app/api/chat”

# folder structure

app (folder)
- api (folder)
-- chat (folder)
--- route.ts (file)
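From the project root, you can create the same structure in one go (assuming a Unix-like shell):

mkdir -p app/api/chat
touch app/api/chat/route.ts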

Inside the route.ts file, add this

import { createGroq } from "@ai-sdk/groq";
import { streamText, convertToCoreMessages } from "ai";

// 1: configure the Groq provider
const groq = createGroq({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: "https://api.groq.com/openai/v1",
});

// 2: handle the POST request that useChat sends to /api/chat
export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = await streamText({
    model: groq("llama-3.2-11b-vision-preview"),
    messages: convertToCoreMessages(messages),
    system: "You are a helpful assistant",
  });
  return result.toDataStreamResponse();
}

createGroq: This is where we set up our own provider instance; in other words, where we change some of its default settings.

streamText: We use this for the ChatGPT-style response: as the LLM starts generating text, it starts rendering on the frontend.

convertToCoreMessages: This method converts the frontend’s message history into an LLM-compatible format (one the model can understand).

1- createGroq: You can pass multiple params here to change your provider’s default settings; currently I am using apiKey and baseURL only.
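For reference, the provider accepts a few more settings than the two I’m using. This is just a sketch, and the header value is a made-up example:

const groq = createGroq({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: "https://api.groq.com/openai/v1",
  // extra headers sent with every request (hypothetical value)
  headers: { "x-app-name": "my-chatbot" },
});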

2- POST: Here we extract the messages sent from the frontend and pass them to the model. As the model starts returning its response, it is streamed back to the frontend.
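If you want to sanity-check the route without the UI, you can hit it directly once the dev server is running. The request body below matches the shape useChat sends; note that the reply comes back in the AI SDK’s streaming wire format, so expect raw chunks rather than plain JSON:

curl -X POST http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello!"}]}'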

That’s it. Your chatbot is ready. Run your dev server to start using it, like this:
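Use whichever package manager you installed with:

# pnpm
pnpm dev

# npm
npm run dev

# yarn
yarn dev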

[Image: Working of the chatbot]

That’s it!

See you in the next blog with some more exciting work on TypeScript & Python.
