Next.js: How to Use the Vercel AI SDK in Next.js

Athar Naveed

Assalam o Alaikum and Hello there! Recently I came across the Vercel AI SDK, and it looks like we no longer have to do the hard work of integrating LLMs into our web apps ourselves. So, let’s take a deep dive 🤿 into it.


Before starting, you should have a Next.js app set up. If you don’t have one, you can create it from here.

Choosing an LLM

After building your Next.js app, you have to choose an LLM. Vercel has a list of supported LLM providers, which you can see here:

https://sdk.vercel.ai/providers/ai-sdk-providers

Groq API

For this blog, I’ll be using Groq’s LLMs. To get access to the API, you just have to sign up at Groq and generate an API key, like this:

Creating API key in Groq cloud
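
Once the key is generated, you’ll want it available to your server code. A common Next.js approach, and the one the backend code later in this post assumes, is to put it in a .env.local file at the project root (GROQ_API_KEY is the variable name we’ll read via process.env; the key value below is just a placeholder):

# .env.local (keep this file out of version control)
GROQ_API_KEY=gsk_your_key_here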

Next.js

Let’s get back to our Next.js project. Since we are using Groq, we have to install some dependencies to use it in our Next.js app. These dependencies are:

# pnpm
pnpm add @ai-sdk/groq ai

# npm
npm i @ai-sdk/groq ai

# yarn
yarn add @ai-sdk/groq ai

# Note: If you are using some other LLM provider, such as Mistral,
# the syntax looks like this

# pnpm
pnpm add @ai-sdk/mistral ai

# npm
npm i @ai-sdk/mistral ai

# yarn
yarn add @ai-sdk/mistral ai

Let’s understand, one by one, what each dependency is for.

@ai-sdk/groq: This dependency will allow us to use Groq’s LLMs.

ai: If you have ever connected an LLM to your app, you will have noticed that the whole response is returned at once, whereas you want a ChatGPT-style response: as the LLM generates the text, it appears on the frontend piece by piece.
Vercel has got you covered with this dependency. It can offer a lot more than just that, but I’ll cover the rest in future blogs.

Frontend

Now we’ll build the interface of our app. Technically, we don’t have to do much; Vercel has already done it for us. How?

'use client';
import { useChat } from "ai/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {/* -------1: the chat history */}
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}
      {/* -------2: the prompt input form */}
      <form onSubmit={handleSubmit}>
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
Don’t worry, I’ll explain it.

use client: The form data is entered by the user on the client, and React hooks like useChat only work in Client Components, which is why we have to use the “use client” directive.

useChat: It is a hook that Vercel provides out of the box, and it mainly returns 4 things:

  1. messages: The list of messages in the conversation, including the responses returned by the LLM.
    From “messages.map” we can tell that it is an Array of Objects, in roughly this format (it’s just an illustration):
    [
    { "role": "user", "content": "Hello, how are you?" },
    { "role": "assistant", "content": "I am an AI, I don't have feelings, but I am here to assist you." },
    ]
  2. input: The prompt that is being sent to the LLM.
  3. handleInputChange: As the user types the prompt into the form, this captures it.
  4. handleSubmit: As soon as the form is submitted, this method calls “api/chat” by default.
    What is “api/chat”? Don’t worry, it’s coming up ahead.

So, 1 is the place where the user’s messages and the LLM’s responses are displayed.

2 is the place where the user enters the prompt.
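
One more thing worth knowing: handleSubmit hits “api/chat” only because that is the default. useChat accepts an options object, so you can point it elsewhere or seed the conversation. Here is a minimal sketch, assuming the ai/react options api and initialMessages (the greeting text is purely illustrative):

'use client';
import { useChat } from "ai/react";

export default function ChatWithOptions() {
  const { messages } = useChat({
    // the default endpoint; change this if your route lives elsewhere
    api: "/api/chat",
    // seed the chat with an assistant greeting (illustrative content)
    initialMessages: [
      { id: "greet-1", role: "assistant", content: "Hi! Ask me anything." },
    ],
  });
  // dump the message objects so you can see their shape
  return <pre>{JSON.stringify(messages, null, 2)}</pre>;
}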

Backend

Your frontend has sent a request to the “api/chat” endpoint, but that endpoint doesn’t exist yet. So, let’s make it.
Make a folder named api inside the app folder, then make a chat folder inside the api folder. Inside the chat folder, make a file named route.ts. Your folder structure will look like this:
“app/api/chat/route.ts”

# folder structure

app (folder)
- api (folder)
-- chat (folder)
--- route.ts (file)

Inside the route.ts file, add this:

import { createGroq } from "@ai-sdk/groq";
import { streamText, convertToCoreMessages } from "ai";

// --------1: configure a custom Groq provider instance
const groq = createGroq({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: "https://api.groq.com/openai/v1",
});

// --------2: handle the POST request sent by the useChat hook
export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = await streamText({
    model: groq("llama-3.2-11b-vision-preview"),
    messages: convertToCoreMessages(messages),
    system: "You are a helpful assistant",
  });
  return result.toDataStreamResponse();
}

createGroq: We’ll be making our own provider instance here. No, we won’t be training anything; we’ll just be changing some default settings.

streamText: We’ll be using this for the ChatGPT-style response. As the LLM generates the text, it will be rendered on the frontend bit by bit.

convertToCoreMessages: This method converts the incoming messages (i.e., the prompt and chat history) into an LLM-compatible format (one the LLM can understand).
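
To make that concrete, here is a small sketch of the conversion, using the simplified message shape from the illustration earlier (the types come from the ai package; the output shown in the comment is approximate):

import { convertToCoreMessages } from "ai";

// UI-style messages, as they arrive from the useChat hook on the frontend
const uiMessages = [
  { id: "1", role: "user" as const, content: "Hello, how are you?" },
];

// Drops UI-only fields like `id` and normalizes the content into the
// core format that streamText expects
const coreMessages = convertToCoreMessages(uiMessages);
// roughly: [{ role: "user", content: "Hello, how are you?" }]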

1- createGroq: You can pass multiple params here to change the default settings of your provider; currently I am using only apiKey and baseURL.
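
For instance, headers is another option createGroq accepts, in case you need to attach extra headers to every request. This is just a sketch; the header name and value here are illustrative:

import { createGroq } from "@ai-sdk/groq";

const groqCustom = createGroq({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: "https://api.groq.com/openai/v1",
  // illustrative custom header, sent along with every request
  headers: { "x-app-name": "my-chatbot" },
});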

2- POST: Here we extract the messages (the prompt and its history) sent from the frontend and pass them to the model after running them through convertToCoreMessages. As the model returns its response, it is streamed back to the frontend.

That’s it. Your chatbot is ready. Run your server and start using it, like this:

Working of the Chatbot
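
In case you’re unsure how to run the server: assuming your project was created with create-next-app and still has the default dev script, any of these will start it:

# start the dev server (use the package manager you installed with)
pnpm dev
# or: npm run dev
# or: yarn dev
# then open http://localhost:3000 and start chatting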

That’s it!

See you in the next blog with some more exciting work on TypeScript & Python.

