Now that we understand the importance of streaming text, and how to convert the text stream from the LLM into a UIMessageStream for the frontend, let's look at how to implement this in our app.
We have a small Vite app with a root component in client/root.tsx. Our first task is to use the useChat hook to get the messages and sendMessage function, then connect them to the pre-built components for message rendering and chat input.
We need to implement the TODO in our App component in client/root.tsx:
```tsx
import { useChat } from '@ai-sdk/react';

const App = () => {
  // TODO: use the useChat hook to get the messages and sendMessage function
  const { messages, sendMessage } = TODO;

  const [input, setInput] = // ...
```
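As a point of reference, a completed version might look like the sketch below. The empty options object, the shape of the sendMessage call, and the onSubmit handler are assumptions based on the AI SDK's useChat API; your component names may differ.

```tsx
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

const App = () => {
  // messages: the conversation so far, updated as the
  //   assistant's response streams in
  // sendMessage: appends a user message and triggers the API call
  const { messages, sendMessage } = useChat({});

  const [input, setInput] = useState('');

  const onSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    // send the current input as a user message
    sendMessage({ text: input });
    setInput('');
  };

  // ...render messages and the chat input here
};
```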
After setting up the frontend, we need to work on the API route in api/chat.ts. This POST route is called each time the user sends a message, and the request body contains the entire message history collected so far.
The API route has several TODOs to complete:
```ts
export const POST = async (req: Request): Promise<Response> => {
  const body = await req.json();

  // TODO: get the UIMessage[] from the body
  const messages: UIMessage[] = TODO;
```
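For this first TODO, the messages can be pulled straight off the parsed body — a minimal sketch, assuming the default useChat transport posts the history under a `messages` key:

```ts
import type { UIMessage } from 'ai';

const body = await req.json();

// useChat sends the full conversation history on every request
const messages: UIMessage[] = body.messages;
```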
When examining the request in the network tab, you'll find that body.messages contains an array of UIMessages. To send these to streamText, we first need to convert them from UIMessages to ModelMessages using a function from the ai package. Check the reference material for more information.
Once all these steps are complete, you'll be able to have a full conversation with the Gemini model — not just a single call and response, but an evolving conversation in which the LLM retains context across all previous messages.
Import the useChat hook from @ai-sdk/react in client/root.tsx
Replace the TODO in the App component with the appropriate useChat({}) call
Complete the onSubmit handler in ChatInput to use sendMessage with the input text
In api/chat.ts, extract the UIMessages from the request body (replace the first TODO)
Import and use a function to convert UIMessages to ModelMessages (replace the second TODO)
Pass the ModelMessages to the streamText function by adding them to the existing configuration
Create a UIMessageStream from the streamText result (replace the fourth TODO)
Test your implementation by running the dev server and having a conversation with the AI
Check the network tab to ensure messages are being sent and streamed correctly. Notice how the UIMessageStream is being sent to the frontend.
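Putting the API-route steps together, the finished handler might look like the sketch below. The convertToModelMessages function and the toUIMessageStreamResponse() helper come from the ai package; the google provider import and the exact Gemini model id are assumptions, so match them to your project's existing streamText configuration.

```ts
import { google } from '@ai-sdk/google';
import { convertToModelMessages, streamText, type UIMessage } from 'ai';

export const POST = async (req: Request): Promise<Response> => {
  const body = await req.json();

  // 1. Extract the UIMessage[] sent by useChat
  const messages: UIMessage[] = body.messages;

  // 2. Convert UIMessages to ModelMessages and stream the reply
  const result = streamText({
    model: google('gemini-2.0-flash'), // assumed model id
    messages: convertToModelMessages(messages),
  });

  // 3. Return the result as a UIMessageStream response
  return result.toUIMessageStreamResponse();
};
```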