Your application uses the following stack: Next.js (App Router) with TypeScript, the AI SDK for chat state and streaming, Gemini 2.5 Flash as the default model, and pnpm as the package manager.
Run `pnpm install` to install all dependencies.
Run `pnpm dev` to start the development server.
Open http://localhost:3000 in your browser to see the UI.
Explore what's already built: the chat interface on the left, the memories section with the plus button, and the data tab at the bottom.
Find `data/emails.json` and open it. This contains 547 pre-generated emails for a fictional person named Sarah Chen. The data includes emails of varying lengths, topics, and automated content, so it feels realistic.
This lets you understand the dataset and test retrieval algorithms without calling the LLM.
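To make that concrete, here is a minimal sketch of a keyword-overlap retriever you could run against the dataset without any LLM calls. The email shape (`id`, `subject`, `body`) is an assumption for illustration; check `data/emails.json` for the actual field names before adapting it.

```typescript
// Hypothetical email shape -- verify against data/emails.json.
type Email = { id: string; subject: string; body: string };

// Lowercase, split on non-word characters, drop very short tokens.
const tokenize = (text: string): string[] =>
  text.toLowerCase().split(/\W+/).filter((t) => t.length > 2);

// Score each email by how many query tokens appear in it, return the top K.
function rankEmails(emails: Email[], query: string, topK = 3): Email[] {
  const queryTokens = new Set(tokenize(query));
  return emails
    .map((email) => {
      const tokens = tokenize(`${email.subject} ${email.body}`);
      const score = tokens.filter((t) => queryTokens.has(t)).length;
      return { email, score };
    })
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((r) => r.email);
}

// Tiny in-memory sample standing in for the real 547-email file:
const sample: Email[] = [
  { id: "1", subject: "Dentist appointment reminder", body: "Your cleaning is scheduled for Tuesday." },
  { id: "2", subject: "Weekend hiking plans", body: "Trail conditions look great for Saturday." },
];

const hits = rankEmails(sample, "When is my dentist appointment?");
```

Swapping `sample` for the parsed contents of `data/emails.json` lets you iterate on scoring (keyword overlap, recency weighting, embeddings) with instant feedback.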
Open `app/page.tsx` (the main front page). This is where the app loads chats and memories, and renders the sidebar, top bar, and chat component.
Look for functions like `loadChats()`, `saveChats()`, and `createChats()`. These all write to a single JSON file instead of a database.
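The shape of those helpers is roughly the following sketch. The `Chat` type and file path here are placeholders, not the repo's actual definitions; the point is that "persistence" is just read-file/write-file.

```typescript
import { existsSync, readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical chat shape -- the real types live alongside app/page.tsx.
type Chat = { id: string; title: string; messages: unknown[] };

// The app uses its own path; a temp file keeps this sketch self-contained.
const CHATS_FILE = join(tmpdir(), "chats-sketch.json");

function loadChatsSketch(): Chat[] {
  // No file yet means no chats yet.
  if (!existsSync(CHATS_FILE)) return [];
  return JSON.parse(readFileSync(CHATS_FILE, "utf8")) as Chat[];
}

function saveChatsSketch(chats: Chat[]): void {
  // Rewrite the whole file on every save -- fine for a prototype.
  writeFileSync(CHATS_FILE, JSON.stringify(chats, null, 2), "utf8");
}

saveChatsSketch([{ id: "c1", title: "First chat", messages: [] }]);
const chats = loadChatsSketch();
```

Whole-file rewrites like this are simple but not concurrency-safe, which is one reason you'd move to a real database in production.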
Open `app/api/chat/route.ts`. This is the most important file in the codebase; the `POST` function here is where most of your work will happen.
Find the `createUIMessageStream()` wrapper. This handles generating chat titles, appending messages, and persisting data.
Find the `streamText()` call. This is where the LLM generates responses and writes them to the UI message stream. It currently uses Gemini 2.5 Flash, but you can switch models here.
Review the sidebar component, top bar component, and chat component to see how the interface is structured.
Note where the `useChat()` hook is used. This hook manages chat state and persistence; it's what you learned about in the AI SDK crash course.
Revisit the persistence functions (`loadChats`, `saveChats`, etc.). These currently use JSON file operations, but could easily be swapped out for Postgres or similar in production.
This will help you understand the architecture's flexibility for production deployments.
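One way to see that flexibility is to imagine the persistence functions behind an interface, so the JSON-file version and a database version are interchangeable. Everything here (the interface, class names, the `Chat` shape) is illustrative, not code from the repo.

```typescript
// Hypothetical chat shape for this sketch.
type Chat = { id: string; title: string };

// The contract the rest of the app depends on.
interface ChatStore {
  loadChats(): Promise<Chat[]>;
  saveChats(chats: Chat[]): Promise<void>;
}

// Today: the JSON-file approach (stubbed as in-memory for brevity).
class JsonFileChatStore implements ChatStore {
  private data: Chat[] = [];
  async loadChats() { return this.data; }
  async saveChats(chats: Chat[]) { this.data = chats; }
}

// Tomorrow: same interface, backed by Postgres.
// class PostgresChatStore implements ChatStore { ... }

async function demo(): Promise<Chat[]> {
  const store: ChatStore = new JsonFileChatStore();
  await store.saveChats([{ id: "c1", title: "Hello" }]);
  return store.loadChats();
}
```

Because callers only see `ChatStore`, swapping the backing store is a one-line change at the construction site.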