Okay, this is what the solution looks like. We're instantiating google from @ai-sdk/google, and inside the call we get auto-complete on all of the models the AI SDK knows about. Let's go with gemini-2.0-flash-lite. We then pass the model and our prompt into generateText. Note that we're awaiting generateText here, because it returns a Promise.
```ts
import { google } from '@ai-sdk/google';
import { generateText } from 'ai';

const model = google('gemini-2.0-flash-lite');

const prompt = 'What is the capital of France?';

const result = await generateText({
  model,
  prompt,
});

console.log(result.text);
```
So with generateText, we ask the LLM to generate some text and we wait for the result. When we run this, we can see that we get the output:
The capital of France is Paris.
Now of course, we're using @ai-sdk/google here, but we could just as easily use @ai-sdk/anthropic or one of the many other provider packages. Crucially, that means instantiating the model, and changing the model, is just a single line of code. Vendor lock-in with the AI SDK becomes a thing of the past.
You really are able to just write your code and think about the model later. Or rather, you don't have to worry that you're chaining yourself to one provider's API that you'll need to swap out down the line.
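To make that concrete, here's a sketch of what the same program might look like on a different provider. This assumes you've installed @ai-sdk/anthropic and set the appropriate API key in your environment; the model name is just an illustrative example:

```ts
// Only the provider import and model instantiation change;
// the generateText call itself stays exactly the same.
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const model = anthropic('claude-3-5-haiku-latest');

const result = await generateText({
  model,
  prompt: 'What is the capital of France?',
});

console.log(result.text);
```

Because generateText accepts any model that conforms to the AI SDK's model interface, the rest of your code doesn't need to know which provider is behind it.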
Nice work, and I will see you in the next one.