All right, I'm going to run this exercise and we should see immediately that we get a bunch of text being spat out here, probably about a planet called Xylos because it always seems to do this same one. There you go, what did I say?

The sky above Xylos was a perpetual something, swirling tapestry of deep indigo and vivid emerald.
Lovely. And then we see this array of facts here.
We can see it's an object with a property called facts, which is an array of strings.
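As a sketch, the logged object has roughly this shape (the fact strings here are paraphrased for illustration):

```typescript
// Roughly the shape the exercise logs: an object with a `facts`
// property holding an array of strings.
const result = {
  facts: [
    'Dominant flora on Xylos exhibits a distinct crystallized structure.',
    'Prominent obsidian crags rise from the surface.',
  ],
};

console.log(Array.isArray(result.facts)); // true
console.log(typeof result.facts[0]); // "string"
```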

Dominant flora on Xylos exhibits a distinct crystallized structure. Prominent obsidian crags.
Very cool. The solution, then, is to use generateText, passing in the model and the prompt, and specifying Output.object:
```ts
const factsResult = await generateText({
  model,
  prompt: `Give me some facts about the imaginary planet. Here's the story: ${finalText}`,
  output: Output.object({
    schema: z.object({
      facts: z
        .array(z.string())
        .describe(
          'The facts about the imaginary planet. Write as if you are a scientist.',
        ),
    }),
  }),
});
```
Output, as we can see, is imported from 'ai' up here, this uppercase Output. And there are lots of different types of output that you can specify on generateText. You can have Output.array, which may even have been more appropriate here. You can specify Output.choice, which is the enum option here. You can specify Output.json for any random JSON or Output.object as we're doing, or Output.text, which is the default.
The cool thing about this is that the result gets put on factsResult.output, and we can see that it's typed as facts: string[].

The way this works is that the Zod schema gets converted into JSON Schema and sent up to the provider. That tells the LLM what format it's supposed to return in, and then whatever the LLM returns gets validated by the Zod schema. So if the LLM does something weird, like putting a number instead of a string into this array, we're going to get an error in our application coming from the Zod schema.
So this is the version 6 way of doing it, but there is also a version 5 way, which looks extremely similar: we still have a model, a prompt, and a schema, except it uses a different top-level function, generateObject, instead of generateText with Output.object.
```ts
// The v5 way
const factsResult = await generateObject({
  model,
  prompt: `Give me some facts about the imaginary planet. Here's the story: ${finalText}`,
  schema: z.object({
    facts: z
      .array(z.string())
      .describe(
        'The facts about the imaginary planet. Write as if you are a scientist.',
      ),
  }),
});
```
We can see too that if we look at solution.2, generateObject has a line through it, indicating that it's been deprecated. So it will probably be removed in AI SDK 7, whenever that comes out.
So that is how you get LLMs to return structured information: you specify an output property and pass it Output.object. You can use this for doing really, really smart stuff.
And we're going to take advantage of this a lot as the course progresses. Nice work and I will see you in the next one.