In this workshop, you'll take on a couple of more advanced challenges to really flex the AI muscles you've been building over the last few days:
- Creating an evaluator-optimizer loop
- Showing sources in the frontend
The Evaluator-Optimizer Loop
What we've been calling our agent is really just an if-else statement:
- If we have enough information, we answer the question.
- If we don't have enough information, we search.
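In code terms, that decision boils down to something like the sketch below. This is a minimal sketch only: the helper names and signatures (`hasEnoughInformation`, `answerQuestion`, `searchWeb`) are placeholders standing in for whatever your implementation actually calls them.

```ts
// A minimal sketch of the current control flow. The helper types are
// assumptions, not the exact functions from earlier lessons.
type Deps = {
  hasEnoughInformation: (context: string[]) => Promise<boolean>;
  answerQuestion: (question: string, context: string[]) => Promise<string>;
  searchWeb: (question: string) => Promise<string[]>;
};

export const nextStep = async (
  question: string,
  context: string[],
  deps: Deps,
) => {
  if (await deps.hasEnoughInformation(context)) {
    // Enough information: answer the question.
    return deps.answerQuestion(question, context);
  }

  // Not enough information: search for more.
  return deps.searchWeb(question);
};
```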
So this is feeling to me less like an agent and more like an evaluator. The description of an evaluator-optimizer loop from Anthropic's Building Effective Agents article feels apt:
> One LLM call generates a response while another provides evaluation and feedback in a loop.
In this lesson, you'll embrace that design and optimize around it.
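Here's a rough sketch of the shape we're heading for, assuming a generator and an evaluator implemented as two separate LLM calls. The function names, the `Evaluation` shape, and the retry limit are all assumptions for illustration, not the exact code we'll end up with.

```ts
// A sketch of an evaluator-optimizer loop: one LLM call generates an
// answer, a second evaluates it, and the feedback flows into the next
// generation attempt.
type Evaluation =
  | { verdict: "pass" }
  | { verdict: "fail"; feedback: string };

type LoopDeps = {
  generateAnswer: (question: string, feedback: string[]) => Promise<string>;
  evaluateAnswer: (question: string, answer: string) => Promise<Evaluation>;
};

export const evaluatorOptimizerLoop = async (
  question: string,
  deps: LoopDeps,
  maxAttempts = 3,
) => {
  const feedback: string[] = [];
  let answer = "";

  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    // Generator: produce a candidate answer, informed by any
    // feedback from previous attempts.
    answer = await deps.generateAnswer(question, feedback);

    // Evaluator: a second LLM call judges the answer.
    const evaluation = await deps.evaluateAnswer(question, answer);
    if (evaluation.verdict === "pass") {
      return answer;
    }

    // Carry the feedback into the next attempt.
    feedback.push(evaluation.feedback);
  }

  // Fall back to the last attempt if nothing passes in time.
  return answer;
};
```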
Showing Sources In The Frontend
The current problem with our setup is that the frontend doesn't receive anything until an action is taken.
One thing I really like from looking at other DeepResearch implementations is the way they display their sources: often as a list of cards, each with a favicon, a title, and a snippet.
In this lesson, we'll copy that pattern.
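As a rough sketch of what that could look like in React: the `Source` shape below is an assumption (adapt it to whatever your search tool returns), the Tailwind classes are purely illustrative, and Google's favicon service is just one easy way to get a favicon per domain.

```tsx
// A sketch of a source card: favicon, title, and snippet.
type Source = {
  title: string;
  url: string;
  snippet: string;
};

const SourceCard = ({ source }: { source: Source }) => {
  const hostname = new URL(source.url).hostname;

  return (
    <a
      href={source.url}
      target="_blank"
      rel="noreferrer"
      className="flex gap-3 rounded-lg border p-3"
    >
      {/* Google's favicon service resolves a favicon from the domain. */}
      <img
        src={`https://www.google.com/s2/favicons?domain=${hostname}&sz=32`}
        alt=""
        className="h-4 w-4"
      />
      <div>
        <div className="font-medium">{source.title}</div>
        <p className="text-sm text-gray-500">{source.snippet}</p>
      </div>
    </a>
  );
};

export const SourceList = ({ sources }: { sources: Source[] }) => (
  <div className="grid gap-2">
    {sources.map((source) => (
      <SourceCard key={source.url} source={source} />
    ))}
  </div>
);
```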