§ 04 / Field note · 2026-05-05

AI as mirror, not advisor

The design choice in NBD to use AI for reflection-surfacing rather than recommendation.

The default AI design move in career tools is recommendation: "based on your profile, you should consider these careers." It is the obvious move. The data is there, the model is good at fitting, the user expects an output.

The lab's design move in NBD is the opposite: AI as mirror. The system surfaces patterns in what the student has written, without telling them what to do about it.

Concretely, a student writes a paragraph about themselves, completes a card sort for values and strengths, and names a single word at the end of the workshop. The AI layer reads all three artifacts and says back what it sees, for example, that the values they ranked highest don't appear in their paragraph, or that the single word they chose conflicts with the strength they picked first. The AI does not recommend; it reflects.
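A minimal sketch of what that mirror layer might look like, assuming the three artifacts arrive as plain strings and lists. All names here are illustrative, not NBD's actual implementation, and the substring matching is deliberately naive:

```python
def surface_gaps(paragraph: str, ranked_values: list[str],
                 single_word: str) -> list[str]:
    """Return observations about gaps between artifacts; never advice."""
    observations = []
    text = paragraph.lower()

    # Gap 1: top-ranked values absent from the self-description.
    # (Naive substring check; a real system would need semantic matching.)
    missing = [v for v in ranked_values[:3] if v.lower() not in text]
    if missing:
        observations.append(
            f"The values you ranked highest ({', '.join(missing)}) "
            "do not appear in your paragraph."
        )

    # Gap 2: the closing word never surfaces in what was written earlier.
    if single_word.lower() not in text:
        observations.append(
            f"Your single word, '{single_word}', does not appear "
            "in how you described yourself."
        )

    return observations
```

Note that every returned string is declarative: it names a gap and stops there, leaving the student to decide what, if anything, the gap means.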

The difference matters because the two modes work toward different ends. Recommendation displaces the student's judgment, produces an answer, closes the question, fits the throughput frame. Mirroring sharpens the student's judgment, produces a question, opens the inquiry, fits the learning frame. The lab is committed to the second, which means resisting the much easier first move on every NBD design decision.

The technical challenge is not producing recommendations. Any modern model can do that. The challenge is producing observations that are accurate, uncomfortable, and not prescriptive. That is a different problem. It requires the system to say "here is a gap" without saying "here is what you should do about the gap."
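One possible guardrail for that constraint (illustrative, not the lab's design) is a post-filter that rejects model output containing prescriptive language, keeping the system on the "here is a gap" side of the line:

```python
import re

# Hypothetical markers of a recommendation rather than an observation.
# A real system would need a far richer test than a regex list.
PRESCRIPTIVE = [
    r"\byou should\b", r"\bconsider\b", r"\bwe recommend\b",
    r"\btry to\b", r"\byou need to\b", r"\bought to\b",
]

def is_observation(text: str) -> bool:
    """True if the text states a gap without telling the student what to do."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in PRESCRIPTIVE)
```

A filter like this only catches surface phrasing; the harder part, as the note says, is generating observations that are accurate and uncomfortable in the first place.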

The library connection. Mezirow's transformative learning theory (in the library) hinges on a "disorienting dilemma": a productive gap between how you see yourself and what the evidence shows. The disorientation is what triggers the work of reframing. AI is good at producing those gaps if you ask it to. It is equally good at smoothing them over with a recommendation if you let it.

Open thread. The risk: AI-as-mirror is easy to slide into AI-as-judge. The line between "here is a gap in your evidence" and "you are not as values-aligned as you think" is thin. The lab has started thinking about the difference between an observation and a verdict, and what design moves keep the system on the observation side. There is a longer note in this, possibly the central design question of RQ 4.
