2026-01-22 · Mika Alvarez
Instrumenting mentor feedback without turning coaches into bots
mentorship · systems
Operational metrics can poison culture when they chase vanity numbers. We only track two things: whether a learner received a first-pass review within the promised window, and whether that review referenced at least one concrete file path. Those two checks protect quality without gamifying kindness.
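For the curious, the two checks reduce to a pair of predicates over a review record. The sketch below is illustrative only: the Review fields, the 48-hour window, and the path-matching regex are assumptions standing in for the real schema.

```python
import re
from dataclasses import dataclass
from datetime import datetime, timedelta

PROMISED_WINDOW = timedelta(hours=48)  # assumed value, for illustration only
# Rough heuristic for "a concrete file path", e.g. src/app/main.py
FILE_PATH_RE = re.compile(r"\b[\w.-]+(?:/[\w.-]+)+\b")

@dataclass
class Review:
    submitted_at: datetime     # when the learner submitted work
    first_review_at: datetime  # when the first-pass review landed
    body: str                  # the review text

def within_window(r: Review) -> bool:
    """Check 1: did the first-pass review arrive inside the promised window?"""
    return r.first_review_at - r.submitted_at <= PROMISED_WINDOW

def cites_concrete_path(r: Review) -> bool:
    """Check 2: does the review point at least once to a concrete file?"""
    return FILE_PATH_RE.search(r.body) is not None
```

Note that both predicates are about the learner's experience, not the mentor's output volume, which is the whole point.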
Mentors opt into anonymized aggregates for their own improvement; we never stack-rank humans on a leaderboard. Instead, we surface queues where learners have waited past the promised window, so staffing can flex during crunch weeks.
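The blocked-queue report is equally small. A hedged sketch, reusing the assumed 48-hour window from above; the queue names and the shape of the pending list are hypothetical:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

PROMISED_WINDOW = timedelta(hours=48)  # same assumed window as above

def blocked_counts(pending, now=None):
    """Count waiting learners per queue whose wait exceeds the window.

    `pending` is an iterable of (queue_name, submitted_at) pairs.
    """
    now = now or datetime.now(timezone.utc)
    blocked = Counter()
    for queue, submitted_at in pending:
        if now - submitted_at > PROMISED_WINDOW:
            blocked[queue] += 1
    return blocked  # staffing flexes toward the largest counts
```

Because the report aggregates queues rather than people, it answers "where do we need hands?" without ever ranking a mentor.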
When a mentor needs to step away for caregiving, we swap ownership explicitly in the thread so learners never ping the void. That tiny protocol reduced duplicate questions more than any autoresponder.
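The hand-off itself is barely code at all, which is why it works. As a minimal sketch, assuming a hypothetical thread record and an injected post function standing in for whatever chat tool is in use:

```python
def swap_ownership(thread: dict, outgoing: str, incoming: str, post) -> None:
    """Reassign the review thread and announce it where learners will see it."""
    thread["owner"] = incoming
    post(
        thread,
        f"@{incoming} has this review thread while @{outgoing} is away; "
        f"please direct questions to @{incoming}.",
    )
```

The announcement in the thread is the protocol; the field update is just bookkeeping.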
We are still experimenting with lightweight tagging so learners can filter feedback by topic, but tags remain optional to avoid performative labeling.
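Optionality is enforced in the filter, not the data entry. A tiny sketch, with the item shape and helper name assumed for illustration:

```python
def filter_by_tag(feedback_items: list[dict], tag: str) -> list[dict]:
    """Return feedback carrying `tag`; untagged items are skipped, never required."""
    return [item for item in feedback_items if tag in item.get("tags", ())]
```

Untagged feedback stays first-class: it simply doesn't appear in topic filters, and no mentor is nudged to label for labeling's sake.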