Guild-backed vetting
What happens between the moment you post a job and the moment a shortlist lands on your dashboard — and how to get the best signal out of it.
The underlying principle
Traditional hiring pipelines rely on keyword-matched resume screens or, increasingly, AI parsers. Both are noisy: they filter on signals that correlate weakly with actual performance in the role. The top of your funnel ends up full of candidates who pattern-match on surface features but fail at the first technical interview.
Guild-backed vetting replaces that screen with structured, independent evaluation by domain experts who actually do the work. Every application is read by multiple experts, scored against a published rubric, and aggregated into a single consensus score. You receive a shortlist that's been pre-filtered by people whose judgement you would otherwise spend months learning to trust.
Writing screening questions
Screening questions are the single most important thing you control. The guild rubric is set by the guild, but the content the experts are evaluating comes from your questions. Good questions produce high-signal reviews; bad questions waste everyone's time.
What makes a screening question good:
- It maps to a real skill you care about. Not a proxy — the actual skill. If you care about system design, ask a system design question. Don't ask about programming languages as a stand-in.
- It requires a substantive answer. Questions that can be answered in one sentence give reviewers nothing to score. Aim for answers in the 100–300 word range.
- It's grounded in a specific scenario. "Describe a time you..." questions produce better signal than "How would you..." questions because reviewers can check the answer against real experience.
- It's unique to your role. Generic questions get generic answers. The more your question reflects the specific challenges of the role, the higher-signal the reviews.
What the consensus score actually means
Each candidate ends up with a single consensus score on a 0–100 scale. That score is not a raw average — outlier votes are first filtered out using IQR statistics, and then the average of the remaining scores becomes the consensus. The guild approval threshold is 60 — candidates at or above this score are approved and appear on your shortlist.
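As a sketch, the aggregation described above might look like the following. The 1.5× multiplier and the exclusive quartile method are assumptions on our part; the docs only specify IQR-based outlier removal, averaging the remaining votes, and the approval threshold of 60.

```python
import statistics

def consensus_score(votes, iqr_factor=1.5, threshold=60):
    """Filter outlier votes with an IQR rule, then average the rest.

    `iqr_factor` and the quartile method are illustrative assumptions,
    not confirmed platform parameters.
    """
    q1, _, q3 = statistics.quantiles(votes, n=4)  # quartile cut points
    iqr = q3 - q1
    lo, hi = q1 - iqr_factor * iqr, q3 + iqr_factor * iqr
    kept = [v for v in votes if lo <= v <= hi]    # drop outlier votes
    score = statistics.mean(kept)
    return score, score >= threshold              # approved if >= 60

# A single rogue low vote is discarded before averaging:
score, approved = consensus_score([72, 75, 78, 80, 82, 10])
```

Here the vote of 10 falls outside the IQR fence, so the consensus lands in the high 70s and the candidate is approved, whereas a raw average would have been dragged down.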
Within your shortlist, higher scores indicate stronger consensus among reviewers. Use score differences directionally — a candidate at 82 versus 76 tells you something, but don't over-fit to small differences. Read the aggregated expert feedback for candidates near the threshold.
The scores are relative to the guild's rubric, not to your personal bar. A guild with a very high baseline (e.g. the Engineering guild) will have fewer 90+ scores than a guild with a lower baseline.
Reading endorsements
Endorsements appear on the shortlist as a separate signal from the consensus score. A candidate can be strongly endorsed with a middling consensus score, or have a high consensus score and no endorsements — each combination tells you something different.
- High score + strong endorsements. The safest candidate to prioritise. Multiple experts are willing to stake real VETD on this candidate's success.
- High score, no endorsements. Still a strong candidate — the consensus is favourable, but no expert had standout conviction. Very common and not a negative signal.
- Medium score, strong endorsement. One or more experts saw something the panel as a whole didn't score highly. Worth a closer look; often candidates in this bucket are strong on dimensions the rubric underweights.
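The combinations above amount to a simple triage rule. A minimal sketch, assuming a hypothetical "high score" cutoff of 75 and an endorsement count per candidate (both illustrative, not platform values):

```python
def triage(score, endorsements, high=75):
    """Map a shortlisted candidate to a review bucket.

    `high` and the raw endorsement count are illustrative assumptions;
    the platform surfaces endorsements as a separate signal but does
    not publish a cutoff.
    """
    if score >= high and endorsements > 0:
        return "prioritise"       # high score + strong endorsements
    if score >= high:
        return "strong"           # high score, no endorsements
    if endorsements > 0:
        return "closer look"      # medium score, strong endorsement
    return "read the feedback"    # near-threshold, no endorsements
```

The point is not the exact cutoffs but that score and endorsements are independent axes, so reading them together gives four distinct buckets rather than one ranking.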
Making decisions from the shortlist
The shortlist is ordered by consensus score by default, but the dashboard lets you re-sort by endorsement strength, recency, or candidate seniority. Use whichever view matches the role's priority.
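If you export the shortlist and want to reproduce those views yourself, the re-sorts are straightforward. The field names below are illustrative, not the dashboard's actual schema:

```python
# Hypothetical shortlist rows -- field names are assumptions.
shortlist = [
    {"candidate": "A", "score": 82, "endorsements": 3},
    {"candidate": "B", "score": 86, "endorsements": 0},
    {"candidate": "C", "score": 76, "endorsements": 2},
]

# Default view: consensus score, descending.
by_score = sorted(shortlist, key=lambda c: c["score"], reverse=True)

# Endorsement-strength view, breaking ties on score.
by_endorsement = sorted(
    shortlist,
    key=lambda c: (c["endorsements"], c["score"]),
    reverse=True,
)
```

Note how the two views disagree at the top: the highest consensus score and the strongest endorsements can belong to different candidates, which is exactly why the dashboard offers both sorts.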
A few practical tips:
- Don't re-screen on resumes. The whole point of vetting is to replace the resume screen. If you go back to pattern-matching resumes on the shortlist, you lose the value.
- Read the aggregated feedback for borderline candidates. Candidates near the approval threshold are exactly where feedback is most useful — experts often wrote specific observations that explain their scores.
- Interview broadly from the top of the shortlist. A consensus score of 82 isn't meaningfully different from a score of 86 — both are strong signals. Don't over-fit to the top of the ranking.
The feedback loop
When you mark a candidate as hired or rejected, that decision feeds back into the platform. Experts who scored the candidate highly (and endorsed them, if applicable) see their reputation and reward move. Over time, this produces a measurably better match between guild consensus and your actual hiring outcomes.
You can also post short feedback when marking a candidate as hired — "strong on system design, weaker on communication than the scores suggested," for example. This goes into the guild's calibration data and helps experts tune their own reviews for your specific role type.