What the Pivot Recommender actually does
A pivot is not any change whatsoever. It is a meaningful change to your customer segment, problem focus, positioning, channel strategy, or solution direction, made because the evidence says your current assumptions are weak.
The Pivot Recommender exists for the moment after you have done the hard work of customer discovery and you need to decide what the evidence actually means. Founders are usually good at collecting anecdotes. They are much less consistent at weighing those anecdotes against the specific assumptions their business depends on.
Zigzag turns that judgment into a structured analysis. It compares the customer responses you have collected with the critical hypotheses already attached to your project and estimates how much risk still sits in your highest-priority assumptions.
Why this matters after survey responses start coming in
Most founders do not struggle because they ignore all feedback. They struggle because they interpret feedback selectively. One enthusiastic interview can feel decisive. One skeptical respondent can feel catastrophic. Neither is a reliable basis for strategy on its own.
The right question is not whether some people liked your idea. The right question is whether the assumptions that matter most to your business are being validated strongly enough to justify continuing on the same path. If your most important hypothesis is still weak after real customer conversations, that matters more than ten encouraging comments about secondary details.
This is why the Pivot Recommender only becomes useful after real, completed responses have come in. It is designed to sit on top of evidence, not replace it. The goal is not to create drama. The goal is to help you distinguish between a normal iteration, a warning sign, and a genuine strategic turning point.
How zigzag calculates pivot risk
The pivot risk score is the quantitative part of the customer feedback analysis. It is derived entirely from the completed customer responses your team has collected. Alongside this score, zigzag also provides qualitative customer feedback: for every question in your interview guide and for every hypothesis in your validation framework, the platform surfaces a written summary of what respondents actually said — so you can read the evidence behind the number directly.
The calculation starts from the priority score attached to each of your critical hypotheses. Each priority score equals the hypothesis’s risk rating multiplied by its impact rating, both assessed on a one-to-ten scale when the hypotheses were generated. A hypothesis rated nine for risk and nine for impact has a priority score of eighty-one; one rated three and three has a priority score of nine. High-priority hypotheses carry proportionally more weight in the final result.
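The weighting described above can be sketched in a few lines of Python. This is an illustration, not zigzag's actual implementation, and the hypothesis names are hypothetical:

```python
# Priority score = risk rating x impact rating, each on a 1-10 scale.
# The hypothesis statements below are made up for illustration.
ratings = {
    "Customers will pay for this": (9, 9),  # (risk, impact)
    "Users prefer a mobile app": (3, 3),
}

# Compute each hypothesis's priority score.
priority = {name: risk * impact for name, (risk, impact) in ratings.items()}

# The high-risk, high-impact hypothesis scores 81; the low one scores 9,
# so it carries proportionally far more weight in the final result.
```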
Zigzag’s AI then reads all completed customer responses and assigns each hypothesis a validation score on a zero-to-ten scale — zero meaning the hypothesis is clearly contradicted by what respondents said, ten meaning it is strongly confirmed. The exact formula used to combine all of this into a single Pivot Risk Score is:
Formula
Pivot Risk Score (%) = ( Σᵢ pᵢ × (1 − vᵢ / 10) ) / ( Σᵢ pᵢ ) × 100
What each variable means
- pᵢ — the priority score of hypothesis i, equal to its risk rating × its impact rating (each independently rated 1–10, so pᵢ ranges from 1 to 100)
- vᵢ — the AI-assessed validation score for hypothesis i based on all customer responses, on a scale of 0 to 10. A score of 0 means the hypothesis is fully contradicted by the evidence; a score of 10 means it is fully confirmed.
- (1 − vᵢ / 10) — converts validation strength into a rejection weight. A fully validated hypothesis (vᵢ = 10) contributes zero pivot risk. A fully invalidated hypothesis (vᵢ = 0) contributes its entire priority weight. Everything in between scales linearly.
- The sum of all weighted rejection contributions is divided by the total priority score across all hypotheses, then multiplied by 100. The result is a percentage: 0–29% means your critical assumptions are mostly holding up; 30–49% indicates meaningful weakness worth monitoring closely; 50% and above crosses the pivot threshold and is strong evidence the current direction deserves serious rethinking.
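Putting the formula and the bands together, a minimal Python sketch might look like this. Again, this is an illustration of the published formula, not zigzag's internal code, and the function names are assumptions:

```python
def pivot_risk_score(hypotheses):
    """Weighted share of total priority that the evidence failed to validate.

    hypotheses: list of (priority_score, validation_score) pairs, where
    priority_score = risk x impact (1-100) and validation_score is the
    AI-assessed 0-10 rating. Returns a percentage from 0 to 100.
    """
    total_priority = sum(p for p, _ in hypotheses)
    rejected = sum(p * (1 - v / 10) for p, v in hypotheses)
    return rejected / total_priority * 100

def recommendation_band(score):
    """Map a score to the bands described above."""
    if score >= 50:
        return "pivot"        # strong evidence the direction needs rethinking
    if score >= 30:
        return "monitor"      # meaningful weakness worth watching closely
    return "stay the course"  # critical assumptions mostly holding up

# One weak top-priority hypothesis (81, validated 3/10) dominates one
# solid secondary hypothesis (9, validated 8/10): score is roughly 65.
score = pivot_risk_score([(81, 3), (9, 8)])
```

Note how the weak priority-81 hypothesis alone pushes the result past the 50% pivot threshold, even though the secondary hypothesis validated well. That asymmetry is the point of the weighting.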
What you see in the product
When available, the Pivot Recommender appears at the top of the survey analytics experience as a dedicated Pivot Risk Analysis card. It shows the overall risk score, a recommendation band, and a short plain-language explanation of what that score implies.
If you expand the analysis, you can see the hypothesis-by-hypothesis breakdown. Each hypothesis is labeled with its validation score, an explanation of the reasoning, and its contribution to overall pivot risk. That matters because not all weak signals are equal. The most important insight is often which single assumption is driving the majority of the risk.
The full analysis is available on paid plans and can be re-run as more responses come in. Zigzag also caches the latest result so you are not starting from zero every time you reopen the dashboard.
How to interpret a pivot signal without overreacting
A pivot recommendation is not an instruction from the machine. It is a prompt to think more rigorously. You still need to look at the underlying responses, the hypothesis rationales, and the actual shape of the weakness being identified.
Sometimes the right move is a full pivot: a new customer segment, a different problem emphasis, or a narrower solution. Sometimes the right move is more modest: refine your positioning, change the order in which you build features, or run a better-targeted next round of interviews.
What matters is that you do not treat all negative feedback as equal. If the analysis shows that your top-priority willingness-to-pay hypothesis is weak, that deserves a different response from a secondary feature preference that customers simply found unexciting.
What to do after you run it
If the score is low, keep going — but keep collecting evidence. Low pivot risk does not mean perfect certainty. It means your most important assumptions are currently holding up well enough to continue refining and building.
If the score lands in monitor territory, tighten the next round of validation around the weakest high-priority hypotheses. Ask better follow-up questions, recruit a better-targeted set of respondents, and look for clearer evidence rather than jumping straight into rebuilding the business.
If the score crosses the pivot threshold, go back to your Lean Canvas and update the sections that no longer match reality. That is exactly where the Consistency Checker becomes useful: once you change the core story, it helps you identify which downstream sections and assets now need to be brought back into line.