What the Survey Analytics Dashboard is for
The Survey Analytics Dashboard is the working area that opens once you start receiving survey responses. It is not a separate validation philosophy from the Survey Analysis guide. It is the product surface where zigzag organizes the evidence you have collected so you can make decisions from it.
That distinction matters because many founders understand, in principle, that they should analyze responses carefully, but they still end up doing the analysis informally in their heads. They read a few answers, remember the emotionally memorable ones, and lose sight of the full set. The dashboard exists to stop that from happening.
In practice, it brings together response health, question-level patterns, hypothesis-level interpretation, and individual customer answers in one place. As more responses arrive, the view refreshes automatically, so you are working from the current state of the evidence rather than a one-time export.
How to read the top of the dashboard
At the top, zigzag surfaces the broadest signals first. The Pivot Recommender appears above everything else once completed responses exist, because the highest-level question is whether your evidence supports continuing on the current path or whether important assumptions are still weak. Under that, the stats cards show how healthy the response process itself is: how many surveys you sent, how many were completed, the completion rate, and the average completion time.
These metrics are more important than they look. A low completion rate can indicate that the wrong people were invited, that the survey is too long, or that the questions are not engaging enough to finish. A very short completion time can mean respondents are skimming rather than thinking carefully. Before you draw big strategic conclusions, make sure the data collection itself looks believable.
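The arithmetic behind the stats cards is simple, and it can be worth sanity-checking it yourself against an export. The sketch below shows that arithmetic on made-up records; the field names and statuses are hypothetical, not zigzag's actual data model.

```python
# Hypothetical response records. Zigzag computes these cards for you;
# this sketch only illustrates the arithmetic behind them.
responses = [
    {"status": "completed", "minutes": 6.5},
    {"status": "completed", "minutes": 1.2},   # suspiciously fast: likely skimming
    {"status": "sent", "minutes": None},
    {"status": "in_progress", "minutes": None},
]

sent = len(responses)
completed = [r for r in responses if r["status"] == "completed"]

completion_rate = len(completed) / sent                               # 2 / 4 = 0.5
avg_minutes = sum(r["minutes"] for r in completed) / len(completed)   # (6.5 + 1.2) / 2

print(f"Sent: {sent}, Completed: {len(completed)}")
print(f"Completion rate: {completion_rate:.0%}")        # 50%
print(f"Average completion time: {avg_minutes:.1f} min")
```

Note that the average only covers completed responses, which is exactly why a healthy-looking average can still hide skimmers: one very fast completion drags the mean down without showing up anywhere else on the card.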
This top section helps you separate two different problems: poor evidence collection versus poor evidence outcomes. If almost nobody completed the survey, you may need a better validation process before you need a pivot. If many people completed it and the results are still weak, that is a more meaningful signal.
Using the hypothesis analysis panels
Below the summary cards, the dashboard connects customer evidence back to the critical hypotheses you defined earlier. Each hypothesis card shows the statement being tested, its context, such as validation method or success criteria, and any AI-generated analyses that have already been produced from the responses.
This is where the dashboard becomes more than a reporting screen. Instead of only seeing that customers answered Question 4 in a certain way, you can ask the more useful question: what does that answer pattern say about Hypothesis 2 or Hypothesis 5? Zigzag also shows which hypotheses are contributing most to pivot risk once pivot analysis has been run, which helps you identify the specific assumptions that deserve attention first.
Use this section to avoid vague conclusions like "people seemed interested" or "feedback was mixed." The right output from validation is a sharper statement such as "our pricing willingness-to-pay hypothesis is still weak" or "customers consistently confirmed the urgency of the workflow problem." That level of precision is what makes later product and brand decisions defensible.
Why the question-by-question view matters
The dashboard also breaks the survey down question by question. For each prompt, you can expand the card to see the exact wording, the purpose of the question, how many people answered it, which hypotheses it was testing, and the individual responses themselves.
This is useful because poor validation is often a question-design problem rather than a market problem. If one question produces scattered, low-value answers while another produces clear and repeated patterns, that tells you something about how well the question framed the issue. The dashboard makes that visible without forcing you to piece it together manually.
It also helps with follow-up work. If one question surfaces surprising pain points or language you had not heard before, you can take that insight straight back into interviews, Lean Canvas edits, or revised hypotheses. Good validation is iterative, and this view helps you see exactly where the next iteration should focus.
Using the customer response table properly
At the bottom of the dashboard, zigzag keeps the raw material visible. The Customer Responses table shows who replied, their company or industry context when available, and whether each survey is pending, sent, in progress, or completed. Completed responses can be expanded so you can inspect answers question by question for a specific person.
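Conceptually, the table is a list of per-person records where only completed rows carry inspectable answers. As a rough sketch, with illustrative names and statuses that are assumptions rather than zigzag's schema:

```python
# Hypothetical rows mirroring the Customer Responses table.
# Field names and status values are illustrative, not zigzag's schema.
rows = [
    {"name": "A. Chen", "industry": "Logistics", "status": "completed",
     "answers": {"Q1": "This is a daily pain for our dispatch team.",
                 "Q2": "We would pay monthly if it saved an hour a day."}},
    {"name": "B. Ortiz", "industry": "Retail", "status": "in_progress", "answers": {}},
    {"name": "C. Patel", "industry": "Logistics", "status": "sent", "answers": {}},
]

# Only completed responses can be expanded question by question.
for row in rows:
    if row["status"] != "completed":
        continue
    print(f"{row['name']} ({row['industry']})")
    for question, answer in row["answers"].items():
        print(f"  {question}: {answer}")
```

The design point this mirrors is that pending, sent, and in-progress rows still matter for the health metrics at the top of the dashboard, but only completed rows contribute evidence you can read.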
That matters because aggregate patterns are powerful, but they can hide nuance. Sometimes a response strongly challenges your assumptions even if it is not yet the majority pattern. Sometimes one segment is reacting very differently from another. Looking at individual transcripts lets you test whether the summary pattern is actually telling the full story.
A good habit is to move back and forth between aggregate and individual views. Use the dashboard to spot patterns, then open specific responses to understand why those patterns are showing up. The combination is stronger than either view alone.
How to use the dashboard as a decision tool, not a report
The dashboard is most useful when you treat it as a working decision surface rather than a place to admire charts. Once you can see which hypotheses are strongest, which questions were most revealing, and which responses are challenging your assumptions, the next step is to update something: your validation framework, your Lean Canvas, your messaging, or your next round of research.
In zigzag, this usually means moving from evidence into action. If the results support your direction, you can carry those findings into Customer Discovery Findings, Brand Elements, and later MVP Requirements. If the results are mixed or weak, the Pivot Recommender and Consistency Checker help you decide whether to change course and how far that change needs to propagate through the project.
The point of the dashboard is not to make validation look polished. It is to make it usable. If you leave the page with a clearer idea of what has been confirmed, what remains uncertain, and what should happen next, you are using it correctly.