Understanding Your Results
You’ve collected baseline data, shipped your feature, and added post-release measurements. Now you’re looking at the results and asking the most important question: Is this change real, or just noise?
VEKTIS answers that question by comparing your post-release data against your baseline — essentially asking “is this result outside the range of what we’d normally expect?”
The before-and-after comparison
Every impact calculation works the same way:
- Baseline average — VEKTIS calculates the average of all your baseline data points. This is your “normal.”
- Normal range — Using the spread in your baseline data, VEKTIS calculates how much the metric normally bounces around day-to-day.
- Comparison — Each post-release measurement is compared against that normal range. The further outside the range, the more confident VEKTIS is that something real happened.
Think of it like a weather forecast: if the temperature is usually between 60°F and 80°F, a reading of 72°F is unremarkable. But a reading of 95°F? That’s clearly unusual.
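The three steps above can be sketched in a few lines. This is a hypothetical illustration, not VEKTIS's actual formula; it assumes the "normal range" is the baseline average plus or minus two standard deviations:

```python
# Hypothetical sketch of a baseline comparison. VEKTIS's real range
# calculation is not documented here; this assumes mean ± 2 std devs.
from statistics import mean, stdev

def normal_range(baseline: list[float]) -> tuple[float, float]:
    """Estimate the band a metric normally stays within."""
    avg = mean(baseline)       # baseline average: your "normal"
    spread = stdev(baseline)   # how much the metric bounces day-to-day
    return avg - 2 * spread, avg + 2 * spread

# Using the weather analogy: daily temperature readings (°F)
baseline = [60, 72, 68, 75, 80]
low, high = normal_range(baseline)
print(low <= 72 <= high)   # True  — unremarkable reading
print(low <= 95 <= high)   # False — clearly unusual
```

The further a post-release reading falls outside that band, the stronger the evidence that something real happened.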
Impact scores
VEKTIS classifies every post-release result into one of three impact zones based on a 0–100 score:
| Zone | Score range | What it means | What to do |
|---|---|---|---|
| Signal Detected | 0–39 | Within the normal range — could be regular fluctuation | Keep measuring — not enough evidence yet |
| Confirmed Impact | 40–69 | Outside the normal range — likely a real change | Promising signal — a few more data points will confirm |
| Significant Impact | 70–100 | Well outside the normal range — strong evidence this is real | Act on it — this almost certainly reflects a real change |
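The score ranges in the table translate directly into a lookup. A minimal sketch (the function name is illustrative; the boundaries come from the table above):

```python
def impact_zone(score: int) -> str:
    """Map a 0-100 impact score to its zone, per the table above."""
    if score <= 39:
        return "Signal Detected"     # within normal range
    if score <= 69:
        return "Confirmed Impact"    # outside normal range
    return "Significant Impact"      # well outside normal range

print(impact_zone(25))   # Signal Detected
print(impact_zone(55))   # Confirmed Impact
print(impact_zone(82))   # Significant Impact
```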
How to read impact indicators
Impact indicators combine the zone with the direction relative to your target:
| Indicator | Meaning |
|---|---|
| Significant Impact (green) | Strong evidence of impact in the direction you wanted |
| Significant Impact (red) | Strong evidence of change, but opposite your target |
| Confirmed Impact (green) | Likely a real change in the right direction |
| Confirmed Impact (red) | Trending opposite of your target — worth investigating |
| Signal Detected (gray) | No clear signal yet — keep measuring |
Interpretation messages
Along with the score, VEKTIS provides a plain-language interpretation of each result:
- “Looks like normal fluctuation” — The result is within the expected range. Nothing unusual here.
- “Something might be happening” — The result is notable but not conclusive. Keep measuring.
- “This looks real” — The change is clearly outside the normal range. You can start making decisions based on this.
- “Strong signal — this isn’t random” — Very strong evidence. This is almost certainly a real change, not noise.
Why baseline strength matters
The quality of your results depends directly on the quality of your baseline data. VEKTIS needs a clear picture of “normal” to detect abnormal.
| Baseline strength | Data points | Effect on results |
|---|---|---|
| Insufficient | 0–1 | Can’t calculate anything — need at least 2 data points |
| Weak | 2 | Results are possible but rough — the “normal range” is estimated from very little data |
| Moderate | 3 | Decent results, but more data makes them more reliable |
| Strong | 4+ | Reliable results — VEKTIS has a clear picture of normal fluctuation |
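The tiers above are just a count of baseline data points, so they can be expressed as a simple check (illustrative function name, thresholds from the table):

```python
def baseline_strength(n_points: int) -> str:
    """Classify baseline strength by data-point count, per the table above."""
    if n_points < 2:
        return "Insufficient"   # can't calculate anything yet
    if n_points == 2:
        return "Weak"           # normal range estimated from very little data
    if n_points == 3:
        return "Moderate"       # decent, but more data helps
    return "Strong"             # clear picture of normal fluctuation

print(baseline_strength(1))   # Insufficient
print(baseline_strength(5))   # Strong
```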
Direction matters
When you create a dev item, you set a target direction — whether you expect the metric to increase or decrease. VEKTIS uses this to color-code results:
- Change in your target direction → Green indicator (this is what you wanted)
- Change opposite your target → Red indicator (something may need attention)
- No clear change → Gray indicator
For example, if your target is to decrease page load time and the metric drops significantly, you’ll see a green Significant Impact indicator — even though the number went down.
The delta is simply how much the metric changed: the difference between your post-release measurement and your baseline average.
- Positive delta with an “increase” target → moving in the right direction
- Negative delta with a “decrease” target → moving in the right direction
- The magnitude of the delta alone doesn’t determine impact score — what matters is whether it’s outside the normal range
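Putting delta and target direction together, the color logic can be sketched like this. It is a hypothetical illustration (the `in_normal_range` flag stands in for the normal-range comparison described earlier, which VEKTIS computes internally):

```python
def indicator_color(delta: float, target: str, in_normal_range: bool) -> str:
    """Color-code a result: target is 'increase' or 'decrease'.

    Hypothetical sketch of the direction rules described above.
    """
    if in_normal_range:
        return "gray"   # no clear change — keep measuring
    # A positive delta matches an "increase" target; negative matches "decrease"
    right_direction = (delta > 0) == (target == "increase")
    return "green" if right_direction else "red"

# Page-load example: load time dropped well below the normal range
print(indicator_color(-300.0, "decrease", in_normal_range=False))  # green
```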
Tips for reliable results
- Collect enough baseline data — 4+ data points is the sweet spot
- Space out your measurements — Weekly data points capture natural variation better than daily ones crammed into a few days
- Keep measuring after release — One post-release data point is a snapshot; 3–4 reveal a trend
- Use consistent measurement methods — If you measure conversion rate one way for baseline, measure it the same way after release
- Don’t cherry-pick — Record all your measurements, not just the good ones