
Understanding Your Results

You’ve collected baseline data, shipped your feature, and added post-release measurements. Now you’re looking at the results and asking the most important question: Is this change real, or just noise?

VEKTIS answers that question by comparing your post-release data against your baseline — essentially asking “is this result outside the range of what we’d normally expect?”

Every impact calculation works the same way:

  1. Baseline average — VEKTIS calculates the average of all your baseline data points. This is your “normal.”
  2. Normal range — Using the spread in your baseline data, VEKTIS calculates how much the metric normally bounces around day-to-day.
  3. Comparison — Each post-release measurement is compared against that normal range. The further outside the range, the more confident VEKTIS is that something real happened.
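The three steps above can be sketched in a few lines of Python. This is illustrative only: VEKTIS doesn’t publish its exact thresholds, so we assume here that the “normal range” is the baseline mean plus or minus two standard deviations.

```python
from statistics import mean, stdev

def normal_range(baseline, k=2.0):
    """Steps 1 and 2: baseline average, plus a +/- k standard deviation band.

    k=2.0 is an illustrative assumption, not VEKTIS's actual setting.
    """
    avg = mean(baseline)
    spread = stdev(baseline)  # how much the metric bounces around day-to-day
    return avg - k * spread, avg + k * spread

def is_unusual(measurement, baseline, k=2.0):
    """Step 3: is a post-release measurement outside the normal range?"""
    low, high = normal_range(baseline, k)
    return measurement < low or measurement > high

baseline = [61, 72, 68, 75]       # hypothetical baseline data points
print(is_unusual(72, baseline))   # inside the band -> False
print(is_unusual(95, baseline))   # far outside -> True
```

The further a measurement falls outside the band, the more confident you can be that the change is real rather than noise.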

Think of it like a weather forecast: if the temperature is usually between 60°F and 80°F, a reading of 72°F is unremarkable. But a reading of 95°F? That’s clearly unusual.

VEKTIS classifies every post-release result into one of three impact zones based on a 0–100 score:

| Zone | Score range | What it means | What to do |
| --- | --- | --- | --- |
| Signal Detected | 0–39 | Within the normal range — could be regular fluctuation | Keep measuring — not enough evidence yet |
| Confirmed Impact | 40–69 | Outside the normal range — likely a real change | Promising signal — a few more data points will confirm |
| Significant Impact | 70–100 | Well outside the normal range — strong evidence this is real | Act on it — this almost certainly reflects a real change |
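The score-to-zone mapping is a simple lookup. A sketch, with the zone names and cutoffs taken directly from the table above:

```python
def impact_zone(score: int) -> str:
    """Map a 0-100 impact score to its zone, per the table above."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score < 40:
        return "Signal Detected"
    if score < 70:
        return "Confirmed Impact"
    return "Significant Impact"
```

So a score of 39 still reads as Signal Detected, while 40 crosses into Confirmed Impact.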

Impact indicators combine the zone with the direction relative to your target:

| Indicator | Meaning |
| --- | --- |
| Significant Impact (green) | Strong evidence of impact in the direction you wanted |
| Significant Impact (red) | Strong evidence of change, but opposite your target |
| Confirmed Impact (green) | Likely a real change in the right direction |
| Confirmed Impact (red) | Trending opposite of your target — worth investigating |
| Signal Detected (gray) | No clear signal yet — keep measuring |

Along with the score, VEKTIS provides plain-language interpretation of each result:

  • “Looks like normal fluctuation” — The result is within the expected range. Nothing unusual here.
  • “Something might be happening” — The result is notable but not conclusive. Keep measuring.
  • “This looks real” — The change is clearly outside the normal range. You can start making decisions based on this.
  • “Strong signal — this isn’t random” — Very strong evidence. This is almost certainly a real change, not noise.

The quality of your results depends directly on the quality of your baseline data. VEKTIS needs a clear picture of “normal” to detect abnormal.

| Baseline strength | Data points | Effect on results |
| --- | --- | --- |
| Insufficient | 0–1 | Can’t calculate anything — need at least 2 data points |
| Weak | 2 | Results are possible but rough — the “normal range” is estimated from very little data |
| Moderate | 3 | Decent results, but more data makes them more reliable |
| Strong | 4+ | Reliable results — VEKTIS has a clear picture of normal fluctuation |
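The baseline-strength tiers above reduce to a count of data points. A minimal sketch:

```python
def baseline_strength(n_points: int) -> str:
    """Classify baseline quality by data point count, per the table above."""
    if n_points < 2:
        return "Insufficient"
    if n_points == 2:
        return "Weak"
    if n_points == 3:
        return "Moderate"
    return "Strong"
```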

When you create a dev item, you set a target direction — whether you expect the metric to increase or decrease. VEKTIS uses this to color-code results:

  • Change in your target direction → Green indicator (this is what you wanted)
  • Change opposite your target → Red indicator (something may need attention)
  • No clear change → Gray indicator

For example, if your target is to decrease page load time and the metric drops significantly, you’ll see a green Significant Impact indicator — even though the number went down.
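Combining the zone with the target direction to produce a color-coded indicator might look like the sketch below. The function name and signature are hypothetical; `target` is assumed to be either `"increase"` or `"decrease"`, and `delta` is the change relative to baseline.

```python
def indicator(zone: str, delta: float, target: str) -> str:
    """Color an impact zone by whether the change matches the target direction."""
    if zone == "Signal Detected" or delta == 0:
        return f"{zone} (gray)"  # no clear change yet
    in_target_direction = (delta > 0) == (target == "increase")
    color = "green" if in_target_direction else "red"
    return f"{zone} ({color})"

# Page load time dropped 120 ms and the target was "decrease":
print(indicator("Significant Impact", -120.0, "decrease"))
# -> Significant Impact (green)
```

Note that green means “moved toward your target,” not “the number went up.”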

The delta is simply how much the metric changed: the difference between your post-release measurement and your baseline average.

  • Positive delta with an “increase” target → moving in the right direction
  • Negative delta with a “decrease” target → moving in the right direction
  • The magnitude of the delta alone doesn’t determine impact score — what matters is whether it’s outside the normal range
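The delta itself is just a subtraction against the baseline average. A quick sketch with hypothetical load-time numbers:

```python
from statistics import mean

def delta(post_release: float, baseline: list[float]) -> float:
    """Difference between a post-release measurement and the baseline average."""
    return post_release - mean(baseline)

print(delta(420, [500, 510, 490, 500]))  # load time fell by 80 ms -> -80.0
```

A large delta that stays inside the normal range still scores low; a modest delta well outside the range scores high.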
To get the most reliable results:

  1. Collect enough baseline data — 4+ data points is the sweet spot
  2. Space out your measurements — Weekly data points capture natural variation better than daily ones crammed into a few days
  3. Keep measuring after release — One post-release data point is a snapshot; 3–4 reveal a trend
  4. Use consistent measurement methods — If you measure conversion rate one way for baseline, measure it the same way after release
  5. Don’t cherry-pick — Record all your measurements, not just the good ones