---
Interim standings after SIX snowfall
forecasting Contests … as of 07-MAR-18.
Under the ‘two-thirds’ rule … forecasters who have entered at least FOUR forecasts
are included in this interim summary.
---
To qualify for ranking in the
Interim and final ‘End-of-Season’ standings … a forecaster must enter at least
two-thirds of all Contests. If a forecaster has made more than enough
forecasts to qualify for ranking … only the lowest SUMSQ Z-scores necessary to
qualify are used in computing the average. IOW … if you made nine
forecasts … only your six best SUMSQ Z-scores are used to evaluate your
season-to-date performance. You can think of it as dropping your
worst quiz scores before your final grade is determined … a short sketch of
the calculation follows the list below. The reason we
have this rule is to:
1) make it possible to
miss entering a forecast or two throughout the season and still be eligible for
Interim and ‘End-of-Season’ ranking and
2) encourage forecasters
to take on difficult and/or late-season storms without fear of how a
bad forecast might degrade their overall 'season-to-date' performance score(s).
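For anyone curious … here is a minimal sketch (in Python … the names and numbers are
illustrative only, not the Contest's actual code) of how the 'keep only the best
scores needed to qualify' rule works:

import math

def season_score(z_scores, contests_held, qualify_fraction=2/3):
    """Average only the lowest (best) Z-scores needed to qualify; drop the rest."""
    needed = math.ceil(contests_held * qualify_fraction)  # e.g. 6 Contests -> 4 forecasts
    if len(z_scores) < needed:
        return None                                       # not enough forecasts to be ranked
    best = sorted(z_scores)[:needed]                      # keep only the 'needed' best scores
    return sum(best) / needed

# Example … nine forecasts entered in a nine-Contest season -> only the six best count.
print(season_score([0.4, -0.2, 1.3, 0.0, 0.8, -0.5, 2.1, 0.6, -0.1], contests_held=9))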
---
The average normalized ‘SUMSQ
error’ is the Contest's primary measure of forecaster performance.
This metric measures how
well the forecaster's expected snowfall 'distribution and magnitude' for
_all_ forecast stations captured the 'distribution and magnitude' of _all_
observed snowfall amounts.
A forecaster with a lower average
SUMSQ Z-score has made more skillful forecasts than a forecaster
with a higher average SUMSQ Z-score.
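As a rough illustration … assuming the 'SUMSQ error' is the sum of the squared
(forecast minus observed) differences over all stations … and that each forecaster's
SUMSQ error is normalized against the mean and standard deviation of the whole field
for that Contest … the calculation might look like this (illustrative Python only):

from statistics import mean, stdev

def sumsq_error(forecast, observed):
    """Sum of squared station-by-station errors for one forecaster."""
    return sum((f - o) ** 2 for f, o in zip(forecast, observed))

def sumsq_z_scores(all_forecasts, observed):
    """Normalize each forecaster's SUMSQ error to a Z-score across the field."""
    errors = {name: sumsq_error(fcst, observed) for name, fcst in all_forecasts.items()}
    mu, sigma = mean(errors.values()), stdev(errors.values())
    return {name: (err - mu) / sigma for name, err in errors.items()}

observed = [4.0, 7.5, 2.0, 0.5]                  # observed snowfall at four stations
entries = {"A": [5.0, 6.0, 3.0, 1.0],            # each forecaster's station forecasts
           "B": [2.0, 9.0, 1.0, 0.0],
           "C": [4.5, 7.0, 2.5, 0.5]}
print(sumsq_z_scores(entries, observed))         # lower Z-score = more skillful forecast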
---
The 'Storm Total Precipitation
error’ statistic is the absolute arithmetic difference between a
forecaster's sum-total snowfall for all stations and the observed sum-total
snowfall. This metric … by itself … is
not a meaningful measure of skill … but can provide additional insight into
forecaster bias.
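A quick sketch of that calculation (illustrative Python):

def storm_total_error(forecast, observed):
    """Absolute difference between total forecast snowfall and total observed snowfall."""
    return abs(sum(forecast) - sum(observed))

# The sign of sum(forecast) - sum(observed) … before taking the absolute value …
# is what hints at bias: positive means the event was over-forecast overall.
print(storm_total_error([5.0, 6.0, 3.0, 1.0], [4.0, 7.5, 2.0, 0.5]))   # -> 1.0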
---
The 'Total Absolute Error'
statistic is the sum of the absolute values of your forecast errors … regardless of
whether you over-forecast or under-forecast at each station.
This metric measures the overall magnitude
of a forecast's errors.
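In sketch form (illustrative Python):

def total_absolute_error(forecast, observed):
    """Sum of absolute station-by-station errors … over- and under-forecasts count alike."""
    return sum(abs(f - o) for f, o in zip(forecast, observed))

print(total_absolute_error([5.0, 6.0, 3.0, 1.0], [4.0, 7.5, 2.0, 0.5]))   # -> 4.0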
---
The 'Average Absolute Error'
is the forecaster's ‘Total Absolute Error’ divided by the number of
stations where snow was forecast or observed.
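Again in sketch form (illustrative Python … the way zero-snow stations are excluded
here is my reading of 'stations where snow was forecast or observed'):

def average_absolute_error(forecast, observed):
    """Total Absolute Error divided by the number of stations with forecast or observed snow."""
    pairs = [(f, o) for f, o in zip(forecast, observed) if f > 0 or o > 0]
    total = sum(abs(f - o) for f, o in pairs)        # the Total Absolute Error
    return total / len(pairs)

print(average_absolute_error([5.0, 6.0, 3.0, 0.0], [4.0, 7.5, 2.0, 0.0]))   # three stations count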
---
The ‘RSQ error’ statistic is
a measure of how well the forecast captured the variability of the observed
snowfall.
Combined with the SUMSQ error statistic … RSQ provides added information about
how well the forecaster's 'model' performed.
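As a sketch … assuming 'RSQ' is the square of the Pearson correlation between the
forecast and observed station amounts (as the spreadsheet RSQ function computes) …
the statistic could be calculated like this (illustrative Python … statistics.correlation
requires Python 3.10 or later):

from statistics import correlation

def rsq(forecast, observed):
    """Square of the Pearson correlation between forecast and observed amounts."""
    return correlation(forecast, observed) ** 2

# An RSQ near 1 means the forecast tracked the station-to-station variability of the
# observed snowfall well … even if the magnitudes were off.
print(rsq([5.0, 6.0, 3.0, 1.0], [4.0, 7.5, 2.0, 0.5]))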