Welcome to the first NFL PT Awards.  The purpose of these
awards is to help publicize those systems that have been
superior in various qualities of interest.
Most awards will be based entirely on the numbers I've
monitored with my prediction tracker web pages.  I have
followed the weekly performance of 25+ computer rating systems
over the course of the past two seasons.  Since I didn't give
any awards last year, I will mention last year's winners where
appropriate.


  MOST GAMES PICKED CORRECTLY (Entire Season)

Winner: CPA Rankings, Steve Wrathell

 The simplest and possibly most important way of measuring the
predictive ability of a system is to count the number of games where
the winner is correctly predicted.  The Prediction Tracker page followed
predictions on all 259 games played in the 2000 NFL season.
 I'm going to give this award to Steve Wrathell's CPA Rankings because
he picked the most games correctly for the full season.  I get to
give myself an asterisk or an honorable mention here because my
PerformanZ Ratings using a team specific home field advantage actually
had the best picking percentage.  But since I started it several weeks
into the season I have to resist the temptation to give myself this award.
 The CPA Rankings finished with a record of 172-87, 66.4%.  This was
two games ahead of ARGH and 4 games ahead of Ken Massey.  I'm sure my
new enhanced PerformanZ would have given CPA a run had I thought of it
sooner.  CPA's 172 correct games is one more than Ken Massey, who
was last year's winner with 171.  My PerformanZ rate of 68.4% is a little
lower than last year's best of 68.9% by the scoring efficiency method.
Scoring efficiency's mark last year was also based on a partial season
(16 out of 17 weeks).


  BEST AGAINST THE SPREAD (Entire Season)

Winner: Yourlinx, Ray Waits

 Picking against the spread needs no introduction.  This season the best
system picking against the spread was the Yourlinx ratings, by Ray Waits.
He claimed not to be trying to beat the spread, but that is what he did
with an excellent 57.6% against the line.  ARGH Power Ratings gets an
honorable mention with a very close second place at 57.4%.
The computer systems did much better picking against the spread in
the NFL than they did with the NCAA.  20 of the 24 systems did better
than 50%.  This year 5 systems did better than the leader did
last year.  Last year the best was PerformanZ Ratings at 54.8%.


  SMALLEST MEAN ABSOLUTE ERROR (Entire Season)

Winner: Vegas Line

 Mean absolute error is the average of the absolute value of the difference
between the predicted margin and the actual game outcome.  As in the NCAA
this year, the system with the lowest mean absolute error was the Vegas
spread.  The spread really does a good job of predicting the final outcome.
No system has beaten it over the last two seasons for college or the pros.
However, the systems do a better job in the NFL than they did in the NCAA.
Here several of the systems at least come close to matching the spread's
performance, and we have cases where they beat the spread over the second
half of the season.  For the season as a whole the top computer system was
the Pigskin Index at 10.71.  That isn't much behind the spread's 10.63.
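 As a minimal sketch, mean absolute error can be computed like this (the
margins below are made up for illustration, not actual 2000 data):

```python
# Hypothetical predicted margins and actual outcomes, stated from the
# home team's point of view (positive = home team wins by that many).
predicted = [3.5, -7.0, 10.5, 1.0]
actual = [7, -3, 14, -6]

# Mean absolute error: average of |prediction - outcome| over all games.
mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
print(mae)  # 4.5 points per game for these made-up numbers
```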

  SMALLEST BIAS (Entire Season)

Winner: Pigskin Index

 If we take the error for each game in relation to the home team to
be the prediction minus the actual game outcome, then the mean of these
errors is the bias.  A positive value would mean that the system tends
to give too many points to the home team, and a negative value means
a system tends to give too many points to the road team.  So the closer
to zero the better.
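 With the same sign convention, a quick sketch (hypothetical numbers again):

```python
# Hypothetical predicted and actual home-team margins.
predicted = [3.5, -7.0, 10.5, 1.0]
actual = [7, -3, 14, -6]

# Bias: mean of (prediction - outcome).  Positive means the system gives
# too many points to the home team, negative too many to the road team.
errors = [p - a for p, a in zip(predicted, actual)]
bias = sum(errors) / len(errors)
print(bias)  # -1.0: these made-up picks shortchange the home team a bit
```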
 The smallest bias over the entire season belonged to the Pigskin Index,
which was very accurate with a bias of 0.027.  I get to give myself
another honorable mention here as PerformanZ was a very close second
at 0.048.  The Super Bowl actually decided the outcome of this category.
Last year this award would have been given to the Flyman Ratings,
which had a bias of -0.038.


Winner: Pigskin Index

 Mean square error is the average over all games of the squared error,
where the error is the difference between the prediction and the game outcome.
Squared error differs from absolute error in that there is a greater
penalty the farther off the prediction is.  Mean square error also has
the cool property of being equal to the variance of the errors
plus the bias squared.
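 That decomposition is easy to check numerically (hypothetical errors):

```python
# Hypothetical per-game errors (prediction minus outcome).
errors = [-3.5, -4.0, -3.5, 7.0]
n = len(errors)

mse = sum(e * e for e in errors) / n
bias = sum(errors) / n
variance = sum((e - bias) ** 2 for e in errors) / n  # population variance

print(mse)                  # 22.375
print(variance + bias ** 2) # 22.375 -- matches, as claimed above
```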
 The smallest mean square error for the entire season goes to the Pigskin
Index at 178.2.  The Vegas line had the smallest absolute error but only
managed to come in second here.

                        SECOND HALF AWARDS

 I like to look at the results over the second half of the season.  The
systems have had time to become 'burned in' to the season's data.  This is
when the systems show how good they are rather than how accurate the
preseason rankings were.  Second half data consists of all games from week
9 through the Super Bowl.


  MOST GAMES PICKED CORRECTLY (Second Half)

Winner: PerformanZ Ratings, Todd Beck
Winner: Scoring Efficiency

  Two methods tied for the most predicted winners over the second half of
the season.  The first is my own PerformanZ Ratings using a team specific home
field advantage.  This system was only an experiment this year to test whether
it was better than using a uniform home field advantage.  It turned out quite a bit
better, so expect to see this version replace my old standard next season.
Sometime in the off season I will add the HFA's to the website.
The second system is more of a method of predicting outcomes than a system.
I based it on a technique that some professional gamblers use.  I call it
scoring efficiency because in its simplest form it would be based on points
per yard of offense/defense.
  Both of these systems were at 67.6% over the second half.  They led by a
relatively large margin of 4 games over the nearest competition.
The going was a little harder than last year, when 3 systems, Ken Massey, Ed Kambour
and the Pythagorean Ratings, all were at 69.7%.


  BEST AGAINST THE SPREAD (Second Half)

Winner: Pythagorean Ratings

  The system that beat the spread most often over the second half of the
season was the Pythagorean Ratings at 56.5%.  These ratings are another system
that I maintain.  They are based on Bill James' pythagorean theory of baseball:
points for squared divided by (points for squared plus points against squared).
This quantity gives a team's expected winning percentage.  As you can see from my
site this can be very accurate, often matching a team's actual winning percentage.
  Last season the best against the spread for the second half was least squares
regression using a team specific home field, at 56.2%.
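  The formula is simple enough to sketch (the team totals below are made
up for illustration, not real 2000 figures):

```python
def pythagorean(points_for, points_against):
    """Expected winning percentage: PF^2 / (PF^2 + PA^2)."""
    return points_for ** 2 / (points_for ** 2 + points_against ** 2)

# A hypothetical team that scored 400 points and allowed 250:
pct = pythagorean(400, 250)
print(round(pct, 3))       # 0.719
print(round(pct * 16, 1))  # about 11.5 expected wins in a 16 game season
```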


Winner: Pythagorean Ratings

  See the explanation of the Pythagorean Ratings above.  Pythagorean's average
absolute error was about 10.46 per game.  This is notably lower than the
second place system at 10.69, especially considering half the systems had
values over 11.  The values for the second half actually tend to be larger
than they are for the full season.  So in this case our predictions don't
seem to be improving as the year goes on; they are getting worse.
  I don't have the numbers, but the Pythagorean Ratings led this category
last year as well.

  SMALLEST BIAS (Second Half)

Winner: Flyman Performance Ratings

  I think the results in this category reflect a quirk of the NFL 2000 schedule.
Almost all of the values are between -1 and -2, meaning on average too few points
were given to the home team.  The road teams got off to a very strong start in
2000.  Some systems even had negative home field advantages very early in the
season.  The home teams then started balancing things out in the second half of
the season.  The Flyman had the only second half bias smaller than 1 point in
magnitude, with a value of -.50.  This is a result of his use of a global home
factor of 4, compared to many using 3.
  Looking at last year's numbers I was surprised to see the same pattern.  We also
had the same winner last season, as The Flyman's second half bias was a good
two points better than anyone else's at -.74.


  SMALLEST MEAN SQUARE ERROR (Second Half)

Winner: Pythagorean Ratings

  The Pythagorean method picks up another second half award.  Has anyone ever seen
Bill James write about this as applied to football?  The football pythagorean theory
produces the lowest mean square error over the second half of the season at 166.
Here the second half values are much lower than the entire season totals.
The Pythagorean Ratings would have won this category last year as well.

  Best Predictive System (Entire Season)

Winner:  Ed Kambour Football Ratings, Ed Kambour

  This came as somewhat of a surprise.  To come up with the award I gave points for
each category.  The system with the most points in the 'entire season' data is the
winner.  In an extremely close vote Ed Kambour's Football Ratings came out on top.
The numbers for the top 4:
  1. Kambour  67
  2. ARGH     66
  3. CPA      64
  3. YourLinx 64
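
  The article doesn't spell out the exact points scheme, but a tally like
this can be sketched with a simple rank-based scoring rule (the category
standings below are hypothetical, not the actual 2000 results):

```python
# Each category lists systems best-first; first place earns the most
# points, second one fewer, and so on.  All names/standings are made up.
categories = {
    "winners": ["CPA", "ARGH", "Kambour"],
    "spread":  ["YourLinx", "ARGH", "Kambour"],
    "abs_err": ["Kambour", "CPA", "YourLinx"],
}

points = {}
for ranking in categories.values():
    for rank, system in enumerate(ranking):
        points[system] = points.get(system, 0) + len(ranking) - rank

print(sorted(points.items(), key=lambda kv: -kv[1]))
```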

  Best Predictive System (Second Half)

Winner: Pythagorean Ratings

  This award was also determined by giving points for each of the above
categories and summing for each system.  For the second half there was
no competition.  The Pythagorean Ratings ended up with almost twice as
many points as the 2nd place system, scoring efficiency.

 Best Predictive System Overall in 2000.

Winner: Scoring Efficiency

  Best overall considers both full season and second half data.
Since the second half was only half the season I only gave it
half the weight of the full season.

  1. Scoring Efficiency  63
  2. Pythagorean Ratings 60
  3. PerformanZ w/ HFA   56
  4. Vegas Line          50
  5. CPA                 46
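
  That weighting can be sketched as follows (the point totals here are
hypothetical, not the tallies in the table above):

```python
# Hypothetical category-point totals for two systems.
full_season = {"Scoring Efficiency": 40, "Pythagorean Ratings": 36}
second_half = {"Scoring Efficiency": 20, "Pythagorean Ratings": 24}

# Second-half points count at half the weight of full-season points.
overall = {s: full_season[s] + 0.5 * second_half[s] for s in full_season}
print(overall)  # {'Scoring Efficiency': 50.0, 'Pythagorean Ratings': 48.0}
```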

                RETRO AWARDS

  The season is long gone now, but I wanted to finish this off
before baseball season starts up.
  The retrodictive categories are based on each system's final
standings and measure things by applying those final ratings
to the entire season in retrospect.

  Most Retrodictive Wins

Winner: CPA Rankings, Steve Wrathell

  The best retrodiction record this past season goes to Steve Wrathell's
CPA Rankings.  CPA also won this award in the NCAA.  CPA's
retro record was 189-70, or about 73%.  That was three games better than
second place Ken Massey.
  I also have to make note of my system.  After the season was over I plugged
my PerformanZ using a team specific home field advantage into the program and
it came out even better than CPA at 192-67.  So look out next year.

  Smallest Retrodictive Absolute Error

Winner:  Least Squares using a team specific home field advantage.

  This category is interesting just to see who could come close to matching
the regression methods.  What least squares does is minimize the sum of the
squared error terms, while the least absolute error regression rankings
minimize the sum of the absolute errors.  Least squares using individual home
field factors ended up with the lowest average absolute error at 9.27,
and least absolute regression was second at 9.4.  I would suspect that
a least absolute value regression using team specific home field factors
would be about as good as you could get here.
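
  A minimal sketch of the least squares approach, assuming the usual
rating model (home rating minus away rating plus a home edge approximates
the margin), on a tiny made-up schedule:

```python
import numpy as np

# Hypothetical schedule: (home team, away team, home margin of victory).
games = [(0, 1, 7), (1, 2, -3), (2, 0, 10), (0, 2, -4), (1, 0, 3)]
n_teams = 3

# One row per game: +1 for the home team, -1 for the away team,
# and a final column for a shared home field advantage term.
X = np.zeros((len(games), n_teams + 1))
y = np.zeros(len(games))
for i, (home, away, margin) in enumerate(games):
    X[i, home], X[i, away], X[i, n_teams] = 1.0, -1.0, 1.0
    y[i] = margin

# Least squares minimizes the sum of squared prediction errors.
# (Ratings are only identified up to an additive constant.)
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
ratings, hfa = coefs[:n_teams], coefs[n_teams]
```

A team-specific home field version would simply replace the single shared
column with one home-edge column per team.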

  Smallest Retrodictive Bias

Winner: Massey Ratings, Kenneth Massey

  Kenneth Massey's NFL Ratings win this award with a bias of almost
zero, -.006.  Several other systems do a very good job of this.  Sagarin
is the next lowest at +.03.  In some sense this is a measure of how well
you are measuring home field advantage.  Since the value was around 2.90
this season, anyone using the traditional value of +3 was very close.

  Retrodiction System of the Year 2000

Winner: CPA Rankings, Steve Wrathell

  Steve Wrathell's CPA Rankings take home the award for retrodiction system
of the year.  CPA was near the top in every category.  The closest competition
for the overall title was Kenneth Massey.  Massey fell a little bit in the
mean error category.  If it were not for a large bias, Least Squares using a
team specific home field advantage could have been the winner.