**Method**

This deep dive takes each draft pick and rates it based on something called ‘surrounding average’: the player’s overall future value compared to that of the other picks made around it. For each selection, the average (mean) value of the 16 picks prior to and the 16 picks subsequent to it is calculated, and the pick’s value is compared to that surrounding average. The simplest way to explain is by example:

*KCB selected OT Josh Baumert with pick 36. His overall future value is 48.*

- The 16 players selected prior to pick 36 have a mean value of 56.938.
- The 16 players selected subsequent to pick 36 have a mean value of 46.375.
- The mean of the above two figures is 51.657.

*Therefore the selection of Baumert is rated as 48 – 51.657 = -3.657.*

*Note that all calculations of means were done to three decimal places, but for reporting purposes values have been rounded to the nearest integer.*

This is not, as such, evaluating whether or not a player is any good; rather, it is rating the GM’s selection at the point in the draft where the pick was made. A 50 rated player is a decent player irrespective of whether he was selected in the first 10 picks or all the way down in round 7 – this study will reward a team that finds a ‘50’ in round 7 but will mark down a team that selects a ‘50’ at the top of the draft.
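The calculation above can be sketched as a short Python function. This is a minimal sketch, not the study’s actual code: the function name is illustrative, and it assumes an interior pick with a full 16-pick window on each side (edge picks are handled differently, as described under ‘Exceptions’ below).

```python
def surrounding_average_rating(values, i, window=16):
    """Rate the pick at index i (0-based) against the mean value of the
    `window` picks before it and the `window` picks after it.

    Assumes a full window exists on both sides (an interior pick).
    """
    before = values[i - window:i]          # the picks made prior
    after = values[i + 1:i + 1 + window]   # the picks made subsequent
    mean_before = sum(before) / len(before)
    mean_after = sum(after) / len(after)
    surrounding = (mean_before + mean_after) / 2  # mean of the two means
    return values[i] - surrounding
```

In the Baumert example the two group means are 56.938 and 46.375, giving a surrounding average of 51.657 (to three decimal places) and a rating of 48 – 51.657 = -3.657.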

**Supporting logic for the method**

Examining the 16 selections before and after each pick gives 32 in total, so generally every team will have made a selection over that range, meaning each team is compared to every other team. Had fewer than 16 selections each side been used, the comparison would predominantly have been against the same sub-set of teams, because teams tend to pick in the same position in each round. In practice, due to traded picks and picks lost to cap violations, the 32 picks will not always span all teams evenly, but this nevertheless seems a reasonable approximation to use.

**Exceptions**

The first and last 16 picks of the draft cannot be compared to 16 prior and 16 subsequent picks respectively, because that many picks do not exist. For these picks, as many comparisons as possible are made symmetrically instead: overall pick 2 is compared to the value of picks 1 and 3; overall 3 to picks 1, 2, 4 and 5; overall 4 to picks 1, 2, 3, 5, 6 and 7; and so on. A similar process operates at the back end of the draft. For the first and last picks, no comparison is made – these picks are counted as zero.
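These edge rules amount to shrinking the window symmetrically to whatever the shorter side allows, with the first and last picks scoring zero. A hedged sketch of the full rule (the function name is illustrative, not from the study):

```python
def rate_pick(values, i, max_window=16):
    """Rate the pick at index i (0-based) against a symmetric window of up
    to `max_window` picks each side, shrinking the window near the ends."""
    n = len(values)
    k = min(i, n - 1 - i, max_window)  # picks available on the shorter side
    if k == 0:
        return 0.0  # first and last picks: no comparison is made
    before = values[i - k:i]
    after = values[i + 1:i + 1 + k]
    surrounding = (sum(before) / k + sum(after) / k) / 2
    return values[i] - surrounding
```

Overall pick 3 (index 2), for instance, gets a window of 2 and is compared to picks 1, 2, 4 and 5, matching the description above.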

**Alternative counting method**

Rather than comparing a player to the 16 players before and after, the study could instead compare each pick only to players selected *after* the pick, given that a GM has no control over who is selected ahead of his pick. For reasons already stated, 32 picks is a good number to examine, but taking the 32 players after a selection would over-reward the top of the first round, because there are fewer very highly rated players in a draft class. Also, round 7 could not be evaluated using this method. Using 16 picks before and after the selection was chosen because it gives a better ‘flavour’ of how the player fitted in with his peers, and, with the tweaks described above for the top and bottom of the draft, every pick can be evaluated.

**Position considerations (or lack of)**

A 60 rated QB is, generally, a more valuable asset than a 60 rated player at any other position. This study does not cater for this: it uses raw player values, so two 60 rated players are deemed to be the same irrespective of position.

Some positions, in particular kicker, punter and fullback, may inflate a team’s draft score because these positions are typically selected late in the draft; a 60 rated kicker, for example, might be selected amongst a wide range of 40 rated players at other positions, and the team picking him will be rewarded disproportionately for the selection. Nevertheless, knowing the right time to select such players is a skill in itself, so all things considered it was decided to retain kickers, punters and fullbacks in this study.

**Note regarding player ratings**

The player ratings used are those per the KC Bees scouting staff at the ‘reveal 2’ stage of the 2019 pre-season. Other teams’ scouts will generate different results. Furthermore, at season’s end, some players will have slightly different ratings but, generally, the end-of-season values are close enough to those at ‘reveal 2’ to make the study worthwhile. Any draft analysis will change over time as players develop, or regress, so this study is really a snapshot at the point that the 2019 season began.

**Results**

Enough of the blurb! Who are the hotties and the notties from the 2019 draft?
*Top 10 picks based on value over surrounding average.*

*Worst 10 picks based on value over surrounding average*

*Overall team ratings and selections*

The images that follow show all selections, totalled by team. Positive picks are hot (shades of red); negative picks are cold (shades of blue). The best 50 picks are red and all other positive picks are pink; the worst 50 picks are dark blue and all other negative picks are light blue. Zero value picks are green. The numbers preceding the player names are the overall pick numbers.

The following 14 teams had a positive total over/under.

The following 18 teams had a negative total over/under.