One of the biggest gripes radio broadcasters have about Nielsen is small sample sizes. The belief is that insufficient sample sizes cause ratings bounce – erratic swings in audience estimates – and that having more panelists or diary-keepers would lead to more stable ratings. Or would it? A new analysis of TV audiences by veteran media researcher Bill Harvey suggests the bounce is real, not a byproduct of small samples.
In a piece penned for Media Village titled “National Rating Swings are Mostly Real -- It’s Reality That’s Unstable,” Harvey compared ratings data from Nielsen’s national TV panel with big data from set-top boxes and found they “seem to move closely together, going up and down at the same time, and almost to the identical degree.”
Harvey examined 49 TV programs across 46 networks. The agreement between panel and set-top box data in how program ratings moved up and down from telecast to telecast “made me feel more confident than ever in the accuracy of both methods,” he says in the Media Village article. “Generally, when you see two signals moving in parallel, they are validating each other, there is a single truth being accurately measured by both of them.”
Among the findings:
The telecast-to-telecast change was in the same direction for panel and set-top box data 94% of the time (a simple way to compute that kind of agreement is sketched after this list).
Another 1% of the time the two were almost parallel, but not going in exactly the same direction.
The remaining 5% of the observations showed different directions of change.
Despite the set-top box data’s huge sample size advantage, its telecast-to-telecast changes were only slightly smaller on average than the panel’s. Harvey said this suggests “that these are mostly real changes, with only a small contribution from sampling errors.”
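For the statistically curious, here is a minimal sketch in Python (using pandas) of how a directional-agreement figure like that 94% can be computed from two aligned ratings series. The numbers and column names below are hypothetical placeholders, not Harvey’s data or his actual method:

```python
import pandas as pd

# Hypothetical telecast-by-telecast ratings for one program, measured two ways.
# Neither the numbers nor the column names come from Harvey's study; they are
# placeholders to illustrate the arithmetic.
ratings = pd.DataFrame({
    "panel":       [2.1, 2.4, 2.2, 2.6, 2.5, 2.3],  # Nielsen panel ratings
    "set_top_box": [2.0, 2.3, 2.2, 2.5, 2.5, 2.2],  # big-data (STB) ratings
})

# Telecast-to-telecast change in each measurement.
changes = ratings.diff().dropna()

# Share of telecasts where both sources moved in the same direction
# (both up or both down), the kind of agreement behind the 94% figure.
same_direction = (changes["panel"] * changes["set_top_box"] > 0).mean()

# Average size of the swings in each source. If the big-data swings were far
# smaller than the panel's, that would point to panel sampling error;
# comparable magnitudes point to real audience movement.
average_swing = changes.abs().mean()

print(f"Same-direction changes: {same_direction:.0%}")
print(average_swing)
```

Comparing the average swing sizes at the end is the same logic behind the “only slightly smaller” finding above.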
Harvey’s conclusion is that “the ragged ups and downs of ratings,” long assumed to be the fault of panels with small sample sizes, aren’t the result of measurement instability at all. Instead they represent “swings in real audiences in and out of a program from week to week, because of the beckoning effect of immense choice.” Or what Harvey dubs “reality instability.” The shifts in program performance from Nielsen's panel are mirrored in TV set-top big data.
Whether the “bounce is real” conclusion applies to radio could fuel endless debate. Such a comparison would need to account for differences in how radio is consumed compared to TV. And radio currently doesn’t have a big-data equivalent of TV’s set-top box data to allow the kind of comparison Harvey made for TV, although several providers are working on that.
“The Nielsen One approach for radio will integrate big data like streaming tuning to enhance and improve their panel data,” says Cumulus Media Chief Insights Officer Pierre Bouvard. “The question is, will vastly larger sample sizes eliminate the wobble in radio rating trends?”