How Did We Do? A Review of 2022 Before Our First Poll of 2023.
Here’s a list of survey results from the 2022 midterm elections, all from the same pollster. As you read them, consider whether this pollster’s results were good or bad or whatever adjective you’d like.
Poll: D+6; result: D+2.4
Poll: R+4; result: R+1.5
Poll: D+5; result: D+4.9
Poll: R+5; result: R+7.5
Poll: EVEN; result: D+0.8
Poll: D+3; result: D+1
All right, what did you think?
I hope you thought they were at least good, because this is a sample of about half of our final New York Times/Siena College polls in 2022. On average, the final Times/Siena polls differed from the actual results by 1.9 percentage points — the most accurate our polls have ever been. Believe it or not, they’re the most accurate results by any pollster with at least 10 final survey results in the FiveThirtyEight database dating to 1998. We were already an A+ pollster by its measure, but now we’ve been deemed the best pollster in the country.
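As a quick sanity check, the 1.9-point figure can be recovered from the six results listed above, treating Democratic margins as positive and Republican margins as negative:

```python
# Sanity check of the average polling error from the six results above.
# Margins are signed: Democratic lead positive, Republican lead negative.
polls = [
    (6.0, 2.4),    # Poll D+6; result D+2.4
    (-4.0, -1.5),  # Poll R+4; result R+1.5
    (5.0, 4.9),    # Poll D+5; result D+4.9
    (-5.0, -7.5),  # Poll R+5; result R+7.5
    (0.0, 0.8),    # Poll EVEN; result D+0.8
    (3.0, 1.0),    # Poll D+3; result D+1
]

errors = [abs(poll - result) for poll, result in polls]
avg_error = sum(errors) / len(errors)
print(f"Average absolute error: {avg_error:.1f} points")  # → 1.9
```

The six errors (3.6, 2.5, 0.1, 2.5, 0.8 and 2.0 points) average out to roughly 1.9 points, matching the figure for the full set of final polls.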
My hope is that most of you thought those poll results were good, but I’d guess you didn’t think they were incredible. They’re not perfect, after all. And I can imagine many reasonable standards by which these polls might not be considered especially accurate. They certainly weren’t objective truth, which we might usually think of as the standard for Times journalism.
Even so, this level of accuracy is about as good as it can get in political polling. We may never be this accurate again. There may be room to debate whether “great for political polling” is the same as “great,” but if you’re judging polls against perfection it may be worth scaling back your expectations. Even perfectly designed surveys will not yield perfect results.
Nonetheless, we try to be perfect anyway. With the 2022 results in and final, we’ve been poring over the data — including our experiment in Wisconsin — to identify opportunities for improvement. I must admit this has been a less urgent (and more pleasant!) experience than similar exercises after prior election cycles, which have felt more like an “autopsy” or “post-mortem” than a routine doctor’s visit.
Still, I did make sure to get our polls in for their biennial checkup ahead of our first national survey of the cycle, which is in the field as I type. More on that later, but for today here’s the good news and some bad news from our dive into last year’s polling.
Good news
- Our polls were right for the right reasons. With one interesting exception (which we’ll discuss later), they nailed the composition of the electorate, the geographic breakdown of the results and the apparent results by subgroup.
- The raw data was quite a bit cleaner, for lack of a better word, than it was in 2020. Back then, the statistical adjustments we made to ensure a representative sample made a big difference; without them, our polls would have been far worse. This time, the final results were only about a point different from our raw data. It’s hard to tell whether that’s because of refinements to our sampling or because survey respondents have become more representative in the wake of the pandemic or with Donald J. Trump off the ballot, but it’s a nice change either way.
- The big Wisconsin mail experiment — where we paid voters up to $25 to take a mail survey — didn’t reveal anything especially alarming about our typical Times/Siena polls. There was no evidence to support many of our deepest fears, like the idea that polls only reach voters who are high in social trust. There was no sign of the MAGA base abstaining from polling, either. On many measures — gun ownership, evangelical Christianity, vaccination status — the Times/Siena poll looked more conservative than the mail poll.
OK, now the bad news
- The Wisconsin study didn’t offer easy answers to the problems in polling. Yes, it’s good news that the problems aren’t as bad as we feared, but we went to the doctor’s office for a reason — the state of polling isn’t completely healthy, and we’re looking to get better. We may have ruled out many worst-case diagnoses, but a clearer diagnosis and a prescription would have been nice.
- The Wisconsin study did offer ambiguous evidence that Times/Siena phone respondents lean a bit further to the left than the respondents to the mail survey. I say ambiguous partly because the Times/Siena telephone survey isn’t large enough to be sure, and partly because the lean doesn’t show up in the top-line numbers. But if you account for the extra tools at the disposal of the Times/Siena survey (like ensuring the right number of absentee vs. mail voters), the mail data does lean more conservative — enough to feel justified in going to the doctor.
- This modest tilt toward the left appears mostly explained by two factors I’ve written about before. One: The less politically engaged voters lured by a financial incentive appear to be ever so slightly more conservative than highly engaged voters. Two: People who provide their telephone numbers when they register to vote are ever so slightly more Democratic than those who do not, and they respond to surveys at disproportionate rates as well. It’s not clear whether these issues would be as problematic in other states that, unlike Wisconsin, offer additional information on voters’ partisanship.
- We did get lucky in one big case: Kansas’ Third District. Our respondents there wound up being far too liberal, yet our overall result was mostly saved by grossly underestimating the vigor of the Democratic turnout. In a higher-turnout election in 2024 — when there’s far less room for turnout to surprise — we wouldn’t be so lucky.
- Mr. Trump wasn’t on the ballot. That’s not exactly bad news, but it might be in 2024 if his presence in some way increases the risk of survey error by energizing Democrats to take polls while dissuading the less engaged, irregular conservatives who turn out only for him.
What we’ve changed/what we’re changing
We’ll make a number of fairly modest and arcane changes to our Wisconsin and state polls, reflecting a series of modest and arcane lessons from the Wisconsin study. But so far none of these insights have yielded fundamental changes to our surveys heading into 2024. That said, there are a few larger tweaks worth mentioning:
- When deciding whether someone is likely to vote, we will rely even less on whether voters say they’ll vote, and more on their demographics and whether they’ve actually voted in the past. This is the third cycle in four — with the exception being 2018 — when we would have been better off largely ignoring whether voters say they will vote in favor of estimates based on their demographics and voting record. We won’t ignore what voters tell us, but we will look at it that much more skeptically when estimating how likely someone is to vote.
- We’re reordering our questionnaires to let us look at and potentially use respondents who drop out of a survey early. This isn’t usually an issue for us — our state and district polls have never taken longer than eight minutes or so to complete — but about 15 percent of respondents who made it to the major political questions on our longer national polls and the Wisconsin study later decided to stop taking the survey. Not surprisingly, they’re the kind of low-interest voters we need the most.
- When it comes to Republican primary polling, we might adjust our sample — or weight it — using a new category: home value. In our two national polls with the Republican primary ballot last year, home value was an exceptionally strong predictor of support for or opposition to Mr. Trump, even after controlling for education.
Overall, Mr. Trump had a lead of 60 percent to 17 percent among people whose homes were worth less than $200,000, based on L2 data, while Ron DeSantis led, 47-24, among respondents whose homes were worth more than $500,000.
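To illustrate what weighting on home value might look like, here is a minimal sketch of cell weighting by home-value bracket. The brackets, target shares and sample below are invented for illustration; they are not the actual Times/Siena weighting targets or procedure.

```python
# Hypothetical sketch of weighting by home-value bracket (not the actual
# Times/Siena procedure). Each respondent's weight scales by the ratio of
# the target share of their bracket to its share in the raw sample.
from collections import Counter

# Hypothetical target shares of the electorate by home-value bracket.
target = {"<200k": 0.35, "200k-500k": 0.45, ">500k": 0.20}

# Toy sample: one bracket label per respondent.
sample = ["<200k"] * 25 + ["200k-500k"] * 50 + [">500k"] * 25

counts = Counter(sample)
n = len(sample)

# Weight for each bracket: target share divided by raw sample share.
weights = {bracket: target[bracket] / (counts[bracket] / n) for bracket in target}

for bracket in target:
    print(bracket, round(weights[bracket], 2))
```

In this toy example, respondents in under-represented brackets get weights above 1 and those in over-represented brackets get weights below 1, so the weighted sample matches the target distribution.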
I don’t think these changes will make very much of a difference, but we’re putting them to the test in the Republican primary now.
There’s one last change to mention, one with no effect on the quality of our polls: For candidates who receive less than 1 percent of the vote but more than 0.5 percent, we will record them as less than 1 percent (<1%) in our crosstabs and documentation.
Why? The Republican Party is using survey results to help determine who qualifies for primary debates. Among other requirements, candidates need at least 1 percent in at least three national surveys, or two national surveys and two early-state polls. Usually, we round results to the nearest whole number, which means a candidate with 0.6 percent of the vote would be reported with 1 percent of the vote. But with Republicans setting a 1 percent threshold for debate inclusion, it isn’t so clear whether rounding is still appropriate. My view: One percent means reaching a full 1 percent. Respondents to a Twitter poll — unscientific though it may be — seemed to agree by a three-to-one margin.
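The reporting rule amounts to a small change in how vote shares are formatted. Here is a minimal sketch (the `report_share` helper is a hypothetical name for illustration, not actual Times code):

```python
def report_share(pct: float) -> str:
    """Format a candidate's vote share for the crosstabs.

    Shares between 0.5 and 1 percent are reported as "<1%" rather than
    being rounded up to 1 percent; all other shares are rounded to the
    nearest whole number as usual.
    """
    if 0.5 <= pct < 1.0:
        return "<1%"
    return f"{round(pct)}%"

print(report_share(0.6))  # → <1%  (would have rounded up to 1% before)
print(report_share(1.4))  # → 1%
print(report_share(4.0))  # → 4%
```

Under ordinary rounding, 0.6 percent would have appeared as 1 percent and cleared the debate threshold; under this rule, it is reported as <1%.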
Why am I telling you this? We wanted to make sure we disclosed this change while the survey was still in the field and before we knew the effect, lest someone suggest we’ve changed our practices to exclude certain candidates.