Saturday, December 17, 2016

Not Your Forefathers' Electoral College

All of a sudden people care about the Electoral College again, and they're making all sorts of claims about what should or shouldn't happen based on "historical" arguments about what the Electoral College was supposed to be. It's clearly an example of people cherry-picking "facts" to support their argument and ignoring the details that inconveniently don't.

A fair and complete reading of the historical record would show that neither side in the current struggle is completely supported by the facts. However, it is reasonable to conclude that if the Electoral College functioned the way it was originally intended, Donald Trump would probably not become our next President.

Why have an Electoral College in the first place?

Ordinarily, most people just see the Electoral College as an arcane, but relatively benign, quirk in our Constitution. It usually follows, but often exaggerates, the popular vote, and so no one really thinks twice about it. That is, until you get an instance like we had this year (and in 2000), where the winner of the popular vote doesn't win in the Electoral College.

If you ask most people, under normal circumstances, outside of the context of an election like this one, they will tell you that they think the Electoral College is silly and we should just go with the popular vote. Then you get an election like this one, and all of a sudden people become staunch defenders of the Electoral College simply because it means their candidate will win.

But what was the point of having it in the first place? I've seen a lot of people offering up arguments that are incomplete at best. And they are strategically incomplete because they only want to include the parts that support their side. But let's take a look at the full story to understand what it was that the Framers of the United States Constitution were really thinking when they devised this strange system of selecting a President.

The Framers didn't trust government. That's why, when they attempted to craft a new form of government, they went to great lengths to try and restrain it. They'd seen the abuses of governmental authority that led them to rebel against British rule and replace it with their own.

The first attempt went too far.

The first Constitution of the United States, the Articles of Confederation, was an overreaction to our Founding Fathers' fear of government authority. It created a very weak national government and gave most authority to the State governments.  Whether or not that sounds like a good idea is the subject of a separate argument, but there is a key detail in this: In creating a weak national government, they did not include a provision for an Executive.  There was no "President of the United States" under the Articles of Confederation. As a result, there was no one at the national level who was responsible for making sure that the laws were carried out.

After operating under that constitution for about a decade, it became clear that it was insufficient to deal with the problems that the new country was facing. Eventually the problems became so bad, and the desire to fix them so great, that it was decided that the Articles of Confederation needed to be fixed. So a Constitutional Convention was convened in Philadelphia in 1787.

The Articles of Confederation were essentially tossed out and they basically started over from scratch.  Many things were different about this new Constitution, namely that it created a national government with substantial authority. Part of that included creating an Executive Branch responsible for making sure that the laws were "faithfully executed" (hence the name, you see).

That meant putting someone in charge, and that raised the same concerns they had under British rule: that governmental power, and Executive power in particular, could be abused.  It needed to be checked in some way.  Related to this question was the decision over how such an official would be selected. This, then, becomes the story of how the Electoral College came to be.



One of the first proposals to be considered regarding the selection of the President was a simple notion of direct popular election: Just have the (male, white) people pick the President in an election.  This was not a very popular idea at the Convention for one very simple reason: The Framers were, for the most part, doubtful about the people's ability to select a President wisely.

It was the 1780s. The state of communication, transportation, and education was not what it is today. There was a reasonable concern that the voters wouldn't really be able to make a wise decision about who should be President because they were ill-equipped to do so. They wouldn't be able to discern who the "best" person for the office would be because, simply put, they didn't know much about governing and the law themselves. Furthermore, it wasn't likely that voters would know much about people outside of their own state... people who would be qualified and deserving of the position.

Beyond that, and even more telling, were the concerns that the public could easily fall prey to what America's most famous Broadway rapper/Founding Father, Alexander Hamilton, referred to in Federalist Paper #68 as someone with "talents for low intrigue, and the little arts of popularity."  Simply put, the Framers feared that if the election were left up to the people themselves, the office would end up falling to a demagogue... someone who would simply rise to the office because he knew what emotional buttons to push with the voters.

So, selection by direct popular election was rejected by the Framers at the Constitutional Convention because they didn't think the American public would be informed enough to make the selection themselves, and could too easily be persuaded by someone skilled in "the little arts of popularity." Basically, the thing they were most concerned about was something exactly like what ultimately happened in France in 1804: Napoleon used his popular appeal to convince the French people to proclaim him Emperor in a popular vote known as a plebiscite.

Even though they had significant doubts about the ability of the public to choose a President without falling prey to demagoguery, they still felt the people should be involved in some fashion. They were, after all, trying to create a system of popular sovereignty where the people rule. It's why the Constitution begins with the words "We the people..."

The people had to have some kind of say, just mediated in some way.

They considered selection by Congress, somewhat akin to how Prime Ministers are selected in parliamentary systems... with the exception that the President would not be a member of the legislature. It satisfied the requirement that the people be involved in the selection of the President because the people would have a hand in the selection of Congress.  The "voice of the people" therefore would be heard through their duly elected members of Congress, who would presumably be better informed about the complexities of government and policy, and know of individuals who would be qualified to appropriately and effectively exercise the Executive Power of government.

But this, too, had a significant drawback: It would create a President who would be beholden to Congress for his position. (Yes, I'm using the masculine pronoun here because it was 1787 and they clearly thought that governing was solely the province of men). Since having an executive who was independent of the legislature was critical to their plan of having each branch of the government serve as a check on the others, this wasn't going to work. If you let Congress pick the President, they would simply pick someone who would let them do what they want and not be an effective check on their power.

So... scratch that idea.

What's left? That's where the idea for the Electoral College came in: Have the people select individuals (who were not members of Congress) who would choose the President on their behalf.  It was a simple solution that satisfied their requirements of a) maintaining popular sovereignty, b) selecting someone qualified for the office, and c) keeping the choice independent of Congress.  In Hamilton's words, the goal was simple:
the immediate election should be made by men most capable of analyzing the qualities adapted to the station [Federalist Paper #68]
In other words, the voters couldn't be trusted with the decision, but the Electoral College could be. It would be made up of those who would best understand the unique qualifications required to govern effectively as President.

The idea was simple. Voters didn't vote for President. They were to vote for Electors. They didn't see the names of Presidential candidates on the ballot.  They saw the names of potential Electors. They were to vote for someone they felt was knowledgeable about government and would be able to make a wise and independent judgment about who would be most qualified and fit to serve as Chief Executive of the national government.

They were pretty confident they'd come up with a good plan. Hamilton claimed that if the system was not perfect, it was "at least excellent," and went so far as to argue in Federalist #68:
The process of election affords a moral certainty, that the office of President will never fall to the lot of any man who is not in an eminent degree endowed with the requisite qualifications.
So, what are the "requisite qualifications" of which Hamilton speaks? Well, the Constitution is notably silent on that point.  The only qualifications it lists are simple: a natural born citizen who is at least 35 years old and has been a resident of the United States for at least 14 years.

That's it.

Surely they must have figured that there should have been something more than age and citizenship requirements, right? I mean, why go through all the effort of devising a method of selection that would assure that only the most qualified would become President if all they had to do was determine how old a person was, where they were born, and how long they'd been living in the country?

The simple answer is that they didn't include a more exhaustive list of qualifications because they felt they didn't need one. They understood that the people who would actually be selecting the President, the Electors, would know what those requisite qualifications should be. That's why the Electors were going to be entrusted with the task of choosing the President.

It is reasonable to conclude that they expected the President to be someone who understood the complexities of policy, the details of the Constitution, and how to maneuver the mechanisms of government. It is eminently clear that they did not think that it was desirable to have someone who would govern on the basis of popular appeal, which they viewed as dangerous.

The Perversion of the Electoral College

It's pretty clear now that what the Framers intended is not what we have now.  The Electors do not use wise and independent judgment. The expectation now is that they're just supposed to vote for the candidate who wins the popular vote of their state and not exercise any judgment at all.  Indeed, if they do choose to cast their vote for someone other than their state's popular vote winner, they are referred to as "faithless."  How dare they go against the "voice of the people"?

Indeed, we've dropped all pretense of choosing Electors based on their knowledge and judgment. We don't even list Electors' names on the ballots anymore. We just see the names of presidential candidates. Maybe there's a reference to Electors on the ballots in some states, but most voters don't really know who those people really are or how they came to hold that position.

So, how did we get to this point? How did we get from Electors entrusted with the great responsibility of wisely selecting a President on behalf of the people to Electors wielding a rubber stamp, expected not to think but simply to act?

The simple answer is that political parties happened.

Political parties developed once the Constitution had been ratified and the new government had been in operation a few years.  People figured out the "rules of the game" of our election system and realized that there was strength in numbers.  If they could organize and coalesce ahead of time, they could "rig" the game in their favor.

You see, there's nothing in the Constitution about nominations. There are no provisions for primaries and caucuses and delegates and Conventions. All of the trappings that we've become accustomed to as the normal operation of our presidential election process developed after the first couple of elections and outside of the Constitution.

It was all part of a move to game the system. The way I like to describe it to my students is that it's a lot like what happens on Survivor: A tribe loses an immunity challenge so they have to go to Tribal Council and vote someone out.  The key is what happens in the jungle the afternoon before they go to Tribal Council.  Alliances form and an attempt to predetermine the outcome of the vote takes place. The members of an alliance agree on who they are going to vote for because they know that there is strength in numbers, and as long as they all hold to the agreement, the outcome is already known before the vote happens.

That's what happens in our elections, or at least there is an attempt to make it happen that way.  You see, the Electors were supposed to be deciding independently among themselves for whom they were going to cast their votes. The voters didn't know. The other Electors didn't know.  They were supposed to make that decision on their own after careful deliberation.

And they weren't given a "list" to choose from.  They were supposed to decide on their own which natural born citizen, at least 35 years old and a resident of the country for 14 years, would be the most qualified to serve as President of the United States. "The voice of the people" was only supposed to be heard in the selection of these men, who were chosen because they were trusted to take on this important task.  The Framers thought it quite likely that the Electoral College vote would often fail to produce the required majority, which is why they included the provision handing the decision to the House of Representatives should no candidate receive a majority of the Electoral Votes.

They did not anticipate that the process would become rigged by "alliances" being formed and the parties deciding ahead of time who the Electors should vote for. This is evidenced by the fact that they originally established the rule that the winner of the Electoral College majority would become President and the runner-up would become Vice President. This created a scenario where the President would be of one party and the Vice President would be of the other.  The fact that they didn't see that problem coming is evidence that they didn't foresee how political parties would pervert their "perfect" system.

The system was derailed even further when potential Electors began publicly announcing for whom they intended to cast their votes if they were to officially become Electors.  So the idea that they would exercise wise and independent judgment quickly went out the window by the election of 1796.  The Federalist Party decided that John Adams was their preferred candidate, and word went out to the States that those who supported the agenda of John Adams and the Federalist Party should vote for Federalist Electors who were pledged to vote for Adams. Similarly, the Democratic-Republican Party tapped Thomas Jefferson to be their nominee, and this, too, was communicated to the public.

Thus, the "perfect" nature of the Electoral College quickly evaporated. The Electors had essentially become a formality, a technical mechanism through which the popular vote within the states would operate. It became more formalized over time. The "unit rule" was instituted, awarding all of a state's Electors to the candidate who won a plurality of its popular vote, and each state party would draw up a list of names of people who would serve as Electors in the event that their party's candidate won the state's popular vote. These potential Electors were picked for their loyalty to the party, not because they had any special qualification to make a wise and independent decision as to who was "in an eminent degree endowed with the requisite qualifications" to be President of the United States.

Simply put, the Electoral College does not function the way the people who invented it intended. Whether or not you see that as a good thing unfortunately depends on whether or not you like the outcome it produces, not on the relative merits of its principles.


Current Arguments For and Against the Electoral College and Why They Fall Short

Not surprisingly, there's been a lot of discussion in the last several weeks about the Electoral College, why it was created, how it is supposed to work, and whether or not we should get rid of it.  All too often, the arguments I've seen become bogged down by one unfortunate problem: they focus only on those points that support their desired outcome and ignore some inconvenient facts that don't.

Those who supported Hillary Clinton point to its unfairness and its distortion of the "voice of the people," which is real (she did, after all, win the national popular vote by nearly 3 million votes). But they have also made the dubious argument that the Electoral College was created to preserve slavery. All the while, they ignore the broader point that the Framers expressly rejected the idea that the national popular vote should determine the outcome.

Yes, the issue of slavery was an important consideration at the Constitutional Convention, and the conflict and compromise between free states and slave states permeates the entire document. But to suggest that the purpose of the Electoral College was to appease the Southern states and/or "preserve slavery" is a stretch at best.  The relative power of the states was an important consideration only insofar as it concerned how Electoral Votes would be allocated among the states, not in the principle that led to the creation of the Electoral College itself. And the allocation of Electoral Votes was itself a secondary artifact of how representation was to be established in the Congress.  So, while the Electoral College may have helped preserve the balance of power between slave states and free states, that was not its principal purpose.

On the other hand, Trump supporters are also guilty of conveniently ignoring important elements of the Electoral College's creation.  The Electoral College, they argue, was created to prevent the people in the most populous states from having "too much influence."  This argument is just as dubious as the "preserving slavery" argument, for the exact same reason.  Its effect may have been to help balance the influence of the large states with that of the small states, but again, that wasn't the reason it was created. If anything, the concern about large states was only in regard to the fact that, given the state of communication in 1787, the Framers feared that people would only vote for a "favorite son" from their own state because that's all they would know.

The bigger problem for Trump supporters in the argument over the Electoral College is the very inconvenient fact that, if the Electoral College were to function in the manner the Framers intended, Donald Trump probably wouldn't receive a single vote. It is not a stretch to imagine that Electors exercising wise and independent judgment would most likely not even consider someone who has had no government experience and has frequently demonstrated a fundamental lack of understanding about governing, policy, and the Constitution.

I'm not saying it is impossible, but given the responsibility they were given and the expectation of deliberation placed upon them, it is highly unlikely they would say, "Hey yeah, let's give it to the reality TV guy who has no government experience whatsoever, has demonstrated a fundamental lack of understanding of how governing, policy, and the Constitution work, and would be entangled in a rather immense web of potential conflicts of interest should he become President... not to mention the strong evidence now that a foreign adversary significantly attempted to influence the outcome of the election."

I'm not saying they would automatically favor Hillary Clinton, but they would most likely not even give Donald Trump a first, let alone a second, look.

Where Do We Go From Here?

Neither side comes out of this one a winner without acknowledging some very basic weaknesses in its own point of view. When you try to argue a point based on some presumed "historical fact," you would be best served if you actually knew all of the facts, and acknowledged even those that weaken your argument.

Otherwise, you're not actually accomplishing anything other than making yourself feel good about holding the views you hold, and receiving validation from those for whom you've reinforced theirs.

In the end, the Electoral College is going to do what it's done since 1796: The Electors are going to dutifully write down the name of the candidate they're "supposed" to write down, and Donald Trump will become the 45th President of the United States.  But it does raise the question: If the Electors aren't going to do what the Framers intended them to do in the first place, if the Electoral College isn't going to function the way it was actually supposed to, then there's really no point in having it at all.

Moving forward, it would be nice if we could have a serious discussion about how we go about selecting a President. Do we trust the people to make a wise decision, or do we need the safety valve of the Electoral College?

If we determine that we really don't trust the people, then we need to change the way we think about the Electors and let them behave in a manner consistent with how the Framers originally intended them to.

If we believe that the "voice of the people" should matter and we can trust it, then the Electoral College runs counter to that principle. It distorts the voice of the people by giving greater weight to some people's voices than it does to others simply because of where they happen to live.

But if we operate from an "ends justify the means" principle and think, "Well, I like that my candidate won, so I like the way we pick the President," then we're not really going to do anything.  Just consider what the reaction would have been if it had gone the other way.  The cries of "The system is rigged!" would be deafening, and the push for reforming it would be overwhelming.

Just look at how President-Elect Trump reacted four years ago when it looked like Barack Obama was going to win the Electoral College but lose the popular vote:



Remember that if/when the outcome goes the other way at some point in the future.



Suggested reading, because I'm a professor and therefore can't resist the urge to try and make someone read something:

The Avalon Project : Federalist No 68 - http://avalon.law.yale.edu/18th_century/fed68.asp
Alexander Hamilton's explanation of, and justification for, the Electoral College

Edwards, George C. "The Faulty Premises of the Electoral College." In Michael Nelson, ed., The Presidency and the Political System, 10th ed. Washington, DC: CQ Press.
A point-by-point takedown of the justifications for the Electoral College by one of the nation's leading presidential scholars.

Monday, December 5, 2016

Politically Uninformed and Unaware Of It: The Dunning-Kruger Effect

Given the current degree of interest in the pervasiveness of political misinformation and the role that it may have played in this year's election campaign, an important question to ask is whether individuals have the ability to recognize that they're misinformed.

As it turns out, as part of a survey I conducted earlier this year, I set out to examine that very question.  Simply put, my research question was this:

Do uninformed people realize just how uninformed they are? And if they don't, can their lack of awareness be explained in any way?

In 1999, two Cornell University psychologists, David Dunning and Justin Kruger, documented the phenomenon wherein individuals who lack certain skills also tend to lack the ability to recognize that they lack those skills. They presented their findings in an article in the Journal of Personality and Social Psychology and discussed the phenomenon which now bears their name: the "Dunning-Kruger Effect."

Their study was fairly straightforward: They presented their subjects with a series of tests to assess their ability in a number of areas. Afterwards, they asked each of the subjects to assess how well they'd done on these tests.  In each instance, the subjects who scored lower consistently over-estimated their actual abilities. Dunning and Kruger took this to mean that people who performed poorly lacked the metacognitive skills to realize it.

 Simply put: "incompetent" people lacked the ability to recognize their actual level of "incompetence."

Using this research as a starting point, I employed the same strategy in a survey I administered this summer to a national sample of American adults. The ability test I used was a standard battery of ten factual questions about politics (e.g., "Do you happen to know which political party currently has the most members in the United States House of Representatives?" "How many times under current laws can someone be elected President of the United States?" "What political office does Paul Ryan currently hold?" etc.). Each respondent thus received a Political Information score ranging from 0 to 10 based on the number of correct responses they gave to these questions.

I then had them assess their own abilities in two ways. First, I asked them after each question how certain they were, on a scale from 1 to 4, that they'd answered it correctly; 1 indicated they weren't certain at all and 4 indicated that they were very certain.  In addition, at the end of the "test" I asked them to estimate how many of the questions they'd gotten correct.

The results I obtained from this are consistent with those found by Professors Dunning and Kruger. The individuals with lower Political Information scores significantly over-estimated how much they actually knew. Simply put, they were politically uninformed and unaware of it.

The graph below shows the basic pattern demonstrating this:



The horizontal axis represents people's Political Information scores; the further to the right, the more questions they got correct.  The red line represents the average estimate of the number of answers respondents believed they got correct, arranged according to how many they actually got correct. For example, people who got none of the questions correct believed, on average, that they got between 3 and 4 correct. Those who got four or fewer correct pretty consistently over-estimated the number of correct responses they actually gave. Those who got five or more correct were significantly more likely to also correctly estimate their number of correct responses, although those at the highest end of the scale tended to slightly underestimate their ability.

Of course, it could be that people with lower levels of Political Information simply aren't any good at guessing the number of correct responses they gave. That's where the additional self-assessment measure comes into play.  By asking them how confident they were that they'd answered each question correctly we're not just measuring their ability to guess the number of correct answers, but their level of confidence in what they believe they know about politics.

For each respondent, I calculated an overall confidence score by adding up each of their 4-point confidence ratings.  A person who was "very certain" that they'd answered all ten questions correctly would receive a confidence score of 40. A person who was "not certain at all" about any of the ten questions would have a confidence score of 10.
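For readers who want to see the mechanics, here's a minimal sketch of how scores like these could be computed. The file name, column names, and data layout are my own illustrative assumptions, not the actual survey file:

```python
import pandas as pd

# Hypothetical layout: one row per respondent, with correct_1..correct_10
# (1 = answered correctly, 0 = not), certain_1..certain_10 (the 1-4
# certainty ratings), and the respondent's own estimate of their total.
df = pd.read_csv("survey.csv")

item_cols = [f"correct_{i}" for i in range(1, 11)]
cert_cols = [f"certain_{i}" for i in range(1, 11)]

df["info_score"] = df[item_cols].sum(axis=1)   # 0-10 Political Information
df["confidence"] = df[cert_cols].sum(axis=1)   # 10-40 overall confidence

# The pattern behind the graph above: the average self-estimate of correct
# answers at each actual score
print(df.groupby("info_score")["estimated_correct"].mean())
```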

The results in the graph below are similar to those presented above and confirm the interpretation that those with low levels of information weren't just bad at guessing their number of correct responses, they truly believed they knew more than they actually did. In many instances they were "very certain" that they knew something was a "fact" when it actually wasn't a fact at all. Indeed, the gap between their level of confidence and that which they would have been justified in believing was even greater than was suggested by their simple overestimation of correct responses.


So, not only were they politically uninformed and unaware of it, they were significantly overconfident in their own ability.  They weren't just uninformed, they were misinformed.

I'm sure many of you think you know a person like this. And I'm fairly certain what many of you will want to ask next: Which candidate's supporters are more likely to arrogantly believe they know more than they actually do?

Who Are The Overconfident Overestimators?

To determine this I calculated two additional scores: overestimation and overconfidence. Overestimation is simply the difference between the number of responses they thought they got correct and their actual number of correct responses.  I measured their overconfidence by adding up their certainty scores for each question they got wrong. This differentiates the person who got a question wrong but wasn't certain of their answer from one who got it wrong while believing they had it right. In short, it highlights the difference between being simply uninformed and being misinformed.
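Continuing the sketch from earlier (same hypothetical columns), the two measures could be computed like this:

```python
# Overestimation: self-estimate minus actual score (negative = underestimate)
df["overestimation"] = df["estimated_correct"] - df["info_score"]

# Overconfidence: summed certainty on the questions answered incorrectly,
# separating the merely uninformed from the confidently misinformed
wrong = (df[item_cols] == 0).to_numpy()
df["overconfidence"] = (df[cert_cols].to_numpy() * wrong).sum(axis=1)
```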

Despite what I'm sure many of you might want to be true, the results showed that neither candidate's supporters were more likely to overestimate or be overconfident. There was no significant difference in the likelihood of Clinton or Trump supporters to either overestimate their level of information or to be overconfident in their misinformation.

However, what I did find was actually quite interesting: While there was no relationship between the direction of a person's political leanings and their level of overestimation or overconfidence, there was a significant pattern associated with the extremity of their views.

I had people place themselves on a standard 7-point political ideology scale where 1 meant "Extremely liberal" and 7 meant "Extremely conservative"; the middle of the scale, 4, represents "moderate."  Neither side of the scale showed a greater likelihood to overestimate their level of information or be overconfident in it. But when I folded the scale in half, to measure not whether people were liberal or conservative but how extreme they were in their ideological views, a very clear pattern emerged: Extreme liberals and extreme conservatives were both more likely to overestimate their level of information and be overconfident in their misinformation than those at the ideological center.
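Folding the scale is a one-line transformation. A sketch, again with assumed column names:

```python
# Fold the 7-point scale at its midpoint: 4 ("moderate") becomes 0, while
# 1 ("extremely liberal") and 7 ("extremely conservative") both become 3
df["extremity"] = (df["ideology"] - 4).abs()

print(df.groupby("extremity")[["overestimation", "overconfidence"]].mean())
```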

The effect was not huge, but it was significant.  Those on the ideological extremes overestimated the number of questions they'd answered correctly by roughly one more than those who identified themselves as moderate did. Their level of overconfidence was slightly higher as well. Simply put: With extremity comes certainty, and some of that certainty is clearly misplaced.

In addition, I found that overestimation and overconfidence were both significantly correlated with one of the dimensions of anti-intellectualism I talked about in my previous post. Overestimation and overconfidence were both higher among those demonstrating higher levels of unreflective instrumentalism: a view that questions the value of education beyond its immediate function of job training.

So, not only are they less informed and unaware of it, but they also tend to hold a certain animosity towards the thing that could make them less uninformed.

The Challenge of Combatting Misinformation

To be sure, there's more to being politically informed than simply knowing a handful of facts, but these findings possibly give us insight as to why it seems so challenging to try and "correct" a person's misinformation.  When people firmly believe something to be true, it makes it that much more difficult to convince them otherwise. Those who are more extreme in their views are that much more likely to have the firm conviction of their beliefs.

People want to feel justified in their beliefs. They'll seek out information that reaffirms what they already believe in order to make them even more confident in their position. Furthermore, they will rigorously resist any attempt to challenge that.

Misinformation spreads because people want it to be true and, therefore, often don't do their due diligence to determine whether it actually is true. The only real solution to the problem is to point out misinformation when you see it. There's no guarantee that it will actually be effective, though. The real challenge is in finding a way to do it that will be effective and not cause people to erect and reinforce their cognitive barriers of resistance.

There are no easy answers for doing that, but one thing is probably certain: Insulting someone while trying to tell them they're wrong about something is probably not going to be very effective. All too often that's what I see happening in online comment threads. Such "discussions" often devolve into counter-productive name-calling and insults. (I put "discussions" in quotes because they're often not really about an exchange of ideas and developing understanding, but rather winning "burn" points)

It may give the insulter an emotional payoff, but it's not likely to have much effect in making the uninformed insultee less uninformed.  This may be a less than satisfying answer for many, but politics isn't simple; why would you think that fixing people's misconceptions about it would be?

Monday, November 21, 2016

The Anti-Intellectual Election: I have seen the enemy and it is us.

In 1963 historian Richard Hofstadter published his influential and Pulitzer Prize-winning book entitled Anti-Intellectualism in American Life. In it he documented the historical and cultural roots of what he described as a resentment and suspicion of the life of the mind and of those who are considered to represent it. Evidence now suggests that the rise of Donald Trump, first as a candidate and now President-Elect, represents the latest and very clear manifestation of that resentment and suspicion. That's not just an impressionistic assessment. My recent research offers empirical support for it.

Understanding and Measuring Anti-Intellectualism

Earlier this year, I developed and administered a survey in which I tried to tap into the elements of anti-intellectualism that appeared to be rising to the surface in Trump's rhetoric.  Following Hofstadter's lead, I attempted to come up with a battery of questions designed to elicit responses indicating whether or not someone exhibited the resentment and suspicion he described.

However, one of the biggest criticisms of Hofstadter's analysis is that anti-intellectualism is a rather amorphous concept that can manifest itself in a number of ways.  Twenty-five years ago, Daniel Rigney, a sociologist at St. Mary's University, dissected Hofstadter's discussion of the socio-cultural roots of American anti-intellectualism and identified three separate, but interrelated, dimensions:
  1. Populist Anti-Elitism – The belief that the values of intellect are, almost by definition, elitist in nature; that the educated classes are suspect, self-serving, and out-of-touch with the lives of “average Americans.”
  2. Unreflective Instrumentalism – The belief that the value of education is primarily found in the immediate, practical end of job training, spurning the more abstract notions of expanding one’s horizons and developing a deeper understanding of the human condition. 
  3. Religious Anti-Rationalism – The belief that science and rationality are emotionally sterile and promote relativism by challenging the sanctity of absolute beliefs. 
As relevant as these dimensions may be, it seems to me that they make up an incomplete list, particularly in the modern political context that has evolved since Rigney's analysis. In particular, a suspicion of science and those who engage in it need not necessarily be rooted in the centuries-old struggle between religion and science. As evidence of that claim, I bring up the research of yet another sociologist, Gordon Gauchat. In 2012, Gauchat documented the decline of public confidence in the scientific community over the past four decades, particularly among those identifying themselves as politically conservative. While he demonstrates that the lack of confidence in science is significantly correlated with religiosity (religiosity is often a matter not of what religion people believe but of how much they practice it, which is why it is frequently measured as frequency of church attendance), he also showed fairly clearly that such skepticism of science is not simply due to the fact that religion and science are often at odds with each other.

While opposition to such things as teaching evolution in public schools can logically be connected to religiosity, the same cannot necessarily be said about other issues where public perceptions are often at odds with the conclusions of the scientific community.  Battles over climate change and the safety of vaccinations and genetically modified organisms in food come to mind.  Opposition to the evidence and conclusions presented by the scientific community on these issues seems less likely to be rooted in religious objections than in a related, but distinct, fourth dimension of anti-intellectualism:
  4. Anti-Scientific Skepticism – The belief that science, and especially those who practice it, are motivated by biases (political or otherwise) that render their findings and conclusions suspect, not on religious grounds but likely through a lack of scientific understanding, motivated reasoning, or a combination of both.
From that basic framework, I developed a battery of Likert-type survey questions for each construct (in non-nerd speak: I presented respondents with statements like those below and asked them to indicate, on a 5-point scale, whether they strongly disagreed (1) or strongly agreed (5) with each statement).  Examples of the items in each dimension are presented below:
A lot of problems in today’s society could be solved if we listened more to average citizens than we did to so-called experts. [Populist Anti-Elitism]

Universities and colleges place too much emphasis on subjects like Philosophy and the Arts and not enough on practical job training. [Unreflective Instrumentalism]

Often it seems that scientists care more about undermining people’s beliefs than actually solving problems. [Religious Anti-Rationalism]

Science has created more problems for society than it has solved. [Anti-Scientific Skepticism]

Anti-Intellectualism in 2016

With these questions and others like them, I created four anti-intellectualism scales (one for each dimension) by taking the average score across the items within each dimension.  I administered these questions, along with a series of other questions about the campaigns and the candidates running this year, to a national sample of 1,220 Americans from June through August of this year.
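As a concrete illustration of the scale construction, here's a minimal sketch. The item names are placeholders I've invented for illustration, not the actual questionnaire variables:

```python
import pandas as pd

survey = pd.read_csv("survey.csv")  # hypothetical file and column names

# Each scale is the mean of its 1-5 Likert items, so each scale also runs
# from 1 (least anti-intellectual) to 5 (most)
dimensions = {
    "anti_elitism":     ["pae_1", "pae_2", "pae_3"],
    "instrumentalism":  ["ui_1", "ui_2", "ui_3"],
    "anti_rationalism": ["rar_1", "rar_2", "rar_3"],
    "anti_science":     ["sci_1", "sci_2", "sci_3"],
}
for scale, items in dimensions.items():
    survey[scale] = survey[items].mean(axis=1)
```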

The results seem to confirm that Donald Trump's anti-intellectual/anti-establishment rhetoric appeared to strike a chord with a number of individuals. As the figure below shows, Trump supporters were significantly more likely than those supporting Clinton to hold anti-intellectual views on each dimension. (p < .001, which, in non-nerd speak, simply means that we can reasonably conclude that the differences we see here in the anti-intellectualism scores between Clinton and Trump supporters are real and not simply due to the fact that we only talked to 1,220 people).
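(For the statistically inclined: a group-mean difference like this is typically checked with a two-sample t-test. A minimal sketch, reusing the hypothetical variable names from above:)

```python
from scipy import stats

trump   = survey.loc[survey["vote_intent"] == "Trump", "anti_elitism"]
clinton = survey.loc[survey["vote_intent"] == "Clinton", "anti_elitism"]

# Welch's t-test (doesn't assume equal group variances); p < .001 means a
# gap this large would be extremely unlikely in a sample of 1,220 if the
# true difference were zero
t, p = stats.ttest_ind(trump, clinton, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4g}")
```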



Using Anti-Intellectualism to Predict How a Person Will Vote

I took this a step further to see if we could use the anti-intellectualism scores to estimate the probability of supporting Donald Trump while controlling for party identification.  To put it more simply: We know that Republicans are more likely to support Donald Trump than either Independents or Democrats. Does anti-intellectualism make Independents, and even Democrats, more likely to support Donald Trump? Are Republicans who score low on the anti-intellectualism scales less likely to support him than fellow Republicans who score higher?

To answer those questions, I did some (more) nerdy stuff. I ran a logistic regression with the vote intention question as the dependent variable and party identification and the anti-intellectualism scales as independent variables. Logistic regression is a statistical technique we nerds use when we want to see if certain variables (in this case, a person's party identification and their anti-intellectualism scores) can help us predict what someone is going to do or say when they're given two choices.  In this case, those two choices were either saying they were going to vote for Donald Trump or for Hillary Clinton. (Yes, I know there were other choices, but there weren't enough people in my survey who said they were planning on voting for Gary Johnson, Jill Stein, or someone else... so this works).
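In code, a regression along those lines might look like the sketch below, using the same hypothetical variable names as above; I'm not claiming this is the exact specification I ran:

```python
import statsmodels.formula.api as smf

# Keep only the two-candidate choice the model is about
two_way = survey[survey["vote_intent"].isin(["Trump", "Clinton"])].copy()
two_way["vote_trump"] = (two_way["vote_intent"] == "Trump").astype(int)

# Party identification enters as a categorical predictor alongside the
# four anti-intellectualism scales
logit = smf.logit(
    "vote_trump ~ C(party_id) + anti_elitism + instrumentalism"
    " + anti_rationalism + anti_science",
    data=two_way,
).fit()
print(logit.summary())
```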

Simply put, it's a more rigorous test of the influence of anti-intellectualism on a person's vote intention than the figure above.  When I ran all the variables together as predictors of intended vote choice, only one of the anti-intellectualism dimensions ended up being a significant predictor in addition to party identification: Populist Anti-Elitism.

The figure below shows how a person's party identification (whether they consider themselves a Democrat, Independent, or Republican) and their Populist Anti-Elitism score affect the likelihood that they would indicate they were planning on voting for Donald Trump.  Each line represents one party identification category. Moving to the right along a line indicates people with higher anti-intellectualism scores, and the higher the line, the more likely it is that they said they were planning on voting for Donald Trump.



As you can see, Republicans were much more likely to support Donald Trump than Independents, and even more so than Democrats.  That's not surprising; we would have expected that.  What's more interesting is the effect that anti-intellectualism has.  In all three groups (but mostly with Independents... just like we would expect), those with higher Populist Anti-Elitism scores were more likely to say they were planning on voting for Donald Trump than those scoring lower on the scale.

Simply put, anti-intellectualism mattered, even when we controlled for a person's party identification. Those exhibiting greater animosity and resentment towards the highly educated were significantly more likely to support Donald Trump over Hillary Clinton.

Putting These Findings Into Context

So, what does this all tell us about what happened this election?

It perhaps gives us some insight into why many of the high-profile renunciations of Donald Trump we heard during the campaign seemed to have so little effect.  Newspaper editors, national security officials, former Presidents, government officials, and conservative and liberal pundits alike lined up in their vocal and detailed opposition to a Trump presidency over the course of the campaign. However, most of these appeals appear to have fallen on deaf ears. Clearly, many of Trump's supporters felt they had lost their voice in the nation's political discourse (if they ever had it at all) and resented the way they've been talked down to, and about, by the "intellectual elite."

You may have seen memes circulating around the internet featuring a quote by noted science fiction author Isaac Asimov like the one below:



While it gets to the heart of Hofstadter's analysis of American culture that drove the research project I'm discussing here, this meme has always made me a little uncomfortable. Whether you agree with its sentiment or not, to those with less education it is likely to be seen as an arrogant criticism of "average Americans." It's an important lesson: If you want to convince someone that you have relevant and important information you feel they should know, insulting their intelligence is probably not the best way to preface your remarks. If they think you don't respect them, they're likely to return the favor.  It's probably why this survey item is so influential in my measure of Populist Anti-Elitism:

Highly educated people have an arrogant way about them.


The Obligatory Nerdy References Section

For the nerds reading this, or if you want to expand your nerdy credentials, here are the specific works I mentioned in this post:

Gauchat, Gordon. 2012. "Politicization of Science in the Public Sphere: A Study of Public Trust in the United States, 1974 to 2010." American Sociological Review 77: 167-187.

Hofstadter, Richard. 1963. Anti-Intellectualism in American Life. New York: Knopf.

Rigney, Daniel. 1991. "Three Kinds of Anti-Intellectualism: Rethinking Hofstadter." Sociological Inquiry 61: 434-451.

Sunday, November 13, 2016

The Trump Wave We Could Have Seen Coming (and sort of did)

Coulda, woulda, shoulda.

You may have noticed that this was a rough year for polling and election forecasting.

As I pointed out in my previous post, polls significantly underestimated Donald Trump's level of support, especially at the state level.  As a consequence, the short-range projections by a number of forecasters missed the mark and indicated that Clinton would likely win, and quite handily at that.

Of course, we know at this point that's not what happened.

The question that inevitably comes up in hindsight is, "What did we miss?"

The polling community is already trying to figure that out, at least from the perspective of how the polls could have been so wrong. But for those of us in the forecasting community who are dependent on the polls, that is of little consolation. Unless the data going into the model is accurate, whatever comes out won't be. Figuring out why the data was flawed still doesn't change the fact that we were wrong in our forecast.

But what if we could have seen the error coming? What if we had been able to incorporate that into our models? Would that have made a difference?

Well, as it turns out, there's evidence that maybe we could have seen it coming.  Not necessarily the structural failures in the polling itself, but the pro-Trump wave that the polling might have been missing.

I mentioned Helmut Norpoth in my previous post. Norpoth received a great deal of attention in the last weeks of the campaign for his forecast that showed Trump would win. He wasn't the only one. Alan Abramowitz is another forecaster whose model predicted a Trump victory. Many have pointed out that both Norpoth and Abramowitz were technically wrong in their forecasts because they predicted that Trump would win the popular vote, and Clinton is likely to be the popular vote winner.  Even so, their forecasts pointed in a direction that very few others did.

So, what did their models do that others' didn't?  All three (Norpoth has two: one that came out with a forecast not long after the 2012 election, and another that generated its forecast in February) share a very basic premise: It's just difficult for a party to hold on to the White House for a third consecutive term.

In my previous post, I mentioned another model that I created last year that also showed a potential Trump victory. Unfortunately, for reasons I explained in that post, I set it aside in favor of the one I've been using with Tom Holbrook for the last four elections. One key difference between the two is that the one I came up with also models what Abramowitz has called the "two-term penalty": support for the incumbent party just automatically goes down after two terms, as people are more likely to desire change.
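To make the idea concrete, here's a minimal sketch of what building a two-term penalty into a regression-based forecast could look like. The variable names and specification are illustrative only, not the actual model Tom and I use:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per election year: incumbent-party share of the two-party vote,
# a September trial-heat polling measure, an economic indicator, and a
# dummy set to 1 when the incumbent party has held the White House for
# two or more consecutive terms
elections = pd.read_csv("elections.csv")

ols = smf.ols(
    "inc_vote ~ sept_polls + economy + two_term_penalty",
    data=elections,
).fit()

# A negative coefficient on two_term_penalty is the "penalty": support for
# the incumbent party drops, all else equal, after two terms in power
print(ols.params)
```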

What that tells us, or should have, is that there was likely a core of increased support for the Republican candidate despite what the polls showed. Many of us refused to see it.  Abramowitz doubted his own forecast because of what he called the "Trump effect." To be sure, there does appear to be some evidence that Trump under-performed what another Republican might have been able to do. That may help explain why he failed to win the popular vote despite Norpoth and Abramowitz's forecasts that he would.

It got me thinking: What if we tweaked our model? What if I added a variable for the two-term penalty, just as I'd done in my Long-Range model? Would it have made a difference?

Yes... somewhat.

If we had used this model to forecast the outcome in place of our usual September model, we still would have gotten the outcome wrong, but it would have been much closer to what actually happened.  The table below shows the side-by-side comparison of the two projections, along with what will likely be the actual outcome (according to the unofficial results as they've been reported on November 13).

                            Original Model     ACTUAL OUTCOME     Revised Model
                            Clinton   Trump    Clinton   Trump    Clinton   Trump
Two-Party Popular Vote       52.1%    47.9%     51.1%    48.9%     51.0%    49.0%
Error                        +1.0%    -1.0%                        -0.1%    +0.1%
Electoral College Votes        326      212       232      306       278      260
Error                          +94      -94                         +46      -46
Clinton Win Probability      91.2%                                  68.1%

As you can see, this simple adaptation of the model would have generated a more accurate prediction of the national popular vote, coming within 1% of the actual result. As the official results get reported, we may very well find that the projection from this new model might even be closer. We still would have gotten the Electoral College vote incorrect because it shows Clinton winning more than 270 Electoral Votes, but it would have been closer to the actual result and the associated win probability would have been a more accurate representation of how close the election actually turned out to be. [Edit: The table has been changed to show the official results, which shows that both the original model and the revised model were very close to the actual popular vote result.]

This still shows that the model is quite vulnerable to the kind of pervasive error that plagued the polls this year, but with a simple addition we could have at least moderated some of its effects. It suggests there was support for Trump out there that the polls simply didn't catch, and that support was at least somewhat predictable. Once the official results come in, we'll most likely regenerate the model with the new adaptation and will use it to generate our forecast for 2020.  With the Republicans defending just a single term that year, the penalty won't have much of an effect. The first true test of this new model will come in 2024 at the earliest.

Come back then. I'll still be here, looking at numbers, because it's what I do.

Thursday, November 10, 2016

On the (Dis)Comfort of Numbers

I love numbers. I guess that's obvious given the title I chose for this blog.

Politics is a subject that is laden with subjectivity. People have their biases, and when they engage with the world they view it through the lens of those biases.  Political discussions, especially online, often devolve into over-simplified presentations of the opposing point of view, discounting it as simply the product of blind bias.  That latter part may be true most of the time, but it isn't always.

But that's why, as a Political Scientist, I take comfort in numbers. It's why my Political Analysis course is my favorite course to teach. It's why I started this blog: to try and explain to people who don't work with numbers what numbers can tell us about the political world.

Numbers are solid. Numbers are certain. Numbers can tell you things you would not otherwise know. Numbers can provide a sense of confidence in a position, or course of action.

Unless, of course, the numbers are flawed.

And that certainly seems to be the case with the polls leading up to the election on Tuesday.

Yes, the polls got it wrong.

And as a consequence, election forecast models, like ours, that relied upon the polls got it wrong.

As it turns out, national polls carry very little weight in our model, but state polls are quite important. - TPDN


I'll leave it up to those inside of the polling industry to figure out how they got it wrong and why, and I have confidence that they will.  The allegations that I saw hurled in the run-up to the election that pollsters were biased and deliberately cooking the numbers to show Clinton had more support than she really did are, simply put, ill-informed.

But it would also be wrong to say that those of us who work with numbers are unbiased.

We have biases, but they're not necessarily the one that people accuse us of.

I've been forecasting elections for many years now. Ours was actually one of the first to do what many of the most well-known models do now: generate a state-by-state Electoral College prediction. We first used it to generate a forecast for the 2000 election.  Like this one, that was an election that most of us forecasters got wrong.

But that was a different time. Election forecasting was a purely academic exercise within a relatively small corner of Political Science. Even within the discipline it was viewed by many as not really "science." I had colleagues question whether the publication in which we first presented our forecast model should even be considered "research."

My response then, as it still is today, was that it most definitely is scientific research. It is an attempt to apply the principles we've learned from decades of empirical Political Science research in an applied way. We're testing the explanations of voter behavior that we've gotten from that research to see if we can then predict what voters will do. Explanation and prediction are at the core of what science is all about.

So, getting the 2000 forecast wrong was a learning experience. Personally, it was humbling. I'd gone out on a limb: I'd publicly said what I thought was going to happen, and it didn't.

But it happened in a relatively small space. I announced our forecast at a small gathering of students and faculty at the small and relatively unknown regional state university where I worked at the time. It was a relatively low-risk move. Humbling, but not very embarrassing.

Tom Holbrook and I saw it as an opportunity to go back to the drawing board, as did all of the other forecasters who got 2000 wrong.  It offered up more data for us to make our models better, and we did.  We took that data, revised the model, and used it to generate an extremely accurate forecast in 2004, missing the national popular vote by less than half a percentage point.  Even so, there were issues: We got three states wrong. So we went back to the drawing board again to make it better.

We got the next two elections right. In 2008, we under-predicted Obama's level of support somewhat, and we got two states wrong: Indiana and Virginia.  That was a reasonable amount of error, we figured, but we took the data, updated the model, and moved forward.  In 2012, we were near perfect, getting all 50 states correct in the preliminary forecast based on September data, but getting Florida wrong in the final Election Eve forecast. (We felt somewhat vindicated by the fact that it took Florida a few days to finally decide.) We missed the popular vote by just over 1% with the September model, but by just under that with our final projection.

Even so, we made a minor adjustment (which, as a side note, wasn't the reason why we got it wrong this time. In fact, we would probably have been off by even more had we not made that adjustment), and went into 2016 with a great deal of confidence. In our tests of the model and seeing how it would have worked if we'd used it for previous elections, it was our best yet.

That confidence, of course, proved to be unfounded.

We didn't just get it wrong. We got it wrong by a lot.

We over-predicted Clinton's share of the popular vote by 2% and got five states wrong in our September forecast. We were even worse with our Election Eve forecast, getting six states wrong and missing the popular vote by an even wider margin. When the final analysis comes down, I won't be surprised if ours ends up being the worst-performing model.

We were right back to where we were in 2000.  But, of course, things are different now than they were back then.  Election forecasting is no longer a quaint cottage industry in Political Science. Thanks to folks like Nate Silver popularizing it, it's now a much more public "game" and we've seen an explosion in the number of models all trying to predict the outcome.

Compared to 538, the New York Times Upshot, Daily Kos, the Princeton Election Consortium, and the Huffington Post, Tom and I are relatively unknown outside the little election-forecasting community in Political Science. Even so, our miss was more public this time than in 2000. There's this thing called the internet that is much more ubiquitous now than it was then, and I worked to get our model noticed. (In retrospect, maybe I should have been quieter.) But even so, we've experienced nowhere near the level of vitriol and ridicule that others have seen. Natalie Jackson, the Political Science Ph.D. at the Huffington Post, has had to bear a good deal of it, and it just sickens me.

[Image: a tweet directed at Natalie Jackson, forecaster at the Huffington Post... WTH?]

For the first time since 2000, I made a public presentation of our forecast on the day it came out. It was to a somewhat larger group of students and faculty than the one in 2000, and the local press was there as well. I was more publicly on record with our prediction than in any other election.

The numbers failed us, and the failure was more public this time.

Our model relies very heavily on polling data, and the polls were quite a bit off this time. As I stated before, I'll leave the polling post-mortem to the pollsters. I've done polling, but I don't consider myself a pollster by any stretch of the imagination, so I'll let the people who do it for a living figure out what went wrong and how to fix it; they know better than I do.

As for Tom and me, we'll adjust like we always do. Science moves on, and failures are an opportunity to learn and improve. I've already got some ideas about what we can do to make the model better because, as it turns out, I think the polling error we saw might actually be predictable, for reasons I'll save for another post. I look forward to diving into the data and figuring it out.

But for me, right now, this failure is more personal. I had friends and colleagues who were, and are, incredibly worried about the outcome. They trusted me to tell them what was going to happen; to give them assurance that the "unthinkable" wasn't going to happen.

My bias in this is, and always has been, toward getting the forecast right. It's not ideological. I don't push a forecast because it predicts what I want it to predict. I push it because I have confidence in it. The numbers showed a Clinton win, and they showed it pretty convincingly. Reports now suggest that even those inside the Trump campaign were preparing for a loss because their internal numbers showed them the same thing.

But our numbers were more than just numbers to a lot of people. They provided certainty and comfort. All of a sudden, I became much more than a scientist working with numbers. I'd become a counselor, listening to people's fears and anxieties and attempting to comfort them with the numbers I'd come to trust.

It was a role that I was not entirely comfortable with.  Not because I didn't have confidence in the numbers, but because... WHAT IF we were wrong? I was going to let a lot of people down. They were trusting me, but I knew I couldn't control the outcome. I could only tell them what the numbers said.

My discomfort reached a high point when Nate Silver's win probability started to diverge significantly from the rest of the pack a week or two before the election. Perhaps unbeknownst to most outside the forecasting community and those who follow it closely, a rigorous debate ensued over methodological issues: what the win probability represents, and what we should actually infer from it.
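The crux of that debate is easy to illustrate. The toy Monte Carlo below (all margins and the "safe" electoral-vote base are invented for the example; this is not Silver's model or ours) shows how assuming a shared national polling error, rather than independent state-by-state errors, raises the underdog's chances:

```python
import random

# Toy Monte Carlo illustrating the crux of the debate: if state polling
# errors share a national component, the trailing candidate's win
# probability is much higher than if each state misses independently.
# Margins and electoral votes here are hypothetical.

states = {  # state: (Clinton polling margin in points, electoral votes)
    "CO": (5.0, 9), "NV": (2.0, 6), "FL": (1.0, 29),
    "OH": (0.5, 18), "IA": (0.3, 6), "NC": (0.1, 15),
}
SAFE_CLINTON_EV = 264  # EVs assumed safe for Clinton in this toy setup

def clinton_win_prob(shared_sd, state_sd, trials=100_000):
    wins = 0
    for _ in range(trials):
        national_err = random.gauss(0, shared_sd)  # hits every state alike
        ev = SAFE_CLINTON_EV
        for margin, votes in states.values():
            if margin + national_err + random.gauss(0, state_sd) > 0:
                ev += votes
        wins += ev >= 270
    return wins / trials

print("independent state errors:", clinton_win_prob(0.0, 3.0))
print("shared national error:   ", clinton_win_prob(3.0, 2.0))
```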

For the most part, I stayed out of the fray. I had our numbers and I had confidence in them. Yet I really couldn't deny the methodological soundness of some of the arguments Silver was making. By that point, though, I was set in my role as Election Therapist, and I knew that openly acknowledging the growing uncertainty would upset a lot of the people who had been relying upon me for comfort. So I took on their anxiety for them.

That was really stupid.

I'd let biases get in the way of scientific objectivity. Still, though, the bias wasn't ideological. It wasn't that I wanted Clinton to win and therefore wanted to push that narrative. A good part of the bias was simply that I knew friends and family were going to be upset if she didn't win. They were worried, and I didn't want them to worry more.

But beyond that, and enabling it, was a bigger bias: toward the numbers and the confidence I had in them. I had a sound, empirical reason to believe our numbers were right: they'd been right before, and they were similar to the numbers many others had as well. I even privately consulted other forecasters, asking how they felt about it. They shared my concern that the projections might be a bit too bold. But there, in the end, was validation.

I took comfort in the herd.

Of course, now we know that comfort was ill-placed, and hindsight is 20/20. But now that the dust has settled, I can see that there were signs I missed simply because I took comfort in the numbers of the herd.

You see, Tom's and my forecast isn't the only one I do. A year ago, I developed another one on my own. It's partly based on the one we use, but it was an attempt on my part to push the envelope on forecast lead-time. One of the drawbacks of our model is its comparatively short lead-time: because of limits on when the data for the variables we use become available, our forecast comes out just one month before the election.

To the consumers of popular election forecasting that's probably not a big deal, but to those within the academic forecasting community it is. Most models in the political science and forecasting literature come out two or three months in advance, and some even earlier. So, in that regard, ours comes pretty late. What I was really interested in was testing how far I could push it, so I started putting a model together in the Fall of 2015 and called it my "Long-Range" model. It was pretty simple, but it was unique in that it did something no one else did: generate a state-by-state Electoral College and popular vote forecast a year before the election, months before we even knew who the nominees were. If you're interested, you can look at it here: State Electoral Histories, Regime Age, and Long-Range Presidential Election Forecasts: Predicting the 2016 Presidential Election

I presented the first prediction from it, which I updated monthly after that, at a small conference at the end of October: The Iowa Conference of Presidential Politics. The conference was so small that my presentation was made to an "audience" of five people, all sitting around a study table in the Library Reference Room at Dordt College in Sioux Center, Iowa. The audience was made up mostly of other academics who were also there to present papers, none of them forecasting papers.

That first prediction showed a strong likelihood that the Republican candidate would win. Given that the nominations hadn't happened yet, I had to generate a matchup matrix of possible candidates.  Here is that first matrix:

You can see that it indicated potential trouble for the Democrats, no matter who they nominated. When you look at the matchup between the two eventual nominees, it showed a very close race, but one that Donald Trump had a 75% chance of winning.

The comments I got from the discussant and the rest of the audience were incredulous: "Donald Trump? Really? How could that be? He doesn't even have any experience," they said. Similarly, I had colleagues who scoffed at the finding that Ben Carson had the best chance of winning. Mind you, this was not projecting who would win the nomination, just who would win if nominated. I was equally incredulous, but like many, I still figured it would be Bush or Rubio in the end.

Nonetheless, I continued to generate my monthly forecasts, whittling down the matrix as candidates dropped out. The final forecast, which I presented at the American Political Science Association Conference in September, gave Trump a 93.8% probability of winning. It, too, was met with smirks.

Why? Because at that point the consensus had pretty much built up among forecasters that Clinton was going to win. Even those who had models that predicted a Trump victory hedged a bit, some offering explanations of why they were probably going to be wrong.  

I followed the comfort of the herd.

I had a strong methodological reason to do so. There is strong research showing that the herd is usually right. If my model was wildly out of step with the herd, which it certainly was, then my model was probably the one that was wrong, I figured.
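The intuition behind that research can be shown with a toy simulation: the average of several noisy but unbiased forecasts tends to have a smaller error than the typical individual forecast. The numbers below are invented purely for the demonstration.

```python
import random
import statistics

# Toy illustration of the "herd" logic: the average of several noisy,
# unbiased forecasts usually beats the typical individual forecast.
# All numbers here are invented for the demonstration.

TRUTH = 51.0  # pretend true Democratic two-party vote share

def trial(n_forecasters=8, noise_sd=2.0):
    forecasts = [TRUTH + random.gauss(0, noise_sd) for _ in range(n_forecasters)]
    ensemble_error = abs(statistics.mean(forecasts) - TRUTH)
    typical_error = statistics.mean(abs(f - TRUTH) for f in forecasts)
    return ensemble_error, typical_error

results = [trial() for _ in range(10_000)]
print("average ensemble error:  ", round(statistics.mean(e for e, _ in results), 2))
print("average individual error:", round(statistics.mean(t for _, t in results), 2))
```

Of course, the averaging only cancels error that is independent across forecasters; when everyone's numbers rest on the same flawed polls, the herd moves together.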

So when the end of September came, I set aside the Long-Range Forecast and focused on the tried-and-true model I had been using with Tom Holbrook for years. On October 3, I announced to the University community and to the local press that Hillary Clinton had a 90% probability of winning the election. I was comfortable with that. I was well within the herd, and my Democratic friends were relieved that I'd stopped saying Trump was going to win. Some of my Republican friends weren't happy, but they didn't really challenge it; I think many of them believed he wouldn't win either. Many had resigned themselves to the idea that Clinton would win, and outcome-expectation polls, which have been shown to be highly predictive of the outcome in the past, confirmed that belief. Furthermore, let's face it, I live in Utah. This isn't really what one would consider "Trump Country." Most of them didn't want Trump to be President either.

But of course, the numbers, the expectations, and the herd proved to be wrong. The forecasters who got it right, notably Helmut Norpoth and Allan Lichtman, preserved their decades-long streaks with their models. I'd been asked about them in the weeks leading up to the election, and I dismissed them as outliers. Their streaks were about to be broken, I said, because the data clearly pointed in another direction. If I'd had more faith in myself, and in the numbers from my Long-Range model, I would be writing a very different post here.

But I take this now, as should everyone who relies on data, as a simple but important reminder:

Sometimes the data, and the herd, can be wrong.  

Nate Silver was right. He got his forecast wrong like most of us did, but he was absolutely right to point out that there was a lot more uncertainty in the numbers than many of us were saying. It's something that those of us who work with data need to remember: the result is only as good as the data that goes into it. It is incumbent upon each of us who do this to hold onto a healthy respect for that fact as we go back to the drawing board, figure out what went wrong, fix it, and try again.

Just like we always do, because that's how science works. And we'll leave the therapy work to the therapists.

Tuesday, November 8, 2016

The Paths to 270 - What To Watch on Election Night

The day is here. Now we get to see if all the speculation and data crunching we all went through actually gave us an accurate picture of the eventual outcome. As you watch the election results this evening, the real question is "When will we know who will win?"

That's difficult to say for certain, but you can at least get a sense of which results will give an early indication of what is going to happen as the night goes on. I've tried to lay out here what I think are the most likely scenarios based on our forecast, and when we'll start to see indications of whether or not it's going to go the way we think.

The table below lays out the Electoral College landscape based on the state win probabilities generated by the latest run of Tom Holbrook's and my election forecast model. It presents what our model suggests are the most likely paths that either Hillary Clinton or Donald Trump will need to take to get to the 270 Electoral Votes necessary to win the presidency. Things have not changed significantly since my last post, so Trump's challenge remains significant.

The states are arranged in order of the probability that the model predicts they will be won by Secretary Clinton. Based on these results, it is easy to see that Donald Trump's path to 270 is a little more challenging than Hillary Clinton's. The model suggests that Colorado is the key tipping-point state, the state that will put either candidate over the top in the Electoral College. The good news for Hillary Clinton is that the model gives her an 85.2% probability of winning that state. The model suggests that if she can win there, she can win the White House even while losing what I'm calling The Key Six battleground states: Florida, Iowa, Nevada, North Carolina, Ohio, and Virginia.
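The tipping-point logic itself is mechanical enough to sketch in a few lines of Python, using the win probabilities and electoral votes from the table at the end of this post (this is just the bookkeeping, not the forecast model):

```python
# Sketch of the tipping-point calculation: sort states from most to
# least likely for Clinton, accumulate electoral votes on top of her
# safer states, and the state that crosses 270 is the tipping point.
# Probabilities and EV counts come from the table below.

states = [  # (state, electoral votes, Clinton win probability)
    ("Virginia", 13, 0.910),
    ("Colorado", 9, 0.852),
    ("Nevada", 6, 0.761),
    ("Florida", 29, 0.697),
    ("Ohio", 18, 0.565),
    ("Iowa", 6, 0.541),
    ("North Carolina", 15, 0.512),
]
cumulative = 251  # Clinton's cumulative EV through New Hampshire

for state, ev, p in sorted(states, key=lambda s: s[2], reverse=True):
    cumulative += ev
    if cumulative >= 270:
        print(f"Tipping point: {state} ({cumulative} EV, {p:.1%} win probability)")
        break
```

Run on these numbers, it lands on Colorado at 273 electoral votes, which is exactly the scenario described above.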

For Trump, however, the options are more limited. Not only does he need to win all the states the model projects he will win, but also the six gray-zone states the model suggests will be won by Secretary Clinton. If Trump loses even one of those six, winning becomes substantially more difficult: he would then have to pick off states the model places more firmly in Clinton's column.

States to Watch, and When

7:00 PM EST

Polls in New Hampshire and Virginia close at this time, and they may give us an early indication of how the night will go. We've got Virginia at a 91% probability of being won by Clinton. If it is close and drags long into the night, it will suggest that the model may have significantly underestimated Donald Trump's level of support, and that could spell real trouble for Hillary Clinton.

New Hampshire is an interesting piece of the puzzle. We give Clinton a 91.6% probability of winning the state (and we've never had a state prediction be wrong when its probability has been over 90%), but recent polls there suggested it might be closer than it had been earlier. If it turns out the model is wrong and Trump actually wins New Hampshire, it could open up more opportunity for him. Without New Hampshire, Clinton would need to win not only Colorado but also another state in the gray zone, while not losing any of the other states we consider likely to be in her column.

On the flip side, Georgia could also be an early indicator. We give Trump an 87.6% chance of winning there, but a few weeks ago it was being discussed as creeping in Clinton's direction. It doesn't look like Clinton will pull off an upset there, but like Virginia, if it ends up being too close to call for very long tonight, it may suggest that Clinton is headed for an even larger victory than we are expecting.

7:30 PM EST

When the polls close in North Carolina and Ohio, we should get a very good indication of Trump's chances of winning. We have both of them in Clinton's column, but only marginally so. According to our model, these are the two states from The Key Six in the gray zone that Trump has the greatest chance of picking off. If he can pick up BOTH of them, his chances of winning go up significantly. Look to see if they are early calls in either direction. I don't expect them to be, but if they are, it's really good news for whichever candidate they're called for. Trump has a slight edge in the Ohio polling average we use for our model, but Ohio's history and Clinton's lead in the national polls keep it ever so slightly in her column for us. So Ohio, for us, is truly a tossup.

8:00 PM EST

Polls in all of the counties in Florida will close at this time, as do those in Pennsylvania and Michigan. Without Florida, it's hard to see how Trump gets across the finish line unless he wins both Michigan and Pennsylvania. We've got Clinton winning all three, but Florida is expected to be close, as it always is. Trump hit Michigan and Pennsylvania hard at the end, and polls have been tightening in both states, more so in Michigan. Don't expect Florida to be called early. But if Michigan and Pennsylvania are called early, they will most likely be called for Clinton, and that is very bad news for Trump: it will likely mean his "Rust Belt" strategy has failed, in which case he will NEED Florida.

9:00 PM EST

Colorado and Wisconsin will be key at this time. We've got Wisconsin firmly in Clinton's column at 93.9%, but it could be Trump's last gasp at a Rust Belt strategy, especially if he loses Florida. Then again, if he's winning Wisconsin, he's probably got Florida in hand as well. Colorado is the bigger question. As we pointed out, it's the tipping point: the key state that gets Clinton to 270. That, of course, assumes she wins all the other states we give her a higher probability of winning, especially Michigan and New Hampshire. If we're still waiting to see what Colorado will do, that's probably good news for Trump.

Arizona could also be a sleeper. We've got it in Trump's column, with a 79.6% probability that he wins there, but it's gotten some attention as a red state that Clinton could flip. I don't expect that to happen, but watch what happens here: if it takes a while for a call to be made, that could be an indication of a Clinton landslide. Even so, we'll probably see indications of that even earlier, especially if Georgia goes her way in the evening.

10:00 PM EST

Iowa and Nevada are the last of The Key Six to close. We've got Clinton winning both, but Iowa is the one most in doubt. Trump had a slight edge in the polling average there but, as with Ohio, our model gives the edge to Clinton due to the history and national-polls variables. A bigger issue will be Nevada. It's possible we'll already know whether Trump can get to 270 by the time the polls close in Nevada, but if not, it could be key.

Clinton's cumulative electoral votes accumulate from the top of the table (her safest states); Trump's accumulate from the bottom (his safest states). The win probability shown is Clinton's.

| State | Electoral Votes | Clinton Cumulative EV | Clinton Win Probability | Trump Cumulative EV | Electoral Votes | State |
|---|---|---|---|---|---|---|
| DC | 3 | 3 | 100% | | | |
| Vermont | 3 | 6 | 100% | | | |
| Hawaii | 4 | 10 | 100% | | | |
| Massachusetts | 11 | 21 | 100% | | | |
| California | 55 | 76 | 100% | | | |
| New York | 29 | 105 | 100% | | | |
| Maryland | 10 | 115 | 100% | | | |
| Rhode Island | 4 | 119 | 100% | | | |
| Illinois | 20 | 139 | 100% | | | |
| New Jersey | 14 | 153 | 100% | | | |
| Connecticut | 7 | 160 | 100% | | | |
| Delaware | 3 | 163 | 100% | | | |
| Washington | 12 | 175 | 99.9% | | | |
| Maine | 4 | 179 | 99.8% | | | |
| Oregon | 7 | 186 | 99.6% | | | |
| Michigan | 16 | 202 | 97.7% | | | |
| New Mexico | 5 | 207 | 97.1% | | | |
| Minnesota | 10 | 217 | 96.9% | | | |
| Wisconsin | 10 | 227 | 93.9% | | | |
| Pennsylvania | 20 | 247 | 92.6% | | | |
| New Hampshire | 4 | 251 | 91.6% | | | |
| Virginia | 13 | 264 | 91.0% | | | |
| Colorado | 9 | 273 | 85.2% | 274 | 9 | Colorado |
| Nevada | 6 | 279 | 76.1% | 265 | 6 | Nevada |
| Florida | 29 | 308 | 69.7% | 259 | 29 | Florida |
| Ohio | 18 | 326 | 56.5% | 230 | 18 | Ohio |
| Iowa | 6 | 332 | 54.1% | 212 | 6 | Iowa |
| North Carolina | 15 | 347 | 51.2% | 206 | 15 | North Carolina |
| | | | 20.4% | 191 | 11 | Arizona |
| | | | 12.4% | 180 | 16 | Georgia |
| | | | 4.1% | 164 | 10 | Missouri |
| | | | 2.1% | 154 | 9 | South Carolina |
| | | | 0.5% | 145 | 11 | Indiana |
| | | | 0.3% | 134 | 38 | Texas |
| | | | 0.1% | 96 | 6 | Mississippi |
| | | | 0.1% | 90 | 3 | Alaska |
| | | | 0.1% | 87 | 11 | Tennessee |
| | | | 0.0% | 76 | 8 | Louisiana |
| | | | 0.0% | 68 | 3 | Montana |
| | | | 0.0% | 65 | 6 | Kansas |
| | | | 0.0% | 59 | 3 | South Dakota |
| | | | 0.0% | 56 | 6 | Arkansas |
| | | | 0.0% | 50 | 9 | Alabama |
| | | | 0.0% | 41 | 8 | Kentucky |
| | | | 0.0% | 33 | 3 | North Dakota |
| | | | 0.0% | 30 | 5 | Nebraska |
| | | | 0.0% | 25 | 5 | West Virginia |
| | | | 0.0% | 20 | 6 | Utah |
| | | | 0.0% | 14 | 7 | Oklahoma |
| | | | 0.0% | 7 | 4 | Idaho |
| | | | 0.0% | 3 | 3 | Wyoming |