One of the ongoing themes here since the 2016 campaign has been the role of "valence." Game theorists, myself included, like to write about how voters combine policy preferences with preferences over "valence" characteristics, which are the traits voters supposedly just want intrinsically, like competence and honesty.
Stop laughing.
This all goes back to a 1963 paper by Donald Stokes called "Spatial Models of Party Competition." Stokes argued that there are positional issues, where voters disagree about the outcomes they wish to achieve, and valence issues, where voters agree about the outcomes they want, like a strong economy, but disagree about how to get there. That got morphed into valence "traits": the characteristics voters just want in a candidate, like competence and honesty. Sometimes scholarship is a big game of "telephone."
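For readers who want to see what "combining" policy and valence actually looks like, here is the textbook sketch (my notation, not Stokes's): a voter with ideal point x_i evaluating candidate c, who takes position x_c and has valence v_c, gets utility

u_i(c) = -(x_i - x_c)^2 + v_c

and votes for whichever candidate gives the higher number. The valence term v_c is why a candidate seen as more competent or more honest can win even while standing somewhat farther from the voter on policy.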
In yesterday's post, I wrote about how incumbents can be punished for natural disasters, and competence is relevant here because a competent incumbent may actually respond more effectively to a natural disaster, mitigating the effect. George W. Bush was criticized for not responding effectively to Hurricane Katrina, in part because he did not have competent people in place at FEMA. Instead, he had Michael Brown in charge. Now, it's Trump's turn to face a brutal hurricane, with Republicans like Senator Bob Corker acknowledging that Trump just isn't competent. We'll have to wait to see how this plays out, though.
Of course, we all want to think that we are competent at everything, but many of us are prone to overestimating our own competence. We call this the "Dunning-Kruger effect," based on a study called "Unskilled and Unaware of It," by Justin Kruger and David Dunning. Basically, if you don't know what you are doing, you don't know how to assess whether or not you know what you are doing. That's the short version of why nearly everybody thinks they are a better-than-average driver, even though it is mathematically impossible for everybody to be better than average.
So, how do you know when you are good at something? You need external measures. When those external measures show success... great! Focus on improving what is working. When those external measures show failure... change something. Everybody screws up. Nobody is great at everything, and that is why we need external measures.
So, completely hypothetically, let's say you enter a new environment. Let's say that in your previous environment, you had reason to think that you had been successful, perhaps even based on external measures, but this new environment is one in which things are different in many ways. You don't necessarily know what to expect. You may think you do, but you don't, and your previous skills may not necessarily translate.
1) Don't ignore external measures. If you don't seem to be succeeding (or, "winning"), maybe your previous skills aren't translating.
2) That means course-corrections are in order. Everyone fails once in a while. The difference between the people who ultimately succeed and those who don't is that the former learn from their mistakes. The true cases of Dunning-Kruger are the ones who either fail to recognize their failure or attribute it to something other than their own screw-ups. Very few people start out competent. You have to work at it, and that requires course-corrections.
3) Recognizing when to do that requires external information. One of the worst things you can do when you enter a new environment, then, is to close yourself off from any criticism in a little cocoon where you either never hear criticism or have your buddies tell you that any criticism is invalid. Some criticism is invalid. Not all is, and you need to learn from the valid criticism so that you can make course-corrections.
Nope, no political references there!
Anyway, welcome to a new (or first) year of college! Or, to other readers, never mind.