Wednesday, September 28, 2011

Overconfidence Trumps Accuracy?

This article in last week's Globe and Mail, written by Wency Leung, may be of interest to some of you. It's an interesting study, but also one that raises quite a few questions about the research design, quantitative approach, and conclusions drawn. How would/should we operationalize "overconfidence," I wonder? Is it simply a matter of believing in oneself even when wrong? Is it truly a category exclusive of accuracy? How are the outcomes deemed successful or not? What do you guys think?

Here's an excerpt:
The researchers, from the University of Edinburgh and the University of California, San Diego, simulated the effects of overconfidence over generations using a mathematical model. They compared the outcomes of overconfident strategies against accurate and under-confident strategies, and found that being overconfident frequently works to one’s advantage, as long as the rewards outweigh the risks. People with unbiased, accurate perceptions, on the other hand, usually fare worse, UPI says.

The research suggests that over time, natural selection favours those who have an overly positive self-image over those who are insecure.
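The article doesn't spell out the researchers' actual model, but the basic logic — overconfident, accurate, and under-confident strategies competing for a resource, with overconfidence paying off as long as rewards outweigh the costs of conflict — can be made concrete with a deliberately simplified toy simulation. Everything specific here is my own assumption, not the published model: abilities drawn uniformly at random, agents misjudging only their own ability by a fixed bias, and the conflict cost split between fighters.

```python
import random
from collections import defaultdict

def contest(a1, b1, a2, b2, r, c):
    """One pairwise contest over a resource worth r.

    Toy assumption: each agent perceives its OWN ability with bias b
    but sees the opponent's true ability. An agent claims the resource
    if it believes itself stronger; if both claim, they fight -- the
    truly stronger agent takes r, and each side pays c/2 in costs.
    """
    claim1 = a1 + b1 > a2
    claim2 = a2 + b2 > a1
    if claim1 and claim2:
        win1 = a1 > a2
        return (r - c / 2 if win1 else -c / 2,
                r - c / 2 if not win1 else -c / 2)
    if claim1:
        return r, 0.0          # uncontested claim
    if claim2:
        return 0.0, r
    return 0.0, 0.0            # nobody claims

def mean_payoffs(biases, r=2.0, c=1.0, trials=60000, seed=1):
    """Average payoff per strategy over many random pairings."""
    rng = random.Random(seed)
    totals, counts = defaultdict(float), defaultdict(int)
    for _ in range(trials):
        b1, b2 = rng.choice(biases), rng.choice(biases)
        a1, a2 = rng.random(), rng.random()
        p1, p2 = contest(a1, b1, a2, b2, r, c)
        totals[b1] += p1; counts[b1] += 1
        totals[b2] += p2; counts[b2] += 1
    return {b: totals[b] / counts[b] for b in biases}

# biases: over-confident (+0.3), accurate (0.0), under-confident (-0.3)
payoffs = mean_payoffs([0.3, 0.0, -0.3])
```

In this toy setup, with reward r = 2 and conflict cost c = 1, the overconfident agents average the highest payoff: they claim (and often win) resources that accurate agents concede. Raise the cost well above the reward (say c = 6) and the unbiased agents come out ahead instead, which at least echoes the "as long as the rewards outweigh the risks" caveat in the excerpt.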


  1. This doesn't sound like a quantitative research project. It sounds like a computer simulation. While humans may not be logical, they are subject to logic. Success has the same rewards, whether due to skill or luck. Failure is always bad, even if you predicted it. It is still an interesting idea. I certainly have plenty of anecdotal experience that confirms it. I think it would be interesting to run a human research study and compare the results with those of a computer simulation.

  2. Good point - I wasn't too sure how to describe it. It's a simulation, but the findings also seem to have been generated through a sophisticated form of statistical analysis. And while the data came from measuring simulated behavior and outcomes, those were surely grounded in social research (findings or theories)... I'd assume the behaviors and outcomes were measured in a pretty reductive yet formalized way (quantifying success and confidence). What would you call it? Perhaps just "experiment" would do? But yes, I agree - really interesting. One of the case study/recommended articles later in the semester (Lofgren, E. T., & Fefferman, N. H. (2007). The untapped potential of virtual game worlds to shed light on real world epidemics. The Lancet Infectious Diseases, 7(9), 625–629) looks at using real people interacting in a simulated/virtual environment to predict behavior and outcomes, which would add a possible third option. Fascinating stuff!