[UPDATE: My phrasing of the problem to turkers is poor. I believe it is important to the problem that the player knows they'll always be shown an empty box that they didn't choose after making their first guess. If the host opens one of the two remaining boxes at random, possibly revealing the prize, then I think the odds of getting the prize by switching are in fact 50%. My phrasing does not make this clear. Alas.]
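The difference between the two host behaviors is easy to check with a quick simulation. This is just a sketch to illustrate the point, not part of the original experiment; in the random-host case, trials where the prize is revealed are discarded, since the puzzle as stated assumes the opened box is empty.

```python
import random

def switch_win_rate(host_knows, trials=100_000, seed=0):
    """Estimate P(win by switching), given that the opened box was empty."""
    rng = random.Random(seed)
    wins = valid = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        guess = rng.randrange(3)
        others = [b for b in range(3) if b != guess]
        if host_knows:
            # Standard Monty Hall: host deliberately opens an empty box.
            opened = next(b for b in others if b != prize)
        else:
            # Host opens one of the other two boxes at random.
            opened = rng.choice(others)
            if opened == prize:
                continue  # prize revealed; discard this trial
        valid += 1
        switched = next(b for b in others if b != opened)
        wins += (switched == prize)
    return wins / valid

print(switch_win_rate(host_knows=True))   # ≈ 0.667
print(switch_win_rate(host_knows=False))  # ≈ 0.5
```

With a knowing host, switching wins whenever the first guess was wrong (2/3 of the time); with a random host, conditioning on an empty reveal drops that to 1/2.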
I asked 100 turkers to answer the Monty Hall problem. Most of them got it wrong. However, similar to the Bayesian truth serum post, I also asked turkers to say how they thought other people would answer it, and this showed some promise of helping identify the correct answer.
The exact question was this:
Imagine there is a prize in one of three boxes. You choose one of the boxes, and then someone opens one of the other two boxes, revealing nothing inside. You then switch your guess to the other unopened box.
What percent chance do you have of this being the box with the prize? [ ]%
What answer do you think most other people will give? [ ]%

58 people answered 50%, whereas the correct answer is two-thirds. The next most common answer was 100%. (You can see the raw results here.)
However, if we only look at people who correctly predicted how most other people would answer, i.e., people who answered 50% for the second question, then we get 48 people saying 50%, and the next most common answer is 66%. Three people gave this answer, and four more gave answers like 66.66%, 60%, and 65%.
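The filtering step above is simple to express in code. This sketch uses hypothetical (answer, predicted-majority-answer) pairs, not the real survey data: find the majority answer, keep only respondents who predicted it, and look at what that subgroup answered.

```python
from collections import Counter

# Hypothetical (answer, predicted_majority) pairs -- not the raw survey data.
responses = [
    (50, 50), (50, 50), (100, 100), (66, 50), (50, 50),
    (66.66, 50), (100, 50), (50, 100), (60, 50), (50, 50),
]

# The most common first answer across all respondents.
majority = Counter(a for a, _ in responses).most_common(1)[0][0]

# Keep only respondents who correctly predicted the majority answer.
aware = [a for a, pred in responses if pred == majority]
print(Counter(aware).most_common())
```

Among the "aware" subgroup, minority answers like 66% stand out as candidates worth investigating, which mirrors what happened with the real data.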
Hence, it seems like the Bayesian truth serum idea may hold promise for extracting the correct answer from a crowd on tricky questions where most people get it wrong. At least, it may give us good reason to investigate a non-majority answer.
This idea came up while talking with Christopher Lin at the University of Washington about Mechanical Turk experiments. He was interested in the problem of getting a correct answer from a crowd when the majority of people provide the wrong answer, e.g., when a problem is tricky and most people fall for the trick. Yu-An Sun has done some work on this, suggesting the idea of asking for answers in one pass, and then asking people to select from those answers in another pass. The idea is that people may not be tricked as easily if they have options to choose from, since seeing the correct answer may in some way unveil the trick.
I mentioned a similar problem: how do you know if people are lying when it comes to subjective questions, where there is no ground truth at all? I said that the only traction I've encountered on that problem is the Bayesian truth serum trick of asking two questions: "what is your opinion?" and "what do you think most other people's opinion will be?".
Christopher then suggested applying this to the brain teaser problem: asking people for their answer on a brain teaser, and also asking them how they think other people will answer it. The idea is that you might identify the "correct minority": people who give a different answer, but correctly predict the way in which most people will get it wrong, i.e., showing that they are aware of the trick that most people will fall prey to.