Eliciting Expertise without Verification

02/22/2018
by Yuqing Kong, et al.

A central question in crowdsourcing is how to elicit expertise from agents. This becomes even harder when answers cannot be directly verified. A key challenge is that sophisticated agents may strategically withhold effort or information when they believe their payoff will be based on comparison with other agents, whose reports will likely omit that information due to lack of effort or expertise. Our work defines a natural model for this setting, based on the assumption that more sophisticated agents know the beliefs of less sophisticated agents, and provides a mechanism design framework for it. From this framework, we design several novel mechanisms, for both the single-question and multiple-question settings, that (1) encourage agents to invest effort and report their information honestly, and (2) output a correct "hierarchy" of the information when agents are rational.
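To make the model assumption concrete, here is a minimal toy sketch (not the paper's formal mechanism; the class and field names are illustrative assumptions) of a population in which a more sophisticated agent can reproduce the beliefs of less sophisticated agents, but not the other way around:

```python
# Hypothetical illustration of the belief-hierarchy assumption from the abstract:
# an agent at a higher sophistication level knows the beliefs of agents at
# strictly lower levels, while less sophisticated agents cannot predict the
# beliefs of more sophisticated ones.

class Agent:
    def __init__(self, level, belief):
        self.level = level      # sophistication level (higher = more expert)
        self.belief = belief    # posterior probability that a binary world state is 1

    def can_predict(self, other):
        # Key modeling assumption: prediction only runs "downward" in sophistication.
        return self.level > other.level

    def predicted_belief(self, other):
        return other.belief if self.can_predict(other) else None

# Toy population: a novice with a coarse prior and an expert who has
# invested effort and refined the posterior.
novice = Agent(level=0, belief=0.5)
expert = Agent(level=1, belief=0.9)

print(expert.predicted_belief(novice))  # 0.5 -- the expert knows the novice's belief
print(novice.predicted_belief(expert))  # None -- the novice cannot predict the expert
```

A mechanism in this setting can exploit the asymmetry above: because only more sophisticated agents can anticipate the reports of less sophisticated ones, comparing reports appropriately lets the mechanism reward effort and recover a hierarchy of the information.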
