On Stackelberg Signaling and its Impact on Receiver's Trust in Personalized Recommender Systems
Recommender systems rely on many intelligent technologies (e.g., machine learning) that have raised credibility concerns, ranging from lack of privacy and accountability to biases and inherent design complexity. Given this limited understanding of how recommender systems work, users interact with such systems strategically, taking any recommended information with a grain of salt. Furthermore, the recommender system evaluates choices under a utilitarian framework that can be fundamentally different from the user's rationality. Therefore, in this paper, we model the interaction between the recommender system and a human user as a Stackelberg signaling game, where both agents are modeled as expected-utility maximizers with non-identical prior beliefs about the choice rewards. We compute the equilibrium strategies of both the system and the user, and investigate conditions under which (i) the recommender system reveals manipulated information, and (ii) the user's trust in the recommender system deteriorates once the true rewards are realized.
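To make the setup concrete, the following is a minimal numerical sketch, not taken from the paper, of a two-item Stackelberg signaling game with non-identical priors: the recommender (sender) commits to a signaling policy, the user (receiver) best-responds using its own prior, and the sender optimizes its policy anticipating that response. All priors, payoffs, the binary state structure, and the grid search are illustrative assumptions.

```python
# Illustrative sketch only: a two-state, two-item Stackelberg signaling game
# with heterogeneous priors.  Priors, payoffs, and the grid search are
# assumptions for demonstration, not the paper's actual model or algorithm.
import itertools

# Binary state: theta = 1 means item 1 is truly the better choice for the user.
p_sender = 0.4    # recommender's prior Pr(theta = 1)  (assumed value)
q_receiver = 0.6  # user's prior Pr(theta = 1)          (assumed value)

def receiver_utility(action, theta):
    # User gains 1 when the chosen item matches the true state.
    return 1.0 if action == theta else 0.0

def sender_utility(action, theta):
    # Misaligned recommender: earns 1 whenever item 1 (e.g. a sponsored item)
    # is chosen, regardless of the true state.
    return 1.0 if action == 1 else 0.0

def receiver_best_response(msg, sigma, q):
    """Receiver best-responds to the committed policy sigma, updating with
    its OWN prior q rather than the sender's."""
    # sigma[theta] = Pr(message "recommend item 1" | theta)
    like = {t: sigma[t] if msg == 1 else 1 - sigma[t] for t in (0, 1)}
    joint = {t: like[t] * (q if t == 1 else 1 - q) for t in (0, 1)}
    norm = joint[0] + joint[1]
    post1 = q if norm == 0 else joint[1] / norm  # off-path message keeps prior
    eu = {a: post1 * receiver_utility(a, 1) + (1 - post1) * receiver_utility(a, 0)
          for a in (0, 1)}
    return max(eu, key=eu.get)

def sender_expected_utility(sigma, p, q):
    """Sender evaluates the policy under its OWN prior p, anticipating the
    receiver's best response to each message."""
    total = 0.0
    for theta, p_theta in ((1, p), (0, 1 - p)):
        for msg in (0, 1):
            pr_msg = sigma[theta] if msg == 1 else 1 - sigma[theta]
            action = receiver_best_response(msg, sigma, q)
            total += p_theta * pr_msg * sender_utility(action, theta)
    return total

# Stackelberg step: the sender commits to the policy (from a coarse grid)
# that maximizes its own expected utility given the receiver's best response.
grid = [i / 20 for i in range(21)]
best = max(({0: s0, 1: s1} for s0, s1 in itertools.product(grid, grid)),
           key=lambda sig: sender_expected_utility(sig, p_sender, q_receiver))
print("Committed policy Pr(recommend item 1 | theta):", best)
print("Sender expected utility:",
      sender_expected_utility(best, p_sender, q_receiver))
```

In this toy instance, a policy that recommends item 1 regardless of the state can still be followed by the receiver because its own prior already favors item 1, which illustrates how manipulated signaling can be an equilibrium outcome and why trust may drop once true rewards are observed.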