
Bias and Controversy in Evaluation Systems.

Authors :
Lauw, Hady W.
Lim, Ee-Peng
Wang, Ke
Source :
IEEE Transactions on Knowledge & Data Engineering; Nov2008, Vol. 20 Issue 11, p1490-1504, 15p, 3 Black and White Photographs, 5 Charts, 10 Graphs
Publication Year :
2008

Abstract

Evaluation is prevalent in real life. With the advent of Web 2.0, online evaluation has become an important feature in many applications that involve information (e.g., video, photo, and audio) sharing and social networking (e.g., blogging). In these evaluation settings, a set of reviewers assign scores to a set of objects. As part of the evaluation analysis, we want to obtain fair reviews for all the given objects. However, the reality is that reviewers may deviate in their scores assigned to the same object, due to the potential "bias" of reviewers or "controversy" of objects. The statistical approach of averaging deviations to determine bias and controversy assumes that all reviewers and objects should be given equal weight. In this paper, we look beyond this assumption and propose an approach based on the following observations: 1) evaluation is "subjective," as reviewers and objects have varying bias and controversy, respectively, and 2) bias and controversy are mutually dependent. These observations underlie our proposed reinforcement-based model to determine bias and controversy simultaneously. Our approach also quantifies "evidence," which reveals the degree of confidence with which bias and controversy have been derived. This model is shown to be effective by experiments on real-life and synthetic data sets.
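
To make the mutual dependence concrete, below is a minimal, hypothetical Python sketch, not the authors' model: reviewer bias and object controversy are recomputed from each other's current values over a fixed number of iterations, starting from the equal-weight statistics that the abstract contrasts with. The function name, the discounting weights, the use of per-object mean scores as the reference, and the fixed iteration count are all assumptions made for illustration; the paper's "evidence" quantity is not modeled here.

def mutual_bias_controversy(scores, iterations=50):
    """scores: dict mapping (reviewer, object) -> score normalized to [0, 1].
    Assumes every reviewer rated at least one object and every object has at
    least one review. Returns (bias, controversy) dicts with values in [0, 1]."""
    reviewers = {r for r, _ in scores}
    objects = {o for _, o in scores}

    # Reference score per object: the plain mean of its review scores.
    mean = {
        o: sum(s for (_, o2), s in scores.items() if o2 == o)
           / sum(1 for (_, o2) in scores if o2 == o)
        for o in objects
    }

    # Deviation of each review from its object's mean; stays in [0, 1].
    dev = {(r, o): abs(s - mean[o]) for (r, o), s in scores.items()}

    # Start from the naive equal-weight view: no reviewer is biased,
    # no object is controversial.
    bias = {r: 0.0 for r in reviewers}
    controversy = {o: 0.0 for o in objects}

    for _ in range(iterations):
        # Controversy of an object: deviations observed on it, discounted by
        # how biased each contributing reviewer currently appears.
        controversy = {
            o: sum((1.0 - bias[r]) * dev[(r, o)] for r in reviewers if (r, o) in dev)
               / sum(1 for r in reviewers if (r, o) in dev)
            for o in objects
        }
        # Bias of a reviewer: deviations they produce, discounted by how
        # controversial each reviewed object currently appears.
        bias = {
            r: sum((1.0 - controversy[o]) * dev[(r, o)] for o in objects if (r, o) in dev)
               / sum(1 for o in objects if (r, o) in dev)
            for r in reviewers
        }

    return bias, controversy


# Toy usage: reviewer "r3" consistently disagrees with the others.
ratings = {
    ("r1", "movie_a"): 0.8, ("r2", "movie_a"): 0.8, ("r3", "movie_a"): 0.2,
    ("r1", "movie_b"): 0.5, ("r2", "movie_b"): 0.6, ("r3", "movie_b"): 0.9,
}
bias, controversy = mutual_bias_controversy(ratings)
print(bias)          # "r3" ends up with the largest bias value
print(controversy)

On the toy ratings above, the consistently deviating reviewer "r3" receives the largest bias value, and that larger bias in turn discounts r3's contribution to each object's controversy, which is the reinforcement effect the abstract describes.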

Details

Language :
English
ISSN :
1041-4347
Volume :
20
Issue :
11
Database :
Complementary Index
Journal :
IEEE Transactions on Knowledge & Data Engineering
Publication Type :
Academic Journal
Accession number :
34984041
Full Text :
https://doi.org/10.1109/TKDE.2008.77