TrustWiki: A reliable and conflict-refrained Wiki model based on reader differentiation and social context analysis
2013 (English). In: Knowledge-Based Systems, ISSN 0950-7051, Vol. 47, pp. 53-64. Article in journal (Refereed). Published
Wiki systems, such as Wikipedia, provide a multitude of opportunities for large-scale online knowledge collaboration. Despite Wikipedia's success with the open editing model, dissenting voices give rise to unreliable content due to conflicts among contributors. Controversial articles that are frequently modified by dissenting editors hardly present reliable knowledge. Some overheated controversial articles may be locked by Wikipedia administrators, who might leave their own bias in the topic. This could undermine both the neutrality and freedom policies of Wikipedia. As Richard Rorty suggested, "Take Care of Freedom and Truth Will Take Care of Itself", we present a new open Wiki model in this paper, called TrustWiki, which brings readers closer to reliable information while allowing editors to contribute freely. From our perspective, the conflict issue results from presenting the same knowledge to all readers, without regard for the differences among readers or the revealing of the underlying social context, which both causes contributor bias and affects readers' perception of knowledge. TrustWiki differentiates two types of readers: "value adherents", who prefer compatible viewpoints, and "truth diggers", who crave the truth. It provides two different knowledge representation models to cater to both types of readers. Social context, including social background and relationship information, is embedded in both knowledge representations to present readers with personalized and credible knowledge. To our knowledge, this is the first paper on knowledge representation that combines both psychological acceptance and truth revelation to meet the needs of different readers. Although this new Wiki model focuses on reducing conflicts and reinforcing the neutrality policy of Wikipedia, it also casts light on other content reliability problems in Wiki systems, such as vandalism and minority opinion suppression.
Place, publisher, year, edition, pages
Elsevier, 2013. Vol. 47, pp. 53-64.
Keywords
Community discovery, Confirmation bias, Knowledge representation, Natural language generation, Online social network, Trust, Wikipedia
Identifiers
URN: urn:nbn:se:bth-6895
DOI: 10.1016/j.knosys.2013.03.014
ISI: 000320351100005
Local ID: oai:bth.se:forskinfo19B82E778AB31FCDC1257B780025AB8F
OAI: oai:DiVA.org:bth-6895
DiVA: diva2:834449