SmartWiki: A reliable and conflict-refrained Wiki model based on reader differentiation and social context analysis
Blekinge Institute of Technology, School of Computing.
Blekinge Institute of Technology, School of Computing.
2013 (English). In: Knowledge-Based Systems, ISSN 0950-7051, Vol. 47, pp. 53-64. Article in journal (Refereed). Published.
Abstract [en]

Wiki systems, such as Wikipedia, provide a multitude of opportunities for large-scale online knowledge collaboration. Despite Wikipedia's success with the open editing model, dissenting voices give rise to unreliable content due to conflicts among contributors. Controversial articles frequently modified by dissenting editors hardly present reliable knowledge. Some overheated controversial articles may be locked by Wikipedia administrators, who might leave their own bias on the topic; this could undermine both the neutrality and freedom policies of Wikipedia. As Richard Rorty suggested, "Take Care of Freedom and Truth Will Take Care of Itself" [1], we present in this paper a new open Wiki model, called TrustWiki, which brings readers closer to reliable information while allowing editors to contribute freely. From our perspective, the conflict issue results from presenting the same knowledge to all readers, without regard for the differences among readers or the revelation of the underlying social context, which both causes contributor bias and affects the knowledge perception of readers. TrustWiki differentiates two types of readers: "value adherents", who prefer compatible viewpoints, and "truth diggers", who crave the truth. It provides two different knowledge representation models to cater for both types of readers. Social context, including social background and relationship information, is embedded in both knowledge representations to present readers with personalized and credible knowledge. To our knowledge, this is the first paper on knowledge representation that combines both psychological acceptance and truth revelation to meet the needs of different readers. Although this new Wiki model focuses on reducing conflicts and reinforcing the neutrality policy of Wikipedia, it also casts light on other content-reliability problems in Wiki systems, such as vandalism and minority-opinion suppression.

Place, publisher, year, edition, pages
Elsevier, 2013. Vol. 47, pp. 53-64.
Keyword [en]
Community discovery, Confirmation bias, Knowledge representation, Natural language generation, Online social network, Trust, Wikipedia
National Category
Computer Science
URN: urn:nbn:se:bth-6895; DOI: 10.1016/j.knosys.2013.03.014; ISI: 000320351100005; Local ID: diva2:834449
Available from: 2013-09-10. Created: 2013-05-27. Last updated: 2015-06-30. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text

Search in DiVA

By author/editor
Johnson, Henric; Wu, Felix
By organisation
School of Computing
In the same journal
Knowledge-Based Systems
Computer Science

