A survey of surveys on the use of visualization for interpreting machine learning models
Chatzimparmpas, Angelos (Linnéuniversitetet). ORCID iD: 0000-0002-9079-2376
Martins, Rafael Messias (Linnéuniversitetet). ORCID iD: 0000-0002-2901-935X
Jusufi, Ilir (Linnéuniversitetet). ORCID iD: 0000-0001-6745-4398
Kerren, Andreas (Linnéuniversitetet). ORCID iD: 0000-0002-0519-2537
2020 (English). In: Information Visualization, Vol. 19, no. 3, pp. 207–233. Article in journal (Refereed). Published.
Abstract [en]

Research in machine learning has become very popular in recent years, with many types of models proposed to comprehend and predict patterns and trends in data originating from different domains. As these models become increasingly complex, it also becomes harder for users to assess and trust their results, since their internal operations are mostly hidden in black boxes. The interpretation of machine learning models is currently a hot topic in the information visualization community, with results showing that insights from machine learning models can lead to better predictions and improve the trustworthiness of the results. Consequently, multiple (and extensive) survey articles have been published recently in an attempt to summarize the large number of original research papers on the topic. However, it is not always clear what these surveys cover, how much they overlap, which types of machine learning models they deal with, or exactly what readers will find in each of them. In this article, we present a meta-analysis (i.e., a “survey of surveys”) of manually collected survey papers that refer to the visual interpretation of machine learning models, including the papers discussed in the selected surveys. The aim of our article is to serve both as a detailed summary and as a guide through this survey ecosystem by acquiring, cataloging, and presenting fundamental knowledge of the state of the art and research opportunities in the area. Our results confirm the increasing trend of interpreting machine learning with visualizations in recent years, and show that visualization can assist, for example, in the online training of deep learning models and in enhancing trust in machine learning. However, the question of exactly how this assistance should take place is still considered an open challenge by the visualization community.

Place, publisher, year, edition, pages
Sage Publications, 2020. Vol. 19, no. 3, pp. 207–233.
Keywords [en]
Survey of surveys, literature review, visualization, explainable machine learning, interpretable machine learning, taxonomy, meta-analysis
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:bth-23878
DOI: 10.1177/1473871620904671
OAI: oai:DiVA.org:bth-23878
DiVA, id: diva2:1710986
Available from: 2022-11-15. Created: 2022-11-15. Last updated: 2022-11-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
