Adversarial Machine Learning in Industry: A Systematic Literature Review
2024 (English). In: Computers & Security (Print), ISSN 0167-4048, E-ISSN 1872-6208, Vol. 145, article id 103988. Article, review/survey (Refereed). Published.
Abstract [en]
Adversarial Machine Learning (AML) concerns attacking and defending Machine Learning (ML) models, an essential building block of Artificial Intelligence (AI). ML is applied in many software-intensive products and services and introduces new opportunities as well as security challenges. AI and ML will gain even more attention from industry in the future, but threats posed by already-discovered attacks that specifically target ML models are overlooked, ignored, or mishandled. Current AML research investigates attack and defense scenarios for ML in different industrial settings, with varying degrees of maturity regarding academic rigor and practical relevance. However, to the best of our knowledge, a synthesis of the state of academic rigor and practical relevance is missing. This literature study reviews studies on AML in the context of industry, measuring and analyzing each study's rigor and relevance scores. Overall, the studies scored high on rigor and low on relevance, indicating that they are thoroughly designed and documented but miss the opportunity to include touch points relatable to practitioners. © 2024 The Author(s)
Place, publisher, year, edition, pages
Elsevier, 2024. Vol. 145, article id 103988
Keywords [en]
Adversarial machine learning, Industry, Relevance, Rigor, State of evidence, Industrial research, Building blocks, Machine learning models, Machine learning, Products and services, Relevance score, Systematic literature review
National Category
Software Engineering
Identifiers
URN: urn:nbn:se:bth-26820
DOI: 10.1016/j.cose.2024.103988
ISI: 001290393300001
Scopus ID: 2-s2.0-85200501059
OAI: oai:DiVA.org:bth-26820
DiVA, id: diva2:1889637
Part of project
SERT- Software Engineering ReThought, Knowledge Foundation
Funder
Knowledge Foundation, 20180010
Available from: 2024-08-16. Created: 2024-08-16. Last updated: 2024-08-23. Bibliographically approved.