Publications (10 of 39)
Zabardast, E., Gonzalez-Huerta, J., Gorschek, T., Šmite, D., Alégroth, E. & Fagerholm, F. (2023). A taxonomy of assets for the development of software-intensive products and services. Journal of Systems and Software, 202, Article ID 111701.
A taxonomy of assets for the development of software-intensive products and services
2023 (English). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 202, article id 111701. Article in journal (Refereed). Published.
Abstract [en]

Context: Developing software-intensive products or services usually involves a plethora of software artefacts. Assets are artefacts intended to be used more than once and have value for organisations; examples include test cases, code, requirements, and documentation. During the development process, assets might degrade, affecting the effectiveness and efficiency of the development process. Therefore, assets are an investment that requires continuous management.

Identifying assets is the first step for their effective management. However, there is a lack of awareness of which assets and types of assets are common in software-developing organisations. Most types of assets are understudied, and their state of quality and how they degrade over time are not well understood.

Methods: To fill this gap in research, we performed an analysis of secondary literature and a field study at five companies to investigate and identify assets. The results were analysed qualitatively and summarised in a taxonomy.

Results: We present the first comprehensive, structured, yet extendable taxonomy of assets, containing 57 types of assets.

Conclusions: The taxonomy serves as a foundation for identifying assets that are relevant for an organisation and enables the study of asset management and asset degradation concepts.

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Assets in software engineering, Asset management in software engineering, Assets for software-intensive products or services, Taxonomy
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-24426 (URN); 10.1016/j.jss.2023.111701 (DOI); 000984121100001 (ISI); 2-s2.0-85152899759 (Scopus ID)
Funder
Knowledge Foundation, 20170176; Knowledge Foundation, 20180010
Available from: 2023-04-11. Created: 2023-04-11. Last updated: 2023-06-02. Bibliographically approved.
Yu, L., Alégroth, E., Chatzipetrou, P. & Gorschek, T. (2023). Automated NFR testing in continuous integration environments: a multi-case study of Nordic companies. Empirical Software Engineering, 28(6), Article ID 144.
Automated NFR testing in continuous integration environments: a multi-case study of Nordic companies
2023 (English). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 28, no 6, article id 144. Article in journal (Refereed). Published.
Abstract [en]

Context: Non-functional requirements (NFRs) (also referred to as system qualities) are essential for developing high-quality software. Notwithstanding their importance, NFR testing remains challenging, especially in terms of automation. Compared to manual verification, automated testing shows the potential to improve the efficiency and effectiveness of quality assurance, especially in the context of Continuous Integration (CI). However, studies on how companies manage automated NFR testing through CI are limited. Objective: This study examines how automated NFR testing can be enabled and supported using CI environments in software development companies. Method: We performed a multi-case study at four companies by conducting 22 semi-structured interviews with industrial practitioners. Results: Maintainability, reliability, performance, security, and scalability were found to be evaluated with automated tests in CI environments. Testing practices, quality metrics, and challenges for measuring NFRs were reported. Conclusions: This study presents an empirically derived model that shows how data produced by CI environments can be used for evaluation and monitoring of implemented NFR quality. Additionally, the manuscript presents explicit metrics, CI components, tools, and challenges that should be considered when performing NFR testing in practice. © 2023, The Author(s).
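
To make the idea of automated NFR testing in a CI environment concrete, below is a minimal sketch, not taken from the paper, of a response-time check that a CI pipeline could run as a performance gate; the endpoint URL, the sample count, and the 500 ms latency budget are hypothetical placeholders.

```python
# Hypothetical example: an automated performance (NFR) test suitable for a CI
# quality gate. Endpoint and threshold are illustrative assumptions.
import time
import urllib.request

ENDPOINT = "http://localhost:8080/health"  # placeholder SUT endpoint
LATENCY_BUDGET_S = 0.5                     # placeholder NFR threshold (500 ms)

def mean_latency(url: str, samples: int = 10) -> float:
    """Measure the mean response time of `url` over a number of requests."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        total += time.perf_counter() - start
    return total / samples

def test_response_time_within_budget():
    # A CI test runner (e.g., pytest) would fail the build on this assertion.
    assert mean_latency(ENDPOINT) <= LATENCY_BUDGET_S
```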

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Automated testing, Case study, CI, Continuous integration, Metrics, NFR, Non-functional requirements, Automation, Integration, Integration testing, Quality control, Software design, Case-studies, Continuous integrations, Integration environments, Metric, Nordic companies, System quality, Quality assurance
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-25556 (URN); 10.1007/s10664-023-10356-1 (DOI); 001087927600001 (ISI); 2-s2.0-85174862814 (Scopus ID)
Funder
Knowledge Foundation, 20180010; Knowledge Foundation, 20170213
Available from: 2023-11-06. Created: 2023-11-06. Last updated: 2024-01-02. Bibliographically approved.
Bauer, A., Coppola, R., Alégroth, E. & Gorschek, T. (2023). Code review guidelines for GUI-based testing artifacts. Information and Software Technology, 163, Article ID 107299.
Code review guidelines for GUI-based testing artifacts
2023 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 163, article id 107299. Article, review/survey (Refereed). Published.
Abstract [en]

Context: Review of software artifacts, such as source or test code, is common in industrial practice. However, although review guidelines are available for source code and low-level test code, such guidelines are missing for GUI-based testing artifacts. Objective: The goal of this work is to define a set of guidelines, drawn from the literature on production and test code, that can be mapped to GUI-based testing artifacts. Method: A systematic literature review is conducted, using white and gray literature to identify guidelines for source and test code. These synthesized guidelines are then mapped, through examples, to create actionable and applicable guidelines for GUI-based testing artifacts. Results: The results of the study are 33 guidelines, summarized in nine guideline categories, that are successfully mapped as applicable to GUI-based testing artifacts. Of the collected literature, only 10 sources contained test-specific code review guidelines. The guideline categories are: perform automated checks, use checklists, provide context information, utilize metrics, ensure readability, visualize changes, reduce complexity, check conformity with the requirements, and follow design principles and patterns. Conclusion: This pivotal set of guidelines provides an industrial contribution by filling the gap of general guidelines for the review of GUI-based testing artifacts. Additionally, this work highlights, from an academic perspective, the need for future research in this area to develop guidelines for other specific aspects of GUI-based testing practice, and to take into account facets of the review process not covered by this work, such as reviewer selection. © 2023 The Author(s)
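
As a concrete illustration of the first guideline category, "perform automated checks", the sketch below scans a GUI test script for two example review findings; the flagged patterns (hard-coded sleeps and absolute XPath locators) are our own illustrative choices, not checks prescribed by the paper.

```python
# Illustrative sketch: automating simple review checks on GUI test code.
# The patterns are assumptions chosen for demonstration purposes.
import re
import sys

CHECKS = {
    "hard-coded sleep (prefer explicit waits)": re.compile(r"\btime\.sleep\("),
    "absolute XPath locator (fragile)": re.compile(r"""["']/html/"""),
}

def review_file(path: str) -> list[str]:
    """Return review findings as 'file:line: message' strings."""
    findings = []
    with open(path, encoding="utf-8") as handle:
        for lineno, line in enumerate(handle, start=1):
            for message, pattern in CHECKS.items():
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: {message}")
    return findings

if __name__ == "__main__":
    for finding in review_file(sys.argv[1]):
        print(finding)
```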

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Code review, GUI testing, GUI-based testing, Guidelines, Modern code review, Practices, Software testing, Graphical user interfaces, Guideline, Practice, Software testings, Source codes, Test code
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-25235 (URN); 10.1016/j.infsof.2023.107299 (DOI); 001051358500001 (ISI); 2-s2.0-85165535690 (Scopus ID)
Funder
Knowledge Foundation, 20180010
Available from: 2023-08-08. Created: 2023-08-08. Last updated: 2023-09-18. Bibliographically approved.
Nygren, Å., Alégroth, E., Eriksson, A. & Pettersson, E. (2023). Does Previous Experience with Online Platforms Matter? A Survey about Online Learning across Study Programs. Education Sciences, 13(2), Article ID 181.
Does Previous Experience with Online Platforms Matter? A Survey about Online Learning across Study Programs
2023 (English). In: Education Sciences, E-ISSN 2227-7102, Vol. 13, no 2, article id 181. Article in journal (Refereed). Published.
Abstract [en]

The COVID-19 pandemic has had a dramatic effect on society, including on teaching within higher education, which was forced to adapt to online teaching. Research on this phenomenon has looked at pedagogical methods as well as student perceptions of this way of teaching. However, to the best of our knowledge, no studies have taken the wider perspective of examining, across the entire student population of a university, what students' perceptions are and how these correlate with the students' previous experiences and habits with online platforms, e.g., online streaming or social media. In this study, we performed a questionnaire survey with students from 20 programs at Blekinge Institute of Technology, receiving 431 responses. The survey responses are analyzed using descriptive statistics and qualitative analysis to draw conclusions. Results show that there is no correlation between students' previous habits and experience with online platforms and their perception of online learning. Instead, other factors, e.g., teacher engagement, are found to be central for student learning and therefore important to consider for future research and development of online teaching methodologies. © 2023 by the authors.

Place, publisher, year, edition, pages
MDPI, 2023
Keywords
COVID-19 pandemic, online learning, post-pandemic, student perception, survey
National Category
Pedagogy
Identifiers
urn:nbn:se:bth-24362 (URN); 10.3390/educsci13020181 (DOI); 000939006400001 (ISI); 2-s2.0-85148756113 (Scopus ID)
Funder
Knowledge Foundation, 20180010; Knowledge Foundation, 20210026
Available from: 2023-03-09. Created: 2023-03-09. Last updated: 2023-03-27. Bibliographically approved.
Coppola, R., Fulcini, T., Ardito, L., Torchiano, M. & Alégroth, E. (2023). On Effectiveness and Efficiency of Gamified Exploratory GUI Testing. IEEE Transactions on Software Engineering.
On Effectiveness and Efficiency of Gamified Exploratory GUI Testing
2023 (English). In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520. Article in journal (Refereed). Published.
Abstract [en]

Context: Gamification appears to improve the enjoyment and quality of execution of software engineering activities, including software testing. Though commonly employed in industry, manual exploratory testing of web application GUIs has been shown to be mundane and expensive. Gamification applied to this kind of testing activity has the potential to overcome these limitations, though no empirical research has explored the area yet.

Goal: Collect preliminary insights on how gamification affects the effectiveness, efficiency, test case realism, and user experience of exploratory testing of web applications when that testing is performed by novice testers.

Method: Common gamification features augment an existing exploratory testing tool: Final Score with Leaderboard, Injected Bugs, Progress Bar, and Exploration Highlights. The original tool and the gamified version are then compared in an experiment involving 144 participants. User experience is elicited using the Technology Acceptance Model (TAM) questionnaire instrument.

Results: Statistical analysis identified several significant differences in metrics representing the effectiveness and efficiency of tests, showing an improvement in coverage when tests were developed with gamification. Additionally, user experience was improved with gamification.

Conclusions: Gamification of exploratory testing has a tangible effect on how testers create test cases for web applications. While the results are mixed, the effects are mostly beneficial and interesting and warrant more research in the future. Further research should aim at confirming the presented results in the context of state-of-the-art testing tools and real-world development environments.
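
To give a feel for what features such as a Final Score, Injected Bugs, and a Progress Bar involve, here is a purely hypothetical sketch of a session score; the abstract does not give a scoring formula, so the point values and progress calculation below are our own illustrative assumptions.

```python
# Hypothetical gamified-session scoring; all point values are assumptions.
from dataclasses import dataclass, field

@dataclass
class ExploratorySession:
    pages_total: int                                 # size of the SUT's page set
    pages_visited: set = field(default_factory=set)  # drives the progress bar
    injected_bugs_found: int = 0                     # seeded bugs rediscovered
    issues_reported: int = 0                         # genuine findings

    @property
    def progress(self) -> float:
        """Fraction of the application's pages explored."""
        return len(self.pages_visited) / self.pages_total

    @property
    def score(self) -> int:
        """Final score for the leaderboard (illustrative weighting)."""
        return (100 * self.injected_bugs_found
                + 50 * self.issues_reported
                + int(100 * self.progress))

session = ExploratorySession(pages_total=20)
session.pages_visited.update({"/login", "/cart"})
session.injected_bugs_found = 2
print(f"score={session.score}, progress={session.progress:.0%}")
```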

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Games, Gamification, Graphical user interfaces, Manuals, Software, Software testing, Task analysis, User experience, Web Application Testing, Application programs, Efficiency, Job analysis, Exploratory testing, Game, Manual, Software testings, Users' experiences
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-25929 (URN); 10.1109/TSE.2023.3348036 (DOI); 2-s2.0-85181571816 (Scopus ID)
Available from: 2024-01-29. Created: 2024-01-29. Last updated: 2024-01-30. Bibliographically approved.
Lind, E., Gonzalez-Huerta, J. & Alégroth, E. (2023). Requirements Quality vs. Process and Stakeholders’ Well-Being: A Case of a Nordic Bank. In: Mendez D., Winkler D., Kross J., Biffl S., Bergsmann J. (Ed.), Software Quality: Higher Software Quality through Zero Waste Development. Paper presented at 15th International Conference on Software Quality, SWQD 2023, Munich, 23-25 May 2023 (pp. 17-37). Springer Science+Business Media B.V., 472
Requirements Quality vs. Process and Stakeholders’ Well-Being: A Case of a Nordic Bank
2023 (English). In: Software Quality: Higher Software Quality through Zero Waste Development / [ed] Mendez D., Winkler D., Kross J., Biffl S., Bergsmann J., Springer Science+Business Media B.V., 2023, Vol. 472, p. 17-37. Conference paper, Published paper (Refereed).
Abstract [en]

Context: Requirements are key artefacts that describe the intended purpose of a software system. The quality of requirements is crucial for deciding what to do next, impacting the development process’ effectiveness and efficiency. However, we know very little about the connection between practitioners’ perceptions of requirements quality and its impact on the process or on the feelings of the professionals involved in the development process. Objectives: This study investigates: i) how software development practitioners define requirements quality, ii) how the perceived quality of requirements impacts the process and stakeholders’ well-being, and iii) what the causes of and potential solutions for poor-quality requirements are. Method: This study was performed as a descriptive interview study at a sub-organization of a Nordic bank that develops its own web and mobile apps. The data collection comprises interviews with 20 practitioners, including requirements engineers, developers, testers, and newly employed developers, with five interviewees from each group. Results: The results show that different roles have different views on what makes a requirement good quality. Participants highlighted that, in general, they experience negative emotions, more work, and communication overhead when they work with requirements they perceive to be of poor quality. The practitioners also describe positive effects on their performance and positive feelings when they work with requirements that they perceive to be good. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2023
Series
Lecture Notes in Business Information Processing, ISSN 1865-1348, E-ISSN 1865-1356 ; 472
Keywords
Empirical Study, Human Factors, Requirements Engineering, Requirements Quality, Behavioral research, Human engineering, Software design, Development process, Effectiveness and efficiencies, Empirical studies, Perceived quality, Process effectiveness, Process efficiency, Requirement engineering, Requirement quality, Software-systems, Well being
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-24977 (URN); 10.1007/978-3-031-31488-9_2 (DOI); 2-s2.0-85161106173 (Scopus ID); 9783031314872 (ISBN)
Conference
15th International Conference on Software Quality, SWQD 2023, Munich, 23-25 May 2023
Available from: 2023-06-26. Created: 2023-06-26. Last updated: 2023-06-26. Bibliographically approved.
Nass, M., Alégroth, E., Feldt, R. & Coppola, R. (2023). Robust web element identification for evolving applications by considering visual overlaps. In: Proceedings - 2023 IEEE 16th International Conference on Software Testing, Verification and Validation, ICST 2023. Paper presented at 16th IEEE International Conference on Software Testing, Verification and Validation, ICST 2023, Dublin, 16 April 2023 through 20 April 2023 (pp. 258-268). Institute of Electrical and Electronics Engineers (IEEE)
Robust web element identification for evolving applications by considering visual overlaps
2023 (English). In: Proceedings - 2023 IEEE 16th International Conference on Software Testing, Verification and Validation, ICST 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 258-268. Conference paper, Published paper (Refereed).
Abstract [en]

Fragile (i.e., non-robust) test execution is a common challenge for automated GUI-based testing of web applications as they evolve. Despite recent progress, there is still room for improvement, since test execution failures caused by technical limitations result in unnecessary maintenance costs that limit the technique's effectiveness and efficiency. One of the most reported technical challenges for web-based tests concerns how to reliably locate a web element used by a test script. This paper proposes the novel concept of Visually Overlapping Nodes (VON), which reduces fragility by utilizing the phenomenon that visual web elements (observed by the user) are constructed from multiple web elements in the Document Object Model (DOM) that overlap visually. We demonstrate the approach in a tool, VON Similo, which extends the state-of-the-art multi-locator approach (Similo) that is also used as the baseline for an experiment. In the experiment, a ground-truth set of 1163 manually collected web element pairs, from different releases of the 40 most popular web applications on the internet, is used to compare the approaches' precision, recall, and accuracy. Our results show that VON Similo provides 94.7% accuracy in identifying a web element in a new release of the same SUT. In comparison, Similo provides 83.8% accuracy. These results demonstrate the applicability of the visually overlapping nodes concept/tool for web element localization in evolving web applications and contribute a novel way of thinking about web element localization in future research on GUI-based testing. © 2023 IEEE.
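
To illustrate the core idea of visually overlapping nodes, here is a minimal sketch under our own simplifying assumptions: DOM nodes are reduced to axis-aligned bounding boxes, and two nodes are treated as visually overlapping when their intersection covers most of the smaller box; the 0.9 threshold is a hypothetical parameter, not a value taken from the paper.

```python
# Simplified, assumption-based sketch of visual overlap between DOM nodes.
from dataclasses import dataclass

@dataclass
class Box:
    left: float
    top: float
    width: float
    height: float

    @property
    def area(self) -> float:
        return self.width * self.height

def overlap_area(a: Box, b: Box) -> float:
    """Area of the intersection of two axis-aligned bounding boxes."""
    w = min(a.left + a.width, b.left + b.width) - max(a.left, b.left)
    h = min(a.top + a.height, b.top + b.height) - max(a.top, b.top)
    return max(0.0, w) * max(0.0, h)

def visually_overlapping(a: Box, b: Box, threshold: float = 0.9) -> bool:
    """True when the intersection covers most of the smaller box."""
    smaller = min(a.area, b.area)
    return smaller > 0 and overlap_area(a, b) / smaller >= threshold

# Example: an anchor element wrapping a button-like span overlaps it visually.
print(visually_overlapping(Box(10, 10, 100, 30), Box(12, 12, 96, 26)))
```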

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
component, formatting, insert, style, styling
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-25063 (URN); 10.1109/ICST57152.2023.00032 (DOI); 001009201200025 (ISI); 2-s2.0-85161886412 (Scopus ID); 9781665456661 (ISBN)
Conference
16th IEEE International Conference on Software Testing, Verification and Validation, ICST 2023, Dublin, 16 April 2023 through 20 April 2023
Available from: 2023-06-30. Created: 2023-06-30. Last updated: 2023-11-22. Bibliographically approved.
Nass, M., Alégroth, E., Feldt, R., Leotta, M. & Ricca, F. (2023). Similarity-based Web Element Localization for Robust Test Automation. ACM Transactions on Software Engineering and Methodology, 32(3), Article ID 75.
Similarity-based Web Element Localization for Robust Test Automation
2023 (English). In: ACM Transactions on Software Engineering and Methodology, ISSN 1049-331X, E-ISSN 1557-7392, Vol. 32, no 3, article id 75. Article in journal (Refereed). Published.
Abstract [en]

Non-robust (fragile) test execution is a commonly reported challenge in GUI-based test automation, despite much research and several proposed solutions. A test script needs to be resilient to (minor) changes in the tested application but, at the same time, fail when detecting potential issues that require investigation. Test script fragility is a multi-faceted problem. However, one crucial challenge is how to reliably identify and locate the correct target web elements when the website evolves between releases, or otherwise fail and report an issue. This article proposes and evaluates a novel approach called similarity-based web element localization (Similo), which leverages information from multiple web element locator parameters to identify a target element using a weighted similarity score. This experimental study compares Similo to a baseline approach for web element localization. To get an extensive empirical basis, we target 48 of the most popular websites on the Internet in our evaluation. Robustness is considered by counting the number of web elements found in a recent website version compared to how many of these existed in an older version. Results of the experiment show that Similo outperforms the baseline; it failed to locate the correct target web element in 91 out of 801 considered cases (i.e., 11%), compared to 214 failed cases (i.e., 27%) for the baseline approach. The time efficiency of Similo was also considered: the average time to locate a web element was determined to be 4 milliseconds. However, since the cost of web interactions (e.g., a click) is typically on the order of hundreds of milliseconds, the additional computational demands of Similo can be considered negligible. This study presents evidence that quantifying the similarity between multiple attributes of web elements when trying to locate them, as in our proposed Similo approach, is beneficial. With acceptable efficiency, Similo gives significantly higher effectiveness (i.e., robustness) than the baseline web element localization approach.
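
The weighted similarity scoring described in the abstract can be sketched roughly as below. Note that this is a deliberately simplified, equality-based approximation: the actual Similo approach combines per-parameter similarity functions with tuned weights, whereas the attribute names, weights, and example data here are illustrative placeholders.

```python
# Assumption-based sketch of weighted-similarity web element localization.
def similarity_score(target: dict, candidate: dict, weights: dict) -> float:
    """Sum the weights of locator parameters on which target and candidate agree."""
    return sum(
        weight
        for attr, weight in weights.items()
        if target.get(attr) is not None and target.get(attr) == candidate.get(attr)
    )

def locate(target: dict, candidates: list, weights: dict) -> dict:
    """Pick the candidate web element with the highest weighted similarity."""
    return max(candidates, key=lambda c: similarity_score(target, c, weights))

# Hypothetical locator parameters and weights (placeholders, not tuned values).
WEIGHTS = {"tag": 1.0, "id": 3.0, "name": 2.0, "text": 1.5}
target = {"tag": "button", "id": "submit", "text": "Send"}
candidates = [
    {"tag": "button", "id": "submit-v2", "text": "Send"},  # id renamed in new release
    {"tag": "a", "id": "cancel", "text": "Cancel"},
]
print(locate(target, candidates, WEIGHTS))  # still matches on tag and text
```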

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
GUI testing, test automation, test case robustness, web element locators, XPath locators
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-25065 (URN); 10.1145/3571855 (DOI); 001002573400020 (ISI)
Funder
Knowledge Foundation, 20180010; Swedish Research Council, 2015-04913; Swedish Research Council, 2020-05272
Available from: 2023-06-30. Created: 2023-06-30. Last updated: 2023-11-22. Bibliographically approved.
Bauer, A. & Alégroth, E. (2023). We Tried and Failed: An Experience Report on a Collaborative Workflow for GUI-based Testing. In: Proceedings - 2023 IEEE 16th International Conference on Software Testing, Verification and Validation Workshops, ICSTW. Paper presented at 16th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2023, Dublin, 16 April through 20 April 2023 (pp. 1-9). Institute of Electrical and Electronics Engineers (IEEE)
We Tried and Failed: An Experience Report on a Collaborative Workflow for GUI-based Testing
2023 (English). In: Proceedings - 2023 IEEE 16th International Conference on Software Testing, Verification and Validation Workshops, ICSTW, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1-9. Conference paper, Published paper (Refereed).
Abstract [en]

Modern software development is a team-based effort supported by tools, processes, and practices. One integral part is automated testing, where developers incorporate automated tests on multiple levels of system abstraction, from low-level unit tests to high-level system tests and Graphical User Interface (GUI) tests. Furthermore, the common practice of code review allows collaboration on artifacts through discussions that improve the artifact's quality and share information within the team. However, the characteristics of GUI-based tests, due to their level of abstraction and visual elements, introduce additional requirements and complexities compared to reviews of code or lower-level test code, limiting the benefits of the practice. The objective of this work is to propose a tool-supported workflow that enables active collaboration among stakeholders and improves the efficiency and effectiveness of team-based development of GUI-based tests. To evaluate the workflow, and show proof of concept, a technical demonstrator for merging GUI-based tests was to be developed. However, during its development, we encountered several unforeseen challenges that forced us to halt its development. We report the negative results from this development and the main challenges we encountered, as well as the rationale and the decisions we took towards this workflow. In conclusion, this work presents a negative research result on a failed attempt to propose a tool-supported workflow that enables active collaboration on GUI-based tests. The outcome and learnings of this work are intended to guide future research and prevent researchers from falling into the same pitfalls we did. © 2023 IEEE.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW, ISSN 2159-4848
Keywords
automated testing, collaborative testing, collaborative workflow, GUI testing, model-based testing, Abstracting, Automation, Model checking, Software design, Software testing, Code review, Experience report, Graphical user interface testing, Integral part, Interface testings, Model based testing, Work-flows, Graphical user interfaces
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-25204 (URN); 10.1109/ICSTW58534.2023.00015 (DOI); 001009223100001 (ISI); 2-s2.0-85163117973 (Scopus ID); 9798350333350 (ISBN)
Conference
16th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2023, Dublin, 16 April through 20 April 2023
Funder
Knowledge Foundation, 20180010
Available from: 2023-08-06. Created: 2023-08-06. Last updated: 2023-09-21. Bibliographically approved.
Coppola, R. & Alégroth, E. (2022). A taxonomy of metrics for GUI-based testing research: A systematic literature review. Information and Software Technology, 152, Article ID 107062.
A taxonomy of metrics for GUI-based testing research: A systematic literature review
2022 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 152, article id 107062. Article, review/survey (Refereed). Published.
Abstract [en]

Context: GUI-based testing is a sub-field of software testing research that has emerged in the last three decades. GUI-based testing techniques focus on verifying the functional conformance of the system under test (SUT) through its graphical user interface. However, despite the research domain's growth, studies in the field have low reproducibility and comparability. One observed cause of these phenomena is a lack of research rigor and of commonly used metrics, including coverage metrics. Objective: We aim to identify the most commonly used metrics in the field and formulate a taxonomy of coverage metrics for GUI-based testing research. Method: We adopt an evidence-based approach to build the taxonomy through a systematic literature review of studies in the GUI-based testing domain. Identified papers are then analyzed with Open and Axial Coding techniques to identify hierarchical and mutually exclusive categories of metrics with common characteristics, usages, and applications. Results: Through the analysis of 169 papers and 315 metric definitions, we obtained a taxonomy with 55 codes (common names for metrics), 17 metric categories, and four higher-level categories: Functional Level, GUI Level, Model Level, and Code Level. We observe a higher number of mentions of Model- and Code-level metrics than of Functional- and GUI-level metrics. Conclusions: We propose a taxonomy for use in future GUI-based testing research to improve the general quality of studies in the domain. In addition, the taxonomy is expected to enable more replication studies as well as macro-analysis of the current body of research. © 2022 Elsevier B.V.

Place, publisher, year, edition, pages
Elsevier B.V., 2022
Keywords
Coverage metrics, GUI-based testing, Software testing, Software verification and validation, Taxonomies, Testing, Verification, Functional conformances, Research domains, Software testings, Sub fields, Systematic literature review, Systems under tests, Testing technique, Graphical user interfaces
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-23678 (URN); 10.1016/j.infsof.2022.107062 (DOI); 000858854400003 (ISI); 2-s2.0-85137613387 (Scopus ID)
Funder
Knowledge Foundation, 20180010
Available from: 2022-09-23. Created: 2022-09-23. Last updated: 2022-10-14. Bibliographically approved.
Projects
M.E.T.A. – Modelling Efficient Test Architectures [20180102]; Blekinge Institute of Technology
Publications:
Alégroth, E., Petersén, E. & Tinnerholm, J. (2021). A Failed attempt at creating Guidelines for Visual GUI Testing: An industrial case study. In: Proceedings - 2021 IEEE 14th International Conference on Software Testing, Verification and Validation, ICST 2021. Paper presented at 14th IEEE International Conference on Software Testing, Verification and Validation, ICST 2021, 12 April 2021 through 16 April 2021 (pp. 340-350). Institute of Electrical and Electronics Engineers Inc., Article ID 9438551.
Identifiers
ORCID iD: orcid.org/0000-0001-7526-3727
