On overcoming challenges with GUI-based test automation
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering (SERT). ORCID iD: 0000-0002-8569-2290
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Background: Automated testing is widely used in modern software development to check whether the software, including its graphical user interface (GUI), meets expectations in terms of quality and functionality. GUI-based test automation, like other automation, aims to save time and money compared to manual testing without reducing software quality. While automation has successfully reduced costs for other types of testing (e.g., unit or integration tests), GUI-based testing has faced technical challenges, some of which have lingered for over a decade.

Objective: This thesis work aims to contribute to the software engineering body of knowledge by (1) identifying the main challenges in GUI-based test automation and (2) finding technical solutions to mitigate some of the main challenges. One such challenge is to reliably identify GUI elements during test execution to prevent unnecessary repairs. Another problem is the demand for test automation and programming skills when designing stable automated tests at scale. 

Method: We conducted several studies by adopting a multi-methodological approach. First, we performed a systematic literature review to identify the main challenges in GUI-based test automation, followed by multiple studies that propose and evaluate novel approaches to mitigate the main challenges. 

Results: Our first contribution is a mapping of the challenges in GUI-based test automation reported in academic literature. We mapped the main (i.e., most reported) challenges on a timeline and classified them as essential or accidental. This classification is valuable since future research can focus on the main challenges that we are more likely to mitigate with a technical solution (i.e., the accidental ones). Our second contribution is several approaches that explore novel concepts or advance state-of-the-art techniques to mitigate some of the main accidental challenges. Testing an application through an augmented layer (Augmented Testing) can reduce the demand for test automation and programming skills and mitigate the challenges of creating and maintaining model-based tests. Our proposed approach for locating web elements (Similo) can increase the robustness of automated test execution.

Conclusion: Our results provide alternative approaches and concepts that can mitigate some of the main accidental challenges in GUI-based test automation. With a more robust test execution and tool support for test modeling, we can help reduce the manual labor spent on creating and maintaining automated GUI-based tests. With a reduced cost of automation, testers can focus more on other tasks like requirements, test design, and exploratory testing.

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2024, p. 215
Series
Blekinge Institute of Technology Doctoral Dissertation Series, ISSN 1653-2090; 2
Keywords [en]
GUI Testing, Test Automation, Augmented Testing, Test Case Robustness, Web Element Locators, Large Language Models
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
URN: urn:nbn:se:bth-25638. ISBN: 978-91-7295-473-1 (print). OAI: oai:DiVA.org:bth-25638. DiVA id: diva2:1814039
Public defence
2024-02-06, J1630, Campus Karlskrona, 13:00 (English)
Available from: 2023-11-28. Created: 2023-11-22. Last updated: 2024-02-13. Bibliographically approved
List of papers
1. Similarity-based Web Element Localization for Robust Test Automation
2023 (English). In: ACM Transactions on Software Engineering and Methodology, ISSN 1049-331X, E-ISSN 1557-7392, Vol. 32, no. 3, article id 75. Article in journal (Refereed), Published
Abstract [en]

Non-robust (fragile) test execution is a commonly reported challenge in GUI-based test automation, despite much research and several proposed solutions. A test script needs to be resilient to (minor) changes in the tested application but, at the same time, fail when detecting potential issues that require investigation. Test script fragility is a multi-faceted problem. However, one crucial challenge is how to reliably identify and locate the correct target web elements when the website evolves between releases, or otherwise fail and report an issue. This article proposes and evaluates a novel approach called similarity-based web element localization (Similo), which leverages information from multiple web element locator parameters to identify a target element using a weighted similarity score. This experimental study compares Similo to a baseline approach for web element localization. To get an extensive empirical basis, we target 48 of the most popular websites on the Internet in our evaluation. Robustness is considered by counting the number of web elements found in a recent website version compared to how many of these existed in an older version. Results of the experiment show that Similo outperforms the baseline; it failed to locate the correct target web element in 91 out of 801 considered cases (i.e., 11%), compared to 214 failed cases (i.e., 27%) for the baseline approach. Time efficiency was also considered: the average time to locate a web element with Similo was 4 milliseconds. Since the cost of a web interaction (e.g., a click) is typically on the order of hundreds of milliseconds, the additional computational demand of Similo can be considered negligible. This study presents evidence that quantifying the similarity between multiple attributes of web elements when trying to locate them, as in our proposed Similo approach, is beneficial. With acceptable efficiency, Similo gives significantly higher effectiveness (i.e., robustness) than the baseline web element localization approach.
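The multi-locator idea behind Similo can be illustrated with a minimal sketch. The attribute set, the weights, and the crude exact-match similarity function below are illustrative assumptions, not the parameters or similarity metrics used in the published implementation:

```python
# Sketch of weighted, multi-attribute web element localization.
# Attribute names, weights, and the similarity function are
# hypothetical; the published Similo approach uses its own set.

def similarity(a, b):
    """1.0 on an exact attribute match, 0.0 otherwise (deliberately crude)."""
    if a is None or b is None:
        return 0.0
    return 1.0 if a == b else 0.0

# Hypothetical weights per locator parameter.
WEIGHTS = {"tag": 0.5, "id": 1.5, "text": 1.0, "xpath": 0.5}

def similo_score(target, candidate):
    """Weighted sum of per-attribute similarities between two elements."""
    return sum(w * similarity(target.get(k), candidate.get(k))
               for k, w in WEIGHTS.items())

def locate(target, candidates):
    """Pick the candidate with the highest weighted similarity score."""
    return max(candidates, key=lambda c: similo_score(target, c))
```

Here an element whose `id` and `tag` still match is located even though its visible text changed between releases, which is the robustness argument made in the abstract.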

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
GUI testing, test automation, test case robustness, web element locators, XPath locators
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-25065 (URN); 10.1145/3571855 (DOI); 001002573400020
Funder
Knowledge Foundation, 20180010; Swedish Research Council, 2015-04913; Swedish Research Council, 2020-05272
Available from: 2023-06-30. Created: 2023-06-30. Last updated: 2023-11-22. Bibliographically approved
2. Robust web element identification for evolving applications by considering visual overlaps
2023 (English). In: Proceedings - 2023 IEEE 16th International Conference on Software Testing, Verification and Validation, ICST 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 258-268. Conference paper, Published paper (Refereed)
Abstract [en]

Fragile (i.e., non-robust) test execution is a common challenge for automated GUI-based testing of web applications as they evolve. Despite recent progress, there is still room for improvement, since test execution failures caused by technical limitations result in unnecessary maintenance costs that limit its effectiveness and efficiency. One of the most reported technical challenges for web-based tests concerns how to reliably locate a web element used by a test script. This paper proposes the novel concept of Visually Overlapping Nodes (VON), which reduces fragility by exploiting the phenomenon that a visual web element (as observed by the user) is constructed from multiple elements in the Document Object Model (DOM) that overlap visually. We demonstrate the approach in a tool, VON Similo, which extends the state-of-the-art multi-locator approach (Similo), also used as the baseline for an experiment. In the experiment, a ground-truth set of 1163 manually collected web element pairs, from different releases of the 40 most popular web applications on the internet, is used to compare the approaches' precision, recall, and accuracy. Our results show that VON Similo provides 94.7% accuracy in identifying a web element in a new release of the same SUT, compared to 83.8% accuracy for Similo. These results demonstrate the applicability of the visually overlapping nodes concept and tool for web element localization in evolving web applications and contribute a novel way of thinking about web element localization in future research on GUI-based testing. © 2023 IEEE.
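The geometric test at the core of the VON concept can be sketched as follows. The rectangle format (x, y, width, height) and the 90% overlap threshold are assumptions for illustration, not the paper's exact definition:

```python
# Sketch of the visually-overlapping-nodes idea: DOM nodes whose
# bounding boxes largely coincide are treated as one visual element.
# Rectangle format and threshold are illustrative assumptions.

def overlap_ratio(r1, r2):
    """Intersection area divided by the smaller rectangle's area.
    Rectangles are (x, y, width, height) tuples."""
    ix = max(0, min(r1[0] + r1[2], r2[0] + r2[2]) - max(r1[0], r2[0]))
    iy = max(0, min(r1[1] + r1[3], r2[1] + r2[3]) - max(r1[1], r2[1]))
    inter = ix * iy
    smaller = min(r1[2] * r1[3], r2[2] * r2[3])
    return inter / smaller if smaller else 0.0

def visually_overlapping(r1, r2, threshold=0.9):
    """True if the two boxes likely render as the same visual element."""
    return overlap_ratio(r1, r2) >= threshold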

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
component, formatting, insert, style, styling
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-25063 (URN); 10.1109/ICST57152.2023.00032 (DOI); 001009201200025; 2-s2.0-85161886412 (Scopus ID); 9781665456661 (ISBN)
Conference
16th IEEE International Conference on Software Testing, Verification and Validation, ICST 2023, Dublin, 16 April 2023 through 20 April 2023
Available from: 2023-06-30. Created: 2023-06-30. Last updated: 2023-11-22. Bibliographically approved
3. On the Industrial Applicability of Augmented Testing: An Empirical Study
2020 (English). In: Proceedings - 2020 IEEE 13th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2020, Institute of Electrical and Electronics Engineers Inc., 2020, p. 364-371, article id 9155725. Conference paper, Published paper (Refereed)
Abstract [en]

Testing applications with graphical user interfaces (GUIs) is an important but time-consuming task in practice. Tools and frameworks for GUI test automation can make test execution more efficient and lower the manual labor required for regression testing. However, the test scripts used for automated GUI-based testing still require a substantial development effort and are often reported as sensitive to change, leading to frequent and costly maintenance. The efficiency of development, maintenance, and evolution of such tests thereby depends on the readability of the scripts and the ease of use of the test tools/frameworks in which they are defined. To address these shortcomings in existing state-of-practice techniques, a novel technique referred to as Augmented Testing (AT) has been proposed. AT is defined as testing the System Under Test (SUT) through an Augmented GUI that superimposes information on top of the SUT GUI. The Augmented GUI can provide the user with hints, test data, or other support while also observing and recording the tester's interactions. For this study, a prototype tool called Scout, which adheres to the AT concept, is evaluated in an industrial empirical study. In the evaluation, quasi-experiments and questionnaire surveys are performed in two workshops with 12 practitioners from two Swedish companies (Ericsson and Inceptive). Results show that Scout can be used to create equivalent test cases faster, with statistical significance, than creating automated scripts in two popular state-of-practice tools. The study concludes that AT has cost-value benefits, applies to industrial-grade software, and overcomes several deficiencies of state-of-practice GUI testing technologies in terms of ease of use. © 2020 IEEE.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2020
Keywords
Augmented Testing, Industrial Case Study, System Testing, Test Automation, Automation, Graphical user interfaces, Surveys, Testing, Verification, Automated scripts, Empirical studies, Graphical user interfaces (GUI), Questionnaire surveys, Regression testing, Statistical significance, System under test, Time-consuming tasks, Software testing
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-20530 (URN); 10.1109/ICSTW50294.2020.00065 (DOI); 000620795100048; 9781728110752 (ISBN)
Conference
13th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2020, Porto, Portugal, 23 March 2020 through 27 March 2020
Available from: 2020-10-09. Created: 2020-10-09. Last updated: 2023-11-22. Bibliographically approved
4. Why many challenges with GUI test automation (will) remain
2021 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 138, article id 106625. Article in journal (Refereed), Published
Abstract [en]

Context: Automated testing is ubiquitous in modern software development and used to verify requirement conformance on all levels of system abstraction, including the system's graphical user interface (GUI). GUI-based test automation, like other automation, aims to reduce the cost and time of testing compared to alternative, manual approaches. Automation has been successful in reducing costs for other forms of testing (such as unit or integration testing) in industrial practice. However, we have not yet seen the same convincing results for automated GUI-based testing, which has instead been associated with multiple technical challenges. Furthermore, the software industry has struggled with some of these challenges for more than a decade, with what seems like only limited progress. Objective: This systematic literature review takes a longitudinal perspective on GUI test automation challenges by identifying them and then investigating why the field has been unable to mitigate them for so many years. Method: The review is based on a final set of 49 publications, all reporting empirical evidence from practice or industrial studies. Statements from the publications are synthesized, based on thematic coding, into 24 challenges related to GUI test automation. Results: The most reported challenges were mapped chronologically and further analyzed to determine how they and their proposed solutions have evolved over time. This chronological mapping shows that four of the challenges have existed for almost two decades. Conclusion: Based on the analysis, we discuss why the key challenges with GUI-based test automation are still present and why some will likely remain in the future. For others, we discuss possible ways in which the challenges can be addressed. Further research should focus on finding solutions to those identified technical challenges with GUI-based test automation that can be resolved or mitigated. However, in parallel, we also need to acknowledge and try to overcome non-technical challenges. © 2021

Place, publisher, year, edition, pages
Elsevier B.V., 2021
Keywords
GUI testing, System testing, Systematic literature review, Test automation, Automation, Cost reduction, Graphical user interfaces, Software design, Testing, Automated testing, Finding solutions, Graphical user interfaces (GUI), Industrial practices, Software industry, Technical challenges, Integration testing
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-21481 (URN); 10.1016/j.infsof.2021.106625 (DOI); 000672531500005; 2-s2.0-85106242621 (Scopus ID)
Funder
Knowledge Foundation
Available from: 2021-06-04. Created: 2021-06-04. Last updated: 2023-11-22. Bibliographically approved
5. Augmented testing: Industry feedback to shape a new testing technology
2019 (English). In: Proceedings - 2019 IEEE 12th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2019, Institute of Electrical and Electronics Engineers Inc., 2019, p. 176-183. Conference paper, Published paper (Refereed)
Abstract [en]

Manual testing is the most commonly used approach in industry today for acceptance and system testing of software applications. Test automation has been suggested to address the drawbacks of manual testing, but both test automation and manual testing have several challenges that limit their return on investment for system- and acceptance-test automation. Hence, there is still an industrial need for another approach to testing that can mitigate the challenges associated with system and acceptance testing and make it more efficient and cost-effective for the industry. In this paper, we present a novel technique we refer to as Augmented Testing (AT). AT is defined as testing through a visual layer between the tester and the System Under Test (SUT) that superimposes information on top of the GUI. We created a prototype for AT and performed an industrial workshop study with 10 software developers to elicit their perceived benefits and drawbacks of AT. These benefits and drawbacks will be useful for further development of the AT technique and prototype. The workshop study identified more benefits than drawbacks with AT. Two of the identified benefits were: 'Know what to test and what has been tested' and 'Less manual work'. Based on these results, we believe that AT is a promising technique that deserves more research, since it may provide industry with benefits that current techniques lack. © 2019 IEEE.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2019
Series
IEEE International Conference on Software Testing Verification and Validation Workshops
Keywords
Augmented Testing, Industrial Workshop Study, System Testing, Test Automation, Application programs, Automation, Cost effectiveness, Software prototyping, Software testing, System theory, Verification, Acceptance testing, Return of investments, Software applications, Software developer, Testing technology, Acceptance tests
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-18591 (URN); 10.1109/ICSTW.2019.00048 (DOI); 000477742600024; 2-s2.0-85068385060 (Scopus ID); 9781728108889 (ISBN)
Conference
12th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW, Xi'an, China, 22 April 2019 through 27 April 2019
Available from: 2019-09-09. Created: 2019-09-09. Last updated: 2023-11-22. Bibliographically approved
6. Improving Web Element Localization by Using a Large Language Model
2024 (English). In: Software Testing, Verification & Reliability, ISSN 0960-0833, E-ISSN 1099-1689, Vol. 34, no. 7. Article in journal (Refereed), Published
Abstract [en]

Web-based test automation relies heavily on accurately finding web elements. Traditional methods compare attributes but do not grasp the context and meaning of elements and words. The emergence of Large Language Models (LLMs) like GPT-4, which show human-like reasoning abilities on some tasks, offers new opportunities for software engineering and web element localization. This paper introduces and evaluates VON Similo LLM, an enhanced web element localization approach. Using an LLM, it selects the most likely web element from the top-ranked candidates identified by the existing VON Similo method, aiming to get closer to human-like selection accuracy. An experimental study was conducted using 804 web element pairs from 48 real-world web applications. We measured the number of correctly identified elements as well as execution times, comparing the effectiveness and efficiency of VON Similo LLM against the baseline algorithm. In addition, motivations from the LLM were recorded and analyzed for all instances where the original approach failed to find the right web element. VON Similo LLM demonstrated improved performance, reducing failed localizations from 70 to 39 (out of 804), a 44 percent reduction. Despite its slower execution time and the additional costs of using the GPT-4 model, the LLM's human-like reasoning showed promise in enhancing web element localization. LLM technology can enhance web element identification in GUI test automation, reducing false positives and potentially lowering maintenance costs. However, further research is necessary to fully understand LLMs' capabilities, limitations, and practical use in GUI testing.
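The selection step described in the abstract, where an LLM picks among the top-ranked candidates, can be sketched as a re-ranking function. Here `ask_llm` is a hypothetical callable standing in for a chat-completion API, and the prompt wording is illustrative only, not the prompt used in the study:

```python
# Sketch of LLM-assisted candidate selection: a similarity ranker
# proposes top candidates and an LLM picks the most plausible one.
# `ask_llm` is a hypothetical callable; prompt wording is illustrative.

def pick_with_llm(target_desc, candidates, ask_llm):
    """Ask the LLM to choose among ranked candidates; fall back to the
    top-ranked candidate if the answer cannot be parsed."""
    prompt = (
        "A web page changed between releases. The original element was:\n"
        f"{target_desc}\n"
        "Which numbered candidate is most likely the same element?\n"
        + "\n".join(f"{i}: {c}" for i, c in enumerate(candidates))
        + "\nAnswer with the number only."
    )
    answer = ask_llm(prompt)
    try:
        idx = int(answer.strip())
    except ValueError:
        idx = 0  # unparseable reply: keep the ranker's top choice
    return candidates[idx] if 0 <= idx < len(candidates) else candidates[0]
```

The fallback to the top-ranked candidate reflects a natural design choice when the LLM reply is unusable: the system then behaves no worse than the baseline ranker.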

Place, publisher, year, edition, pages
John Wiley & Sons, 2024
Keywords
GUI Testing, Test Automation, Test Case Robustness, Web Element Locators, Large Language Models
National Category
Computer Systems
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-25637 (URN); 10.1002/stvr.1893 (DOI); 001290853000001; 2-s2.0-85201296537 (Scopus ID)
Funder
Knowledge Foundation, 20180010
Available from: 2023-11-22. Created: 2023-11-22. Last updated: 2025-01-03. Bibliographically approved

Open Access in DiVA

fulltext (5967 kB), 692 downloads
File information
File name: FULLTEXT01.pdf. File size: 5967 kB. Checksum (SHA-512):
50af9cd9c91fa7130e21dc4a3ed1aef700cbd94b845901de7b643aa9c673ccf0a7448b112dabdfc7747d98ea52fe6968c7728fcbd171034f066fd91f62a970f8
Type: fulltext. Mimetype: application/pdf

Authority records

Nass, Michel
