Towards Collaborative GUI-based Testing
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0002-2916-4020
2023 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Context: Contemporary software development is a socio-technical activity requiring extensive collaboration among individuals with diverse expertise.

Software testing is an integral part of software development that also depends on various expertise.

GUI-based testing assesses a system and its behavior through its graphical user interface.

Collaborative practices in software development, like code reviews, not only improve software quality but also promote knowledge exchange within teams.

Similar benefits could be extended to other areas of software engineering, such as GUI-based testing.

However, collaborative practices for GUI-based testing require a tailored approach, since general software development practices cannot simply be transferred to software testing.

Goal: This thesis contributes towards a tool-supported approach enabling collaborative GUI-based testing.

Our distinct goals are (1) to identify processes and guidelines to enable collaboration on GUI-based testing artifacts and (2) to operationalize tool support to aid this collaboration.

Method: We conducted a systematic literature review identifying code review guidelines for GUI-based testing.

Further, we conducted a controlled experiment to assess the efficiency and potential usability issues of Augmented Testing.

Results: We provide guidelines for reviewing GUI-based testing artifacts, which aid contributors and reviewers during code reviews.

We further provide empirical evidence that Augmented Testing is not only an efficient approach to GUI-based testing but also usable for non-technical users, making it a promising subject for further research in collaborative GUI-based testing.

Conclusion: Code review guidelines aid collaboration through discussions, and a suitable testing approach can serve as a platform to operationalize collaboration.

Collaborative GUI-based testing has the potential to improve the efficiency and effectiveness of such testing.

Place, publisher, year, edition, pages
Karlskrona, Sweden: Blekinge Tekniska Högskola, 2023.
Series
Blekinge Institute of Technology Licentiate Dissertation Series, ISSN 1650-2140 ; 2023:10
Keywords [en]
software testing, GUI testing, GUI-based testing, collaborative testing, code review
National Category
Software Engineering
Research subject
Systems Engineering
Identifiers
URN: urn:nbn:se:bth-25392
ISBN: 978-91-7295-469-4 (print)
OAI: oai:DiVA.org:bth-25392
DiVA, id: diva2:1797949
Presentation
2023-11-01, J1630 + Zoom, BTH, Karlskrona, 13:00 (English)
Part of project
SERT - Software Engineering ReThought, Knowledge Foundation
Available from: 2023-09-18. Created: 2023-09-18. Last updated: 2023-12-05. Bibliographically approved.
List of papers
1. Augmented Testing to support Manual GUI-based Regression Testing: An Empirical Study
2024 (English). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 29, no. 6, article id 140. Article in journal (Refereed). Published.
Abstract [en]

Context: Manual graphical user interface (GUI) software testing represents a substantial part of the overall practiced testing effort, despite various research efforts to further increase test automation. Augmented Testing (AT), a novel approach for GUI testing, aims to aid manual GUI-based testing through a tool-supported approach in which an intermediary visual layer is rendered between the system under test (SUT) and the tester, superimposing relevant test information.

Objective: The primary objective of this study is to gather empirical evidence regarding AT's efficiency compared to manual GUI-based regression testing. Existing studies involving testing approaches under the AT definition primarily focus on exploratory GUI testing, leaving a gap in the context of regression testing. As a secondary objective, we investigate AT's benefits, drawbacks, and usability issues when deployed with the demonstrator tool, Scout.

Method: We conducted an experiment involving 13 industry professionals from six companies, comparing AT to manual GUI-based regression testing. These results were complemented by interviews and Bayesian data analysis (BDA) of the study's quantitative results.

Results: The results of the Bayesian data analysis revealed that the use of AT shortens test durations in 70% of the cases on average, concluding that AT is more efficient.When comparing the means of the total duration to perform all tests, AT reduced the test duration by 36% in total. Participant interviews highlighted nine benefits and eleven drawbacks of AT, while observations revealed four usability issues.

Conclusion: This study makes an empirical contribution to understanding Augmented Testing, a promising approach to improving the efficiency of GUI-based regression testing in practice. Furthermore, it underscores the importance of continual refinement of AT.
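As a rough, self-contained illustration of the kind of paired-duration comparison and Bayesian summary described in the abstract, consider the sketch below. All durations, the session count, and the Beta(1, 1) prior are invented for illustration; they are not the study's data or its actual BDA model.

```python
# Hypothetical paired test durations (seconds): manual GUI-based regression
# testing vs. Augmented Testing (AT). All numbers are invented for this sketch.
manual = [620, 540, 710, 480, 655, 590, 730, 505, 660, 575, 690, 520, 610]
at = [410, 380, 450, 500, 400, 395, 470, 515, 420, 390, 445, 510, 405]

# Count the sessions in which AT was faster (cf. the "70% of the cases" figure).
faster = sum(a < m for a, m in zip(at, manual))
n = len(manual)

# With a uniform Beta(1, 1) prior on the probability that AT is faster,
# the posterior is Beta(faster + 1, n - faster + 1); its mean by conjugacy:
posterior_mean = (faster + 1) / (n + 2)

# Overall reduction in total duration (cf. the reported 36% reduction).
reduction = 1 - sum(at) / sum(manual)

print(f"AT faster in {faster}/{n} sessions")
print(f"posterior mean of P(AT faster): {posterior_mean:.2f}")
print(f"total duration reduced by {reduction:.0%}")
```

The invented data happen to yield a smaller overall reduction than the study reports; the point is only the shape of the comparison, not its numbers.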

Place, publisher, year, edition, pages
Springer, 2024
Keywords
GUI-based testing, GUI testing, Augmented Testing, manual testing, Bayesian data analysis
National Category
Software Engineering
Research subject
Systems Engineering
Identifiers
urn:nbn:se:bth-25391 (URN)
10.1007/s10664-024-10522-z (DOI)
001292331700002
2-s2.0-85201391671 (Scopus ID)
Funder
Knowledge Foundation, 20180010
Available from: 2023-09-18. Created: 2023-09-18. Last updated: 2024-08-30. Bibliographically approved.
2. We Tried and Failed: An Experience Report on a Collaborative Workflow for GUI-based Testing
2023 (English). In: Proceedings - 2023 IEEE 16th International Conference on Software Testing, Verification and Validation Workshops, ICSTW, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1-9. Conference paper, Published paper (Refereed).
Abstract [en]

Modern software development is a team-based effort supported by tools, processes, and practices. One integral part is automated testing, where developers incorporate automated tests at multiple levels of system abstraction, from low-level unit tests to high-level system tests and graphical user interface (GUI) tests. Furthermore, the common practice of code review enables collaboration on artifacts through discussions that improve the artifact's quality and share information within the team. However, the characteristics of GUI-based tests, due to their level of abstraction and visual elements, introduce additional requirements and complexities compared to reviews of source code or lower-level test code, limiting the benefits of the practice.

The objective of this work is to propose a tool-supported workflow that enables active collaboration among stakeholders and improves the efficiency and effectiveness of team-based development of GUI-based tests. To evaluate the workflow and show proof of concept, a technical demonstrator for merging GUI-based tests was to be developed. However, during its development we encountered several unforeseen challenges that forced us to halt the work. We report the negative results from this development and the main challenges we encountered, as well as the rationale and the decisions we took towards this workflow.

In conclusion, this work presents a negative research result on a failed attempt to propose a tool-supported workflow that enables active collaboration on GUI-based tests. The outcome and learnings of this work are intended to guide future research and prevent researchers from falling into the same pitfalls we did. © 2023 IEEE.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW, ISSN 2159-4848
Keywords
automated testing, collaborative testing, collaborative workflow, GUI testing, model-based testing, Abstracting, Automation, Model checking, Software design, Software testing, Code review, Experience report, Graphical user interface testing, Integral part, Interface testings, Model based testing, Work-flows, Graphical user interfaces
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-25204 (URN)
10.1109/ICSTW58534.2023.00015 (DOI)
001009223100001
2-s2.0-85163117973 (Scopus ID)
9798350333350 (ISBN)
Conference
16th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2023, Dublin, 16 April through 20 April 2023
Funder
Knowledge Foundation, 20180010
Available from: 2023-08-06. Created: 2023-08-06. Last updated: 2023-09-21. Bibliographically approved.
3. Code review guidelines for GUI-based testing artifacts
2023 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 163, article id 107299. Article, review/survey (Refereed). Published.
Abstract [en]

Context: Review of software artifacts, such as source or test code, is a common practice in industry. However, although review guidelines are available for source code and low-level test code, such guidelines are missing for GUI-based testing artifacts.

Objective: The goal of this work is to define a set of guidelines, drawn from the literature on production and test code, that can be mapped to GUI-based testing artifacts.

Method: A systematic literature review was conducted, using white and gray literature, to identify guidelines for source and test code. These synthesized guidelines were then mapped, through examples, to create actionable and applicable guidelines for GUI-based testing artifacts.

Results: The study yields 33 guidelines, summarized in nine guideline categories, that were successfully mapped as applicable to GUI-based testing artifacts. Of the collected literature, only 10 sources contained test-specific code review guidelines. The guideline categories are: perform automated checks, use checklists, provide context information, utilize metrics, ensure readability, visualize changes, reduce complexity, check conformity with the requirements, and follow design principles and patterns.

Conclusion: This set of guidelines provides an industrial contribution by filling the gap of general guidelines for the review of GUI-based testing artifacts. Additionally, this work highlights, from an academic perspective, the need for future research to develop guidelines for other specific aspects of GUI-based testing practice, and to take into account facets of the review process not covered by this work, such as reviewer selection. © 2023 The Author(s)
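As a hypothetical sketch of how the nine guideline categories listed in the abstract might be operationalized in a review tool, the snippet below encodes them as a simple checklist. The category names come from the abstract; the example questions and the `review_report` helper are invented for this sketch and are not part of the paper.

```python
# The nine guideline categories from the study, operationalized as a simple
# review checklist for GUI-based test artifacts. Category names are from the
# abstract; the example questions are invented for illustration.
CHECKLIST = {
    "perform automated checks": "Do linters/static checks pass for the test scripts?",
    "use checklists": "Was a review checklist applied to this change?",
    "provide context information": "Does the change description explain the tested GUI flow?",
    "utilize metrics": "Are locator stability or flakiness metrics reported?",
    "ensure readability": "Are test steps and element locators clearly named?",
    "visualize changes": "Are before/after screenshots of affected screens attached?",
    "reduce complexity": "Does each test cover a single, focused GUI scenario?",
    "check conformity with the requirements": "Does the test trace to a requirement?",
    "follow design principles and patterns": "Are established GUI-test patterns used consistently?",
}

def review_report(answers):
    """Return the categories whose check was answered 'no' (hypothetical helper)."""
    return [category for category, ok in answers.items() if not ok]

# Example usage with invented reviewer answers:
answers = {category: True for category in CHECKLIST}
answers["visualize changes"] = False
print(review_report(answers))  # categories needing reviewer attention
```

A real deployment would attach evidence (links, screenshots, metric values) to each answer rather than a bare boolean, but the flat mapping keeps the sketch minimal.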

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Code review, GUI testing, GUI-based testing, Guidelines, Modern code review, Practices, Software testing, Graphical user interfaces, Guideline, Practice, Software testings, Source codes, Test code
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-25235 (URN)
10.1016/j.infsof.2023.107299 (DOI)
001051358500001
2-s2.0-85165535690 (Scopus ID)
Funder
Knowledge Foundation, 20180010
Available from: 2023-08-08. Created: 2023-08-08. Last updated: 2023-09-18. Bibliographically approved.

Open Access in DiVA

fulltext (4274 kB), 128 downloads
File information
File name: FULLTEXT01.pdf. File size: 4274 kB. Checksum: SHA-512
27e8e2d1c5bc90624fbecd42754e16c5b6287a89b60f015a9c4ab638c87636617ef5f61e8a041b196166966baf07cdb209ecea7533910a768c44d94d1761bab1
Type: fulltext. Mimetype: application/pdf

Authority records

Bauer, Andreas
