Project

Project type/Form of grant
Grant to research environment
Title [en]
SERT- Software Engineering ReThought
Abstract [en]
SERT – Software Engineering ReThought is a groundbreaking research project with the aim of taking on the next-generation challenges facing companies that develop software-intensive systems and products. As an engineering lab, we are blazing the trail by introducing 3rd-generation empirical software engineering – denoting close co-production and pragmatic problem solving in collaboration with our industrial partners as we perform engineering research on topics critical for engineering and business success. SERT's formulation of 3rd-generation empirical software engineering will utilize related knowledge areas as catalysts to solve challenges: value-based engineering, data-driven evidence-based engineering, and human-based development will complement software engineering competence in an integrated ecosystem of competence focused on the challenges at hand. All areas of software engineering, ranging from inception and realization to evolution, are part of the research venture – reflecting that companies need solutions covering their entire ecosystem.
Publications (10 of 128)
Peixoto, M., Gorschek, T., Mendez, D., Fucci, D. & Silva, C. (2024). A natural language-based method to specify privacy requirements: an evaluation with practitioners. Requirements Engineering, 29(3), 279-301
2024 (English) In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 29, no 3, p. 279-301. Article in journal (Refereed). Published
Abstract [en]

Organisations are becoming concerned with effectively dealing with privacy-related requirements. Existing Requirements Engineering methods based on structured natural language suffer from several limitations both in eliciting and specifying privacy requirements. In our previous study, we proposed a structured natural-language approach called the “Privacy Criteria Method” (PCM), which demonstrates potential advantages over user stories. Our goal is to present a PCM evaluation that focused on the opinions of software practitioners from different companies on PCM’s ability to support the specification of privacy requirements and the quality of the privacy requirements specifications produced by these software practitioners. We conducted a multiple case study to evaluate PCM in four different industrial contexts. We gathered and analysed the opinions of 21 practitioners on PCM usage regarding Coverage, Applicability, Usefulness, and Scalability. Moreover, we assessed the syntactic and semantic quality of the PCM artifacts produced by these practitioners. PCM can aid developers in elaborating requirements specifications focused on privacy with good quality. The practitioners found PCM to be useful for their companies’ development processes. PCM is considered a promising method for specifying privacy requirements. Some slight extensions of PCM may be required to tailor the method to the characteristics of the company. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2024
Keywords
Empirical study, Privacy criteria method, Privacy requirements specification, Software development, Quality control, Requirements engineering, Semantics, Software design, Empirical studies, Engineering methods, Natural languages, Privacy requirement specification, Privacy requirements, Requirement engineering, Requirements specifications, Software practitioners, User stories, Specifications
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-26772 (URN); 10.1007/s00766-024-00428-z (DOI); 001272283700001 (); 2-s2.0-85198939572 (Scopus ID)
Funder
Knowledge Foundation, 20180010
Available from: 2024-08-09 Created: 2024-08-09 Last updated: 2024-09-19. Bibliographically approved
Frattini, J., Fucci, D., Torkar, R. & Mendez, D. (2024). A Second Look at the Impact of Passive Voice Requirements on Domain Modeling: Bayesian Reanalysis of an Experiment. In: Proceedings of the 2024 IEEE/ACM international workshop on methodological issues with empirical studies in software engineering, WSESE 2024: . Paper presented at International Workshop on Methodological Issues with Empirical Studies in Software Engineering (WSESE), Lisbon, APR 16, 2024 (pp. 27-33). Association for Computing Machinery (ACM)
2024 (English) In: Proceedings of the 2024 IEEE/ACM international workshop on methodological issues with empirical studies in software engineering, WSESE 2024, Association for Computing Machinery (ACM), 2024, p. 27-33. Conference paper, Published paper (Refereed)
Abstract [en]

The quality of requirements specifications may impact subsequent, dependent software engineering (SE) activities. However, empirical evidence of this impact remains scarce and too often superficial as studies abstract from the phenomena under investigation too much. Two of these abstractions are caused by the lack of frameworks for causal inference and frequentist methods which reduce complex data to binary results. In this study, we aim to demonstrate (1) the use of a causal framework and (2) contrast frequentist methods with more sophisticated Bayesian statistics for causal inference. To this end, we reanalyze the only known controlled experiment investigating the impact of passive voice on the subsequent activity of domain modeling. We follow a framework for statistical causal inference and employ Bayesian data analysis methods to re-investigate the hypotheses of the original study. Our results reveal that the effects observed by the original authors turned out to be much less significant than previously assumed. This study supports the recent call to action in SE research to adopt Bayesian data analysis, including causal frameworks and Bayesian statistics, for more sophisticated causal inference.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
Requirements Engineering, Requirements Quality, Controlled experiment, Bayesian Data Analysis
National Category
Software Engineering; Probability Theory and Statistics
Identifiers
urn:nbn:se:bth-26968 (URN); 10.1145/3643664.3618211 (DOI); 001293147200006 (); 9798400705670 (ISBN)
Conference
International Workshop on Methodological Issues with Empirical Studies in Software Engineering (WSESE), Lisbon, APR 16, 2024
Funder
Knowledge Foundation, 20180010
Available from: 2024-10-03 Created: 2024-10-03 Last updated: 2024-10-03. Bibliographically approved
Jedrzejewski, F., Thode, L., Fischbach, J., Gorschek, T., Mendez, D. & Lavesson, N. (2024). Adversarial Machine Learning in Industry: A Systematic Literature Review. Computers & security (Print), 145, Article ID 103988.
2024 (English) In: Computers & security (Print), ISSN 0167-4048, E-ISSN 1872-6208, Vol. 145, article id 103988. Article, review/survey (Refereed). Published
Abstract [en]

Adversarial Machine Learning (AML) discusses the act of attacking and defending Machine Learning (ML) models, an essential building block of Artificial Intelligence (AI). ML is applied in many software-intensive products and services and introduces new opportunities and security challenges. AI and ML will gain even more attention from the industry in the future, but threats caused by already-discovered attacks specifically targeting ML models are either overlooked, ignored, or mishandled. Current AML research investigates attack and defense scenarios for ML in different industrial settings with a varying degree of maturity with regard to academic rigor and practical relevance. However, to the best of our knowledge, a synthesis of the state of academic rigor and practical relevance is missing. This literature study reviews studies in the area of AML in the context of industry, measuring and analyzing each study's rigor and relevance scores. Overall, all studies scored high on rigor and low on relevance, indicating that the studies are thoroughly designed and documented but miss the opportunity to include touch points relatable for practitioners. © 2024 The Author(s)

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Adversarial machine learning, Industry, Relevance, Rigor, State of evidence, Industrial research, Building blockes, Machine learning models, Machine-learning, Product and services, Relevance score, Systematic literature review, Machine learning
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-26820 (URN); 10.1016/j.cose.2024.103988 (DOI); 001290393300001 (); 2-s2.0-85200501059 (Scopus ID)
Funder
Knowledge Foundation, 20180010
Available from: 2024-08-16 Created: 2024-08-16 Last updated: 2024-08-23. Bibliographically approved
Bauer, A., Frattini, J. & Alégroth, E. (2024). Augmented Testing to support Manual GUI-based Regression Testing: An Empirical Study. Empirical Software Engineering, 29(6), Article ID 140.
2024 (English) In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 29, no 6, article id 140. Article in journal (Refereed). Published
Abstract [en]

Context: Manual graphical user interface (GUI) software testing represents a substantial part of the overall practiced testing effort, despite various research efforts to further increase test automation. Augmented Testing (AT), a novel approach for GUI testing, aims to aid manual GUI-based testing through a tool-supported approach where an intermediary visual layer is rendered between the system under test (SUT) and the tester, superimposing relevant test information.

Objective: The primary objective of this study is to gather empirical evidence regarding AT's efficiency compared to manual GUI-based regression testing. Existing studies involving testing approaches under the AT definition primarily focus on exploratory GUI testing, leaving a gap in the context of regression testing. As a secondary objective, we investigate AT's benefits, drawbacks, and usability issues when deployed with the demonstrator tool, Scout.

Method: We conducted an experiment involving 13 industry professionals, from six companies, comparing AT to manual GUI-based regression testing. These results were complemented by interviews and Bayesian data analysis (BDA) of the study's quantitative results.

Results: The results of the Bayesian data analysis revealed that the use of AT shortens test durations in 70% of the cases on average, concluding that AT is more efficient. When comparing the means of the total duration to perform all tests, AT reduced the test duration by 36% in total. Participant interviews highlighted nine benefits and eleven drawbacks of AT, while observations revealed four usability issues.

Conclusion: This study makes an empirical contribution to understanding Augmented Testing, a promising approach to improve the efficiency of GUI-based regression testing in practice. Furthermore, it underscores the importance of continual refinements of AT.

Place, publisher, year, edition, pages
Springer, 2024
Keywords
GUI-based testing, GUI testing, Augmented Testing, manual testing, Bayesian data analysis
National Category
Software Engineering
Research subject
Systems Engineering
Identifiers
urn:nbn:se:bth-25391 (URN); 10.1007/s10664-024-10522-z (DOI); 001292331700002 (); 2-s2.0-85201391671 (Scopus ID)
Funder
Knowledge Foundation, 20180010
Available from: 2023-09-18 Created: 2023-09-18 Last updated: 2024-08-30. Bibliographically approved
Elahidoost, P., Unterkalmsteiner, M., Fucci, D., Liljenberg, P. & Fischbach, J. (2024). Designing NLP-Based Solutions for Requirements Variability Management: Experiences from a Design Science Study at Visma. In: Daniel Mendez, Ana Moreira (Ed.), Requirements Engineering: Foundation for Software Quality. Paper presented at 30th International Working Conference on Requirements Engineering: Foundation for Software Quality, REFSQ 2024, Winterthur, 8 April through 12 April 2024 (pp. 191-204). Springer Science+Business Media B.V.
2024 (English) In: Requirements Engineering: Foundation for Software Quality / [ed] Daniel Mendez, Ana Moreira, Springer Science+Business Media B.V., 2024, p. 191-204. Conference paper, Published paper (Refereed)
Abstract [en]

Context and motivation: In this industry-academia collaborative project, a team of researchers, supported by a software architect, business analyst, and test engineer, explored the challenges of requirement variability in a large business software development company. Question/problem: Following the design science paradigm, we studied the problem of requirements analysis and tracing in the context of contractual documents, with a specific focus on managing requirements variability. This paper reports on the lessons learned from that experience, highlighting the strategies and insights gained in the realm of requirements variability management. Principal ideas/results: This experience report outlines the insights gained from applying design science in requirements engineering research in industry. We show and evaluate various strategies to tackle the issue of requirement variability. Contribution: We report on the iterations and how the solution development evolved in parallel with problem understanding. From this process, we derive five key lessons learned to highlight the effectiveness of design science in exploring solutions for requirement variability in contract-based environments. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2024
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 14588
Keywords
Industry-academia collaboration, Lessons learned, Requirements variability management, Computer software selection and evaluation, Design, Industrial research, Project management, Software architecture, Software design, Software testing, Business analysts, Collaborative programs, Design science, Lesson learned, Requirement variability management, Requirements variability, Science studies, Software architects, Variability management, Requirements engineering
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-26155 (URN); 10.1007/978-3-031-57327-9_12 (DOI); 001209314200012 (); 2-s2.0-85190698479 (Scopus ID); 9783031573262 (ISBN)
Conference
30th International Working Conference on Requirements Engineering: Foundation for Software Quality, REFSQ 2024, Winterthur, 8 April through 12 April 2024
Funder
Knowledge Foundation, 20180010
Available from: 2024-04-30 Created: 2024-04-30 Last updated: 2024-05-30. Bibliographically approved
Fucci, D., Alégroth, E., Felderer, M. & Johannesson, C. (2024). Evaluating software security maturity using OWASP SAMM: Different approaches and stakeholders perceptions. Journal of Systems and Software, 214, Article ID 112062.
2024 (English) In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 214, article id 112062. Article in journal (Refereed). Published
Abstract [en]

Background: Recent years have seen a surge in cyber-attacks, which can be prevented or mitigated using software security activities. OWASP SAMM is a maturity model providing a versatile way for companies to assess their security posture and plan for improvements. Objective: We perform an initial SAMM assessment in collaboration with a company in the financial domain. Our objective is to assess a holistic inventory of the company's security-related activities, focusing on how different roles perform the assessment and how they perceive the instrument used in the process. Methodology: We perform a case study to collect data using SAMM in a lightweight and novel manner through assessment using an online survey with 17 participants and a focus group with seven participants. Results: We show that different roles perceive maturity differently and that the two assessments deviate only for specific practices, making the lightweight approach a viable and efficient solution in industrial practice. Our results indicate that the questions included in the SAMM assessment tool are answered easily and confidently across most roles. Discussion: Our results suggest that companies can productively use a lightweight SAMM assessment. We provide nine lessons learned for guiding industrial practitioners in the evaluation of their current security posture as well as for academics wanting to utilize SAMM as a research tool in industrial settings. Editor's note: Open Science material was validated by the Journal of Systems and Software Open Science Board. © 2024 The Author(s)

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Industry-academia collaboration, OWASP SAMM, Software security, Cybersecurity, Industrial research, Petroleum reservoir evaluation, Cyber-attacks, Evaluating software, Financial domains, Maturity model, Open science, Security activities, Stakeholder perception, Network security
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-26188 (URN); 10.1016/j.jss.2024.112062 (DOI); 001237888500001 (); 2-s2.0-85192019707 (Scopus ID)
Funder
Knowledge Foundation, 20180010
Available from: 2024-05-13 Created: 2024-05-13 Last updated: 2024-06-19. Bibliographically approved
Fucci, D. (2024). FAIR enough: A Vision for Research Objects in Empirical Software Engineering Studies. In: Proceedings - 2024 IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2024: . Paper presented at 1st International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2024, Lisbon, April 16, 2024 (pp. 64-67). Association for Computing Machinery (ACM)
2024 (English) In: Proceedings - 2024 IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2024, Association for Computing Machinery (ACM), 2024, p. 64-67. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, the software engineering research community has been fostering Open Science through several initiatives. Although the transparency fostered in Open Science can address some of the concerns related to appropriate study design and data analysis methods, the community still needs to fully embrace a set of guidelines for managing research data, such as the FAIR principles. A fundamental aspect of FAIR is the Research Object, i.e., a bundle of research artifacts and their metadata. In this paper, I show an example of a Research Object and discuss how formalized metadata can benefit the community. © 2024 Copyright is held by the owner/author(s). Publication rights licensed to ACM.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
empirical studies, FAIR, metadata, ontology, software engineering research, Design analysis method, Empirical Software Engineering, Ontology's, Open science, Research communities, Research object, Study design, Data consistency
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-26914 (URN); 10.1145/3643664.3648201 (DOI); 001293147200012 (); 2-s2.0-85203103938 (Scopus ID); 9798400705670 (ISBN)
Conference
1st International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2024, Lisbon, April 16, 2024
Funder
Knowledge Foundation, 20180010
Available from: 2024-09-16 Created: 2024-09-16 Last updated: 2024-10-03. Bibliographically approved
Unterkalmsteiner, M., Badampudi, D., Britto, R. & Ali, N. b. (2024). Help Me to Understand this Commit! - A Vision for Contextualized Code Reviews. In: Proceedings - 2024 1st IDE Workshop, IDE 2024: . Paper presented at 1st Integrated Development Environments Workshop, IDE 2024, Lisbon, April 20 2024 (pp. 18-23). Association for Computing Machinery (ACM)
2024 (English) In: Proceedings - 2024 1st IDE Workshop, IDE 2024, Association for Computing Machinery (ACM), 2024, p. 18-23. Conference paper, Published paper (Refereed)
Abstract [en]

Background: Modern Code Review (MCR) is a key component for delivering high-quality software and sharing knowledge among developers. Effective reviews require an in-depth understanding of the code and demand from the reviewers to contextualize the change from different perspectives.

Aim: While there is a plethora of research on solutions that support developers to understand changed code, we have observed that many provide only narrow, specialized insights and very few aggregate information in a meaningful manner. Therefore, we aim to provide a vision of improving code understanding in MCR.

Method: We classified 53 research papers suggesting proposals to improve MCR code understanding. We use this classification, the needs expressed by code reviewers from previous research, and the information we have not found in the literature for extrapolation.

Results: We identified four major types of support systems and suggest an environment for contextualized code reviews. Furthermore, we illustrate with a set of scenarios how such an environment would improve the effectiveness of code reviews.

Conclusions: Current research focuses mostly on providing narrow support for developers. We outline a vision for how MCR can be improved by using context and reducing the cognitive load on developers. We hope our vision can foster future advancements in development environments. 

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
code understanding, decision-making, modern code reviews, support systems, Reviews, Classifieds, Code review, Contextualize, Decisions makings, High-quality software, In-depth understanding, Modern code review, Sharing knowledge, Decision making
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-26892 (URN); 10.1145/3643796.3648447 (DOI); 001297920700005 (); 2-s2.0-85202436597 (Scopus ID); 9798400705809 (ISBN)
Conference
1st Integrated Development Environments Workshop, IDE 2024, Lisbon, April 20 2024
Funder
ELLIIT - The Linköping-Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20220235; Knowledge Foundation, 20180010
Available from: 2024-09-10 Created: 2024-09-10 Last updated: 2024-10-04. Bibliographically approved
Frattini, J. (2024). Identifying Relevant Factors of Requirements Quality: An Industrial Case Study. In: Daniel Mendez, Ana Moreira (Ed.), Requirements Engineering: Foundation for Software Quality. Paper presented at 30th International Working Conference on Requirements Engineering: Foundation for Software Quality, REFSQ 2024, Winterthur, 8 April through 12 April 2024 (pp. 20-36). Springer Science+Business Media B.V.
2024 (English) In: Requirements Engineering: Foundation for Software Quality / [ed] Daniel Mendez, Ana Moreira, Springer Science+Business Media B.V., 2024, p. 20-36. Conference paper, Published paper (Refereed)
Abstract [en]

[Context and Motivation]: The quality of requirements specifications impacts subsequent, dependent software engineering activities. Requirements quality defects like ambiguous statements can result in incomplete or wrong features and even lead to budget overrun or project failure. [Problem]: Attempts at measuring the impact of requirements quality have been held back by the vast amount of interacting factors. Requirements quality research lacks an understanding of which factors are relevant in practice. [Principal Ideas and Results]: We conduct a case study considering data from both interview transcripts and issue reports to identify relevant factors of requirements quality. The results include 17 factors and 11 interaction effects relevant to the case company. [Contribution]: The results contribute empirical evidence that (1) strengthens existing requirements engineering theories and (2) advances industry-relevant requirements quality research. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2024
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14588
Keywords
Case study, Interview, Requirements quality, Budget control, Software engineering, Budget overruns, Case-studies, Engineering activities, Industrial case study, Interaction effect, Project failures, Quality defects, Requirement quality, Requirements specifications, Requirements engineering
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-26153 (URN); 10.1007/978-3-031-57327-9_2 (DOI); 001209314200002 (); 2-s2.0-85190670743 (Scopus ID); 9783031573262 (ISBN)
Conference
30th International Working Conference on Requirements Engineering: Foundation for Software Quality, REFSQ 2024, Winterthur, 8 April through 12 April 2024
Funder
Knowledge Foundation, 20180010
Available from: 2024-04-30 Created: 2024-04-30 Last updated: 2024-05-30. Bibliographically approved
Nass, M., Alégroth, E. & Feldt, R. (2024). Improving Web Element Localization by Using a Large Language Model. Software testing, verification & reliability
2024 (English) In: Software testing, verification & reliability, ISSN 0960-0833, E-ISSN 1099-1689. Article in journal (Refereed). Epub ahead of print
Abstract [en]

Web-based test automation heavily relies on accurately finding web elements. Traditional methods compare attributes but don't grasp the context and meaning of elements and words. The emergence of Large Language Models (LLMs) like GPT-4, which can show human-like reasoning abilities on some tasks, offers new opportunities for software engineering and web element localization. This paper introduces and evaluates VON Similo LLM, an enhanced web element localization approach. Using an LLM, it selects the most likely web element from the top-ranked ones identified by the existing VON Similo method, ideally aiming to get closer to human-like selection accuracy. An experimental study was conducted using 804 web element pairs from 48 real-world web applications. We measured the number of correctly identified elements as well as the execution times, comparing the effectiveness and efficiency of VON Similo LLM against the baseline algorithm. In addition, motivations from the LLM were recorded and analyzed for all instances where the original approach failed to find the right web element. VON Similo LLM demonstrated improved performance, reducing failed localizations from 70 to 39 (out of 804), a 44 percent reduction. Despite its slower execution time and the additional costs of using the GPT-4 model, the LLM's human-like reasoning showed promise in enhancing web element localization. LLM technology can enhance web element identification in GUI test automation, reducing false positives and potentially lowering maintenance costs. However, further research is necessary to fully understand LLMs' capabilities, limitations, and practical use in GUI testing.

Place, publisher, year, edition, pages
John Wiley & Sons, 2024
Keywords
GUI Testing, Test Automation, Test Case Robustness, Web Element Locators, Large Language Models
National Category
Computer Systems
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-25637 (URN); 10.1002/stvr.1893 (DOI); 001290853000001 (); 2-s2.0-85201296537 (Scopus ID)
Funder
Knowledge Foundation, 20180010
Available from: 2023-11-22 Created: 2023-11-22 Last updated: 2024-09-10. Bibliographically approved
Principal Investigator
Gorschek, Tony
Coordinating organisation
Blekinge Institute of Technology
Funder
Period
2018-09-01 - 2026-09-01
National Category
Software Engineering
Identifiers
DiVA, id: project:2307
Project, id: 20180010

Link to external project page

SERT Web