Unterkalmsteiner, Michael (ORCID: orcid.org/0000-0003-4118-0952)
Publications (10 of 63)
Unterkalmsteiner, M. & Abdeen, W. (2023). A compendium and evaluation of taxonomy quality attributes. Expert systems (Print), 40(1), Article ID e13098.
2023 (English). In: Expert systems (Print), ISSN 0266-4720, E-ISSN 1468-0394, Vol. 40, no 1, article id e13098. Article in journal (Refereed). Published.
Abstract [en]

Introduction: Taxonomies capture knowledge about a particular domain in a succinct manner and establish a common understanding among peers. Researchers use taxonomies to convey information about a particular knowledge area or to support automation tasks, and practitioners use them to enable communication beyond organizational boundaries. Aims: Despite this important role of taxonomies in software engineering, their quality is seldom evaluated. Our aim is to identify and define taxonomy quality attributes that provide practical measurements, helping researchers and practitioners to compare taxonomies and choose the one most adequate for the task at hand. Methods: We reviewed 324 publications from software engineering and information systems research and synthesized, when provided, the definitions of quality attributes and measurements. We evaluated the usefulness of the measurements on six taxonomies from three domains. Results: We propose definitions of seven quality attributes and suggest internal and external measurements that can be used to assess a taxonomy's quality. For two measurements we provide implementations in Python. We found the measurements useful for deciding which taxonomy is best suited for a particular purpose. Conclusion: While several guidelines for creating taxonomies exist, there is a lack of actionable criteria for comparing taxonomies. In this paper, we fill this gap by synthesizing, from a wealth of literature, seven non-overlapping taxonomy quality attributes and corresponding measurements. Future work encompasses further evaluation of their usefulness and empirical validation.
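The abstract mentions internal structural measurements with Python implementations. As a hedged illustration only (not the paper's published code; the metric names and toy taxonomy below are invented for the example), two common structural measurements, depth and average fan-out, could be sketched as:

```python
# Hypothetical sketch of two internal taxonomy measurements (depth and
# average fan-out); illustrative only, not the authors' implementation.
from collections import deque


def taxonomy_depth(children, root):
    """Longest root-to-leaf path, counting the root as level 1."""
    depth = 0
    queue = deque([(root, 1)])
    while queue:
        node, level = queue.popleft()
        depth = max(depth, level)
        for child in children.get(node, []):
            queue.append((child, level + 1))
    return depth


def average_fanout(children):
    """Mean number of children over all internal (non-leaf) nodes."""
    internal = [kids for kids in children.values() if kids]
    return sum(len(kids) for kids in internal) / len(internal) if internal else 0.0


# Toy taxonomy as a parent -> children mapping (made up for illustration)
taxonomy = {
    "software quality": ["reliability", "usability"],
    "usability": ["learnability", "operability", "accessibility"],
}
```

Comparing such numbers across candidate taxonomies is the kind of actionable criterion the paper argues for.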

Place, publisher, year, edition, pages
John Wiley & Sons, 2023
Keywords
evaluation, measurements, quality attributes, taxonomy
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-23499 (URN); 10.1111/exsy.13098 (DOI); 000822883400001 (ISI); 2-s2.0-85133700489 (Scopus ID)
Funder
Swedish Transport Administration, D-CAT; Knowledge Foundation, 20180010
Available from: 2022-08-08. Created: 2022-08-08. Last updated: 2023-02-08. Bibliographically approved.
Abdeen, W., Chen, X. & Unterkalmsteiner, M. (2023). An approach for performance requirements verification and test environments generation. Requirements Engineering, 28(1), 117-144
2023 (English). In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 28, no 1, p. 117-144. Article in journal (Refereed). Published.
Abstract [en]

Model-based testing (MBT) is a method that supports the design and execution of test cases by models that specify the intended behaviors of a system under test. While systematic literature reviews on MBT in general exist, the state of the art on modeling and testing performance requirements has seen much less attention. Therefore, we conducted a systematic mapping study on model-based performance testing. Then, we studied natural language software requirements specifications in order to understand which and how performance requirements are typically specified. Since none of the identified MBT techniques supported a major benefit of modeling, namely identifying faults in requirements specifications, we developed the Performance Requirements verificatiOn and Test EnvironmentS generaTion approach (PRO-TEST). Finally, we evaluated PRO-TEST on 149 requirements specifications. We found and analyzed 57 primary studies from the systematic mapping study and extracted 50 performance requirements models. However, those models do not achieve the goals of MBT, which are validating requirements, ensuring their testability, and generating the minimum required test cases. We analyzed 77 Software Requirements Specification (SRS) documents, extracted 149 performance requirements from those SRS, and illustrate that with PRO-TEST we can model performance requirements, find issues in those requirements, and detect missing ones. We detected three not-quantifiable requirements, 43 not-quantified requirements, and 180 underspecified parameters in the 149 modeled performance requirements. Furthermore, we generated 96 test environments from those models. By modeling performance requirements with PRO-TEST, we can identify issues in the requirements related to their ambiguity, measurability, and completeness. Additionally, it allows generating parameters for test environments.

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Model-based testing, Performance requirements modeling, Performance aspects, Natural language requirements
National Category
Software Engineering; Computer Systems
Identifiers
urn:nbn:se:bth-22848 (URN); 10.1007/s00766-022-00379-3 (DOI); 000782347800001 (ISI); 2-s2.0-85128212480 (Scopus ID)
Funder
Swedish Transport Administration, DCAT project
Note

open access

Available from: 2022-04-21. Created: 2022-04-21. Last updated: 2023-06-19. Bibliographically approved.
Fischbach, J., Frattini, J., Vogelsang, A., Mendez, D., Unterkalmsteiner, M., Wehrle, A., . . . Wiecher, C. (2023). Automatic creation of acceptance tests by extracting conditionals from requirements: NLP approach and case study. Journal of Systems and Software, 197, Article ID 111549.
2023 (English). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 197, article id 111549. Article in journal (Refereed). Published.
Abstract [en]

Acceptance testing is crucial to determine whether a system fulfills end-user requirements. However, the creation of acceptance tests is a laborious task entailing two major challenges: (1) practitioners need to determine the right set of test cases that fully covers a requirement, and (2) they need to create test cases manually due to insufficient tool support. Existing approaches for automatically deriving test cases require semi-formal or even formal notations of requirements, though unrestricted natural language is prevalent in practice. In this paper, we present our tool-supported approach CiRA (Conditionals in Requirements Artifacts) capable of creating the minimal set of required test cases from conditional statements in informal requirements. We demonstrate the feasibility of CiRA in a case study with three industry partners. In our study, out of 578 manually created test cases, 71.8% can be generated automatically. Additionally, CiRA discovered 80 relevant test cases that were missed in manual test case design. CiRA is publicly available at www.cira.bth.se/demo/. © 2022
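The core idea of deriving a minimal test suite from a conditional in a requirement can be illustrated with a small sketch. This is not the CiRA implementation (the function and the example requirement below are invented for illustration); it merely shows the combinatorial intuition for a conjunctive conditional "if C1 and C2 then E":

```python
# Illustrative sketch only (not CiRA): a minimal test suite for a
# conjunctive conditional consists of one positive case plus one
# negative case per cause, where exactly that cause is violated.
def minimal_tests(causes, effect):
    """Return (inputs, (effect, expected)) pairs covering the conditional."""
    # Positive case: all causes hold, so the effect is expected.
    suite = [({c: True for c in causes}, (effect, True))]
    # Negative cases: violate exactly one cause at a time.
    for c in causes:
        inputs = {x: (x != c) for x in causes}  # only c is False
        suite.append((inputs, (effect, False)))
    return suite


suite = minimal_tests(
    ["user is authenticated", "cart is non-empty"],
    "checkout is enabled",
)
for inputs, (effect, expected) in suite:
    print(inputs, "->", effect, expected)
```

For two causes this yields three test cases rather than the four of a full truth table, matching the goal of generating only the required cases.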

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Acceptance testing, Automatic test case creation, Causality extraction, Natural language processing, Requirements engineering, Natural language processing systems, Software testing, Automatic creations, Case-studies, Language processing, Natural languages, Requirement engineering, Test case, Acceptance tests
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-24047 (URN); 10.1016/j.jss.2022.111549 (DOI); 000926985500008 (ISI); 2-s2.0-85142730522 (Scopus ID)
Funder
Knowledge Foundation, 20180010
Available from: 2022-12-12. Created: 2022-12-12. Last updated: 2023-03-09. Bibliographically approved.
Frattini, J., Fischbach, J., Mendez, D., Unterkalmsteiner, M., Vogelsang, A. & Wnuk, K. (2023). Causality in requirements artifacts: prevalence, detection, and impact. Requirements Engineering, 28(1), 49-74
2023 (English). In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 28, no 1, p. 49-74. Article in journal (Refereed). Published.
Abstract [en]

Causal relations in natural language (NL) requirements convey strong semantic information. Automatically extracting such causal information enables multiple use cases, such as test case generation, but it also requires reliably detecting causal relations in the first place. Currently, this is still a cumbersome task, as causality in NL requirements is barely understood and, thus, barely detectable. In our empirically informed research, we aim at better understanding the notion of causality and supporting the automatic extraction of causal relations in NL requirements. In a first case study, we investigate 14,983 sentences from 53 requirements documents to understand the extent and form in which causality occurs. Second, we present and evaluate a tool-supported approach, called CiRA, for causality detection. We conclude with a second case study where we demonstrate the applicability of our tool and investigate the impact of causality on NL requirements. The first case study shows that causality constitutes around 28% of all NL requirements sentences. We then demonstrate that our detection tool achieves a macro-F1 score of 82% on real-world data and that it outperforms related approaches with an average gain of 11.06% in macro-Recall and 11.43% in macro-Precision. Finally, our second case study corroborates the positive correlations of causality with features of NL requirements. The results strengthen our confidence in the eligibility of causal relations for downstream reuse, while our tool and publicly available data constitute a first step in the ongoing endeavors of utilizing causality in RE and beyond. © 2022, The Author(s).

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2023
Keywords
Causality, Multi-case study, Natural language processing, Requirements engineering, Semantics, Automatic extraction, Case-studies, Causal relations, Multiple use-cases, Natural language requirements, Requirement engineering, Requirements document, Semantics Information, Test case generation, Natural language processing systems
National Category
Computer Sciences
Identifiers
urn:nbn:se:bth-22673 (URN); 10.1007/s00766-022-00371-x (DOI); 000753242500002 (ISI); 2-s2.0-85124567603 (Scopus ID)
Note

open access

Available from: 2022-02-25. Created: 2022-02-25. Last updated: 2023-06-19. Bibliographically approved.
Frattini, J., Montgomery, L., Fucci, D., Fischbach, J., Unterkalmsteiner, M. & Mendez, D. (2023). Let’s Stop Building at the Feet of Giants: Recovering unavailable Requirements Quality Artifacts. In: Ferrari A., Penzenstadler B., Penzenstadler B., Hadar I., Oyedeji S., Abualhaija S., Vogelsang A., Deshpande G., Rachmann A., Gulden J., Wohlgemuth A., Hess A., Fricker S., Guizzardi R., Horkoff J., Perini A., Susi A., Karras O., Dalpiaz F., Moreira A., Amyot D., Spoletini P. (Ed.), CEUR Workshop Proceedings. Paper presented at Joint of REFSQ-2023 Workshops, Doctoral Symposium, Posters and Tools Track and Journal Early Feedback, Barcelona, REFSQ-JP 2023, 17 April 2023 through 20 April 2023. CEUR-WS, 3378
2023 (English). In: CEUR Workshop Proceedings / [ed] Ferrari A., Penzenstadler B., Penzenstadler B., Hadar I., Oyedeji S., Abualhaija S., Vogelsang A., Deshpande G., Rachmann A., Gulden J., Wohlgemuth A., Hess A., Fricker S., Guizzardi R., Horkoff J., Perini A., Susi A., Karras O., Dalpiaz F., Moreira A., Amyot D., Spoletini P., CEUR-WS, 2023, Vol. 3378. Conference paper, Published paper (Refereed).
Abstract [en]

Requirements quality literature abounds with publications presenting artifacts, such as data sets and tools. However, recent systematic studies show that more than 80% of these artifacts have become unavailable or were never made public, limiting reproducibility and reusability. In this work, we report on an attempt to recover those artifacts. To that end, we requested corresponding authors of unavailable artifacts to recover and disclose them according to open science principles. Our results, based on 19 answers from 35 authors (54% response rate), include an assessment of the availability of requirements quality artifacts and a breakdown of authors’ reasons for their continued unavailability. Overall, we improved the availability of seven data sets and seven implementations. © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Place, publisher, year, edition, pages
CEUR-WS, 2023
Series
CEUR Workshop Proceedings, E-ISSN 1613-0073 ; 3378
Keywords
artifacts, availability, data set, open science, requirements quality, Recovery, Requirements engineering, Artifact, Data tools, Reproducibilities, Requirement quality, Response rate, Systematic study, Reusability
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-24622 (URN); 2-s2.0-85159033451 (Scopus ID)
Conference
Joint of REFSQ-2023 Workshops, Doctoral Symposium, Posters and Tools Track and Journal Early Feedback, Barcelona, REFSQ-JP 2023, 17 April 2023 through 20 April 2023
Available from: 2023-05-26. Created: 2023-05-26. Last updated: 2023-05-26. Bibliographically approved.
Badampudi, D., Unterkalmsteiner, M. & Britto, R. (2023). Modern Code Reviews - Survey of Literature and Practice. ACM Transactions on Software Engineering and Methodology, 32(4), Article ID 107.
2023 (English). In: ACM Transactions on Software Engineering and Methodology, ISSN 1049-331X, E-ISSN 1557-7392, Vol. 32, no 4, article id 107. Article, review/survey (Refereed). Published.
Abstract [en]

Background: Modern Code Review (MCR) is a lightweight alternative to traditional code inspections. While secondary studies on MCR exist, it is unknown whether the research community has targeted themes that practitioners consider important. Objectives: The objectives are to provide an overview of MCR research, analyze the practitioners' opinions on the importance of MCR research, investigate the alignment between research and practice, and propose future MCR research avenues. Method: We conducted a systematic mapping study to survey the state of the art up to and including 2021, employed the Q-Methodology to analyze the practitioners' perception of the relevance of MCR research, and analyzed the primary studies' research impact. Results: We analyzed 244 primary studies, resulting in five themes. Based on 1,300 survey data points, we found that the respondents are positive about research investigating the impact of MCR on product quality and MCR process properties. In contrast, they are negative about human-factor- and support-systems-related research. Conclusion: These results indicate a misalignment between the state of the art and the themes deemed important by most survey respondents. Researchers should focus on solutions that can improve the state of MCR practice. We provide an MCR research agenda that can potentially increase the impact of MCR research. © 2023 Copyright held by the owner/author(s).

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
literature survey, Modern code review, practitioner survey, Code inspections, Code review, Practitioner surveys, Research analysis, Research communities, Research impacts, State of the art, Systematic mapping studies, Codes (symbols)
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-25220 (URN); 10.1145/3585004 (DOI); 001020441100026 (ISI); 2-s2.0-85163852501 (Scopus ID)
Funder
Knowledge Foundation, 20180010; Knowledge Foundation, 20190081
Available from: 2023-08-07. Created: 2023-08-07. Last updated: 2023-08-17. Bibliographically approved.
Frattini, J., Montgomery, L., Fischbach, J., Mendez, D., Fucci, D. & Unterkalmsteiner, M. (2023). Requirements quality research: a harmonized theory, evaluation, and roadmap. Requirements Engineering, 28(4), 507-520
2023 (English). In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 28, no 4, p. 507-520. Article in journal (Refereed). Published.
Abstract [en]

High-quality requirements minimize the risk of propagating defects to later stages of the software development life cycle. Achieving a sufficient level of quality is a major goal of requirements engineering. This requires a clear definition and understanding of requirements quality. Though recent publications make an effort at disentangling the complex concept of quality, the requirements quality research community lacks identity and a clear structure to guide advances and put new findings into a holistic perspective. In this research commentary, we contribute (1) a harmonized requirements quality theory organizing its core concepts, (2) an evaluation of the current state of requirements quality research, and (3) a research roadmap to guide advancements in the field. We show that requirements quality research focuses on normative rules and mostly fails to connect requirements quality to its impact on subsequent software development activities, impeding the relevance of the research. Adherence to the proposed requirements quality theory and following the outlined roadmap will be a step toward closing this gap. © 2023, The Author(s).

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2023
Keywords
Requirements quality, Survey, Theory, Life cycle, Software design, High quality, Late stage, Quality requirements, Quality theory, Requirement engineering, Requirement quality, Research communities, Roadmap, Software development life-cycle, Quality control
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-25327 (URN); 10.1007/s00766-023-00405-y (DOI); 001046952900001 (ISI); 2-s2.0-85167790788 (Scopus ID)
Funder
Knowledge Foundation, 20180010
Available from: 2023-08-25. Created: 2023-08-25. Last updated: 2023-12-31. Bibliographically approved.
Silva, L., Unterkalmsteiner, M. & Wnuk, K. (2023). Towards identifying and minimizing customer-facing documentation debt. In: Proceedings - 2023 ACM/IEEE International Conference on Technical Debt, TechDebt 2023. Paper presented at 6th International Conference on Technical Debt, TechDebt 2023, Melbourne, Australia, 14 May 2023 through 15 May 2023 (pp. 72-81). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: Proceedings - 2023 ACM/IEEE International Conference on Technical Debt, TechDebt 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 72-81. Conference paper, Published paper (Refereed).
Abstract [en]

Background: Software documentation often struggles to catch up with the pace of software evolution. The lack of correct, complete, and up-to-date documentation results in an increasing number of documentation defects which could introduce delays in integrating software systems. In our previous study on a bug analysis tool called MultiDimEr, we provided evidence that documentation-related defects contribute to a significant number of bug reports.

Aims: First, we want to identify the defect types that contribute to documentation defects, thereby identifying documentation debt. Second, we aim to find pragmatic solutions that minimize the most common documentation defects, to pay off the documentation debt in the long run.

Method: We investigated documentation defects related to an industrial software system. First, we looked at the types of different documentation and associated bug reports. We categorized the defects according to an existing documentation defect taxonomy.

Results: Based on a sample of 101 defects, we found that a majority fall into the Information Content (What) category (86). Within this category, the defect types Erroneous code examples (23), Missing documentation (35), and Outdated content (19) accounted for most of the documentation defects. We propose adapting two solutions to mitigate these types of documentation defects.

Conclusions: In practice, documentation debt can easily go undetected, since a large share of resources and focus is dedicated to delivering high-quality software. This study provides evidence that documentation debt can contribute to increased maintenance costs due to the number of documentation defects. We suggest two main solutions to tackle documentation debt: implementing (i) Dynamic Documentation Generation (DDG) and/or (ii) Automated Documentation Testing (ADT), both of which are based on defining a single, robust information source for documentation.
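The Automated Documentation Testing idea can be illustrated with a generic sketch (this is not the study's tooling; the function below is invented for the example). Python's doctest module executes usage examples embedded in the documentation itself, so an outdated example fails the test run instead of silently accumulating documentation debt:

```python
# Generic illustration of Automated Documentation Testing: the usage
# example lives inside the docstring and is re-executed by doctest,
# so documentation and behavior cannot drift apart unnoticed.
import doctest


def parse_version(tag):
    """Split a release tag into numeric components.

    >>> parse_version("v1.2.3")
    (1, 2, 3)
    """
    return tuple(int(part) for part in tag.lstrip("v").split("."))


# Re-run every documented example; a stale example raises a failure here.
failure_count, test_count = doctest.testmod()
```

The design point is the single information source the paper argues for: the example in the documentation is the test, so there is no second copy to fall out of date.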

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Documentation Debt, Technical Debt, Automation
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-24654 (URN); 10.1109/TechDebt59074.2023.00015 (DOI); 001051233000009 (ISI); 2-s2.0-85169420574 (Scopus ID)
Conference
6th International Conference on Technical Debt, TechDebt 2023, Melbourne, Australia, 14 May 2023 through 15 May 2023
Available from: 2023-05-30. Created: 2023-05-30. Last updated: 2023-09-15. Bibliographically approved.
Frattini, J., Lloyd, M., Jannik, F., Unterkalmsteiner, M., Mendez, D. & Fucci, D. (2022). A Live Extensible Ontology of Quality Factors for Textual Requirements. In: Knauss E., Mussbacher G., Arora C., Bano M., Schneider (Ed.), Proceedings of the IEEE International Conference on Requirements Engineering. Paper presented at 30th IEEE International Requirements Engineering Conference, RE 2022, Mon 15 - Sat 20 August 2022, Melbourne, Australia (pp. 274-280). IEEE
2022 (English). In: Proceedings of the IEEE International Conference on Requirements Engineering / [ed] Knauss E., Mussbacher G., Arora C., Bano M., Schneider, IEEE, 2022, p. 274-280. Conference paper, Published paper (Refereed).
Abstract [en]

Quality factors like passive voice or sentence length are commonly used in research and practice to evaluate the quality of natural language requirements, since they indicate defects in requirements artifacts that potentially propagate to later stages in the development life cycle. However, as a research community, we still lack a holistic perspective on quality factors. This inhibits not only a comprehensive understanding of the existing body of knowledge but also the effective use and evolution of these factors. To this end, we propose an ontology of quality factors for textual requirements, which includes (1) a structure framing quality factors and related elements and (2) a central repository and web interface making these factors publicly accessible and usable. We contribute a first version of both by applying a rigorous ontology development method to 105 eligible primary studies and constructing an initial repository and interface. We illustrate the usability of the ontology and invite fellow researchers to a joint community effort to complete and maintain this knowledge repository. We envision our ontology reflecting the community's harmonized perception of requirements quality factors, guiding the reporting of new quality factors, and providing central access to the current body of knowledge.

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
requirements engineering, requirements quality, quality factor, ontology
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-23733 (URN); 10.1109/RE54965.2022.00041 (DOI); 000931050900034 (ISI); 2-s2.0-85140969591 (Scopus ID); 9781665470001 (ISBN)
Conference
30th IEEE International Requirements Engineering Conference, RE 2022, Mon 15 - Sat 20 August 2022, Melbourne, Australia
Funder
Knowledge Foundation, 20180010
Note

open access

Available from: 2022-10-07. Created: 2022-10-07. Last updated: 2023-03-16. Bibliographically approved.
Fagerholm, F., Felderer, M., Fucci, D., Unterkalmsteiner, M., Marculescu, B., Martini, M., . . . Khattak, J. (2022). Cognition in Software Engineering: A Taxonomy and Survey of a Half-Century of Research. ACM Computing Surveys, 54(11)
2022 (English). In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 54, no 11. Article in journal (Refereed). Published.
Abstract [en]

Cognition plays a fundamental role in most software engineering activities. This article provides a taxonomy of cognitive concepts and a survey of the literature since the beginning of the Software Engineering discipline. The taxonomy comprises the top-level concepts of perception, attention, memory, cognitive load, reasoning, cognitive biases, knowledge, social cognition, cognitive control, and errors, together with procedures to assess them both qualitatively and quantitatively. The taxonomy provides a useful tool to filter existing studies, classify new studies, and support researchers in getting familiar with a (sub)area. In the literature survey, we systematically collected and analysed 311 scientific papers spanning five decades and classified them using the cognitive concepts from the taxonomy. Our analysis shows that the most developed areas of research correspond to the four life-cycle stages: software requirements, design, construction, and maintenance. Most research is quantitative and focuses on knowledge, cognitive load, memory, and reasoning. Overall, the state of the art appears fragmented when viewed from the perspective of cognition. There is a lack of use of cognitive concepts that would represent a coherent picture of the cognitive processes active in specific tasks. Accordingly, we discuss the research gap in each cognitive concept and provide recommendations for future research.

Place, publisher, year, edition, pages
ACM Digital Library, 2022
Keywords
Cognition, cognitive concepts, psychology of programming, human factors, measurement, taxonomy
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-23177 (URN); 10.1145/3508359 (DOI); 000886929000001 (ISI)
Note

open access

Available from: 2022-06-16. Created: 2022-06-16. Last updated: 2023-06-30. Bibliographically approved.
Projects
D-CAT – Digital Collaboration and Automized Tracing Of Information; Blekinge Institute of Technology; Publications
Unterkalmsteiner, M. & Abdeen, W. (2023). A compendium and evaluation of taxonomy quality attributes. Expert systems (Print), 40(1), Article ID e13098.
Abdeen, W., Chen, X. & Unterkalmsteiner, M. (2023). An approach for performance requirements verification and test environments generation. Requirements Engineering, 28(1), 117-144.
Abdeen, W. (2023). Taxonomic Trace Links Recommender: Context Aware Hierarchical Classification. In: Ferrari A., Penzenstadler B., Penzenstadler B., Hadar I., Oyedeji S., Abualhaija S., Vogelsang A., Deshpande G., Rachmann A., Gulden J., Wohlgemuth A., Hess A., Fricker S., Guizzardi R., Horkoff J., Perini A., Susi A., Karras O., Dalpiaz F., Moreira A., Amyot D., Spoletini P. (Ed.), CEUR Workshop Proceedings. Paper presented at Joint of REFSQ-2023 Workshops, Doctoral Symposium, Posters and Tools Track and Journal Early Feedback, REFSQ-JP 2023, Barcelona, 17 April 2023 through 20 April 2023. CEUR-WS, 3378.
Organisations
Identifiers
ORCID iD: orcid.org/0000-0003-4118-0952
