  • 1. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    A systematic review of search-based testing for non-functional system properties. 2009. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 51, no 6, p. 957-976. Article in journal (Refereed)
    Abstract [en]

    Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function, and a set of solutions in the search space is evaluated with respect to the fitness function using a metaheuristic search technique. The application of metaheuristic search techniques for testing is promising because exhaustive testing is infeasible considering the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional) and grey-box (combination of structural and functional) testing. In addition, metaheuristic search techniques have also been applied to test non-functional properties. The overall objective of undertaking this systematic review is to examine existing work into non-functional search-based software testing (NFSBST). We are interested in the types of non-functional testing targeted using metaheuristic search techniques, the different fitness functions used in different types of search-based non-functional testing, and the challenges in the application of these techniques. The systematic review is based on a comprehensive set of 35 articles, obtained after a multi-stage selection process, published in the time span 1996-2007. The results of the review show that metaheuristic search techniques have been applied for non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are found to be applicable for non-functional testing, including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants including linear genetic programming) and swarm intelligence methods. The review reports on different fitness functions used to guide the search for each of the categories of execution time, safety, usability, quality of service and security, along with a discussion of possible challenges in the application of metaheuristic search techniques.
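    As a purely illustrative sketch (not taken from the reviewed studies), the following Python hill climber shows the core idea of turning a non-functional test adequacy criterion, here measured execution time, into a fitness function that guides a metaheuristic search; run_sut is a hypothetical stand-in for executing the real system under test.

      # Hypothetical, minimal hill climb: measured execution time serves as fitness.
      import random
      import time

      def run_sut(test_input):
          # Placeholder workload whose running time grows with the input values,
          # so the search has a gradient to follow.
          total = 0
          for x in test_input:
              for _ in range(x):
                  total += 1
          return total

      def fitness(test_input):
          # The criterion "expose long execution times" as a fitness value:
          # the slower the run, the fitter the test input.
          start = time.perf_counter()
          run_sut(test_input)
          return time.perf_counter() - start

      def mutate(test_input):
          # Neighbourhood move: perturb one randomly chosen element.
          neighbour = list(test_input)
          i = random.randrange(len(neighbour))
          neighbour[i] = max(0, neighbour[i] + random.randint(-50, 50))
          return neighbour

      def hill_climb(length=5, budget=200):
          current = [random.randint(0, 500) for _ in range(length)]
          current_fit = fitness(current)
          for _ in range(budget):
              candidate = mutate(current)
              candidate_fit = fitness(candidate)
              if candidate_fit > current_fit:  # keep the slower-running input
                  current, current_fit = candidate, candidate_fit
          return current, current_fit

      best_input, best_time = hill_climb()
      print(f"Fittest input {best_input} ran for {best_time:.6f}s")

    Real search-based approaches replace this toy mutation and acceptance rule with, for example, simulated annealing or genetic operators, but the transformation of the criterion into a fitness function is the same.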

  • 2.
    Alahyari, Hiva
    et al.
    Chalmers, SWE.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An exploratory study of waste in software development organizations using agile or lean approaches: A multiple case study at 14 organizations. 2019. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 107, p. 78-94. Article in journal (Refereed)
    Abstract [en]

    Context: The principal focus of lean is the identification and elimination of waste from the process with respect to maximizing customer value. Similarly, the purpose of agile is to maximize customer value and minimize unnecessary work and time delays. In both cases the concept of waste is important. Through an empirical study, we explore how waste is approached in agile software development organizations. Objective: This paper explores the concept of waste in agile/lean software development organizations and how it is defined, used, prioritized, reduced, or eliminated in practice. Method: The data were collected using semi-structured open interviews. 23 practitioners from 14 embedded software development organizations were interviewed, representing two core roles in each organization. Results: Various wastes, categorized into 10 different categories, were identified by the respondents. Not all of the mentioned wastes were necessarily waste per se; some could be symptoms caused by wastes. Of the seven wastes of lean, Task-switching was ranked as the most important and Extra-features as the least important waste in the respondents’ opinion. However, most companies do not have their own, or use an established, definition of waste; more importantly, very few actively identify or try to eliminate waste in their organizations beyond local initiatives at the project level. Conclusion: In order to identify, recognize and eliminate waste, a common understanding and a joint, holistic view of the concept is needed. It is also important to optimize the whole organization and the whole product, as waste on one level can be important on another; thus sub-optimization should be avoided. Furthermore, to achieve sustainable and effective waste handling, both the short-term and the long-term perspectives need to be considered. © 2018 Elsevier B.V.

  • 3.
    Alegroth, Emil
    et al.
    Chalmers University of Technology.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kolstrom, Pirjo
    Saab Sensis ATM Sweden.
    Maintenance of automated test suites in industry: An empirical study on Visual GUI Testing. 2016. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 73, p. 66-80. Article in journal (Refereed)
    Abstract [en]

    Context: Verification and validation (V&V) activities make up 20-50% of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs but available research only provides limited empirical data from industrial practice about the maintenance costs of automated tests and what factors affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported where interviews about, and empirical work with, Visual GUI Testing is performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors are observed that affect maintenance, e.g. tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance but also that frequent maintenance is less costly than infrequent, big bang maintenance. In addition a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower overall software development costs of a project while also having positive effects on software quality. However, maintenance costs can still be considerable and the less time a company currently spends on manual testing, the more time is required before positive, economic, ROI is reached after automation. (C) 2016 Elsevier B.V. All rights reserved.

  • 4.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A critical appraisal tool for systematic literature reviews in software engineering. 2019. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 112, p. 48-50. Article, review/survey (Refereed)
    Abstract [en]

    Context: Methodological research on systematic literature reviews (SLRs) in Software Engineering (SE) has so far focused on developing and evaluating guidelines for conducting systematic reviews. However, the support for quality assessment of completed SLRs has not received the same level of attention. Objective: To raise awareness of the need for a critical appraisal tool (CAT) for assessing the quality of SLRs in SE. To initiate a community-based effort towards the development of such a tool. Method: We reviewed the literature on the quality assessment of SLRs to identify the frequently used CATs in SE and other fields. Results: We identified that the CATs currently used in SE were borrowed from medicine, but have not kept pace with substantial advancements in the field of medicine. Conclusion: In this paper, we have argued the need for a CAT for quality appraisal of SLRs in SE. We have also identified a tool that has the potential for application in SE. Furthermore, we have presented our approach for adapting this state-of-the-art CAT for assessing SLRs in SE. © 2019 The Authors

  • 5.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Reliability of search in systematic reviews: Towards a quality assessment framework for the automated-search strategy. 2018. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 99, p. 133-147. Article in journal (Refereed)
    Abstract [en]

    Context: The trust in systematic literature reviews (SLRs) to provide credible recommendations is critical for establishing evidence-based software engineering (EBSE) practice. The reliability of SLR as a method is not a given and largely depends on the rigor of the attempt to identify, appraise and aggregate evidence. Previous research, by comparing SLRs on the same topic, has identified search as one of the reasons for discrepancies in the included primary studies. This affects the reliability of an SLR, as the papers identified and included in it are likely to influence its conclusions. Objective: We aim to propose a comprehensive evaluation checklist to assess the reliability of an automated-search strategy used in an SLR. Method: Using a literature review, we identified guidelines for designing and reporting automated-search as a primary search strategy. Using the aggregated design, reporting and evaluation guidelines, we formulated a comprehensive evaluation checklist. The value of this checklist was demonstrated by assessing the reliability of search in 27 recent SLRs. Results: Using the proposed evaluation checklist, several additional issues (not captured by the current evaluation checklist) related to the reliability of search in recent SLRs were identified. These issues severely limit the coverage of literature by the search and also the possibility to replicate it. Conclusion: Instead of solely relying on expensive replications to assess the reliability of SLRs, this work provides means to objectively assess the likely reliability of a search-strategy used in an SLR. It highlights the often-assumed aspect of repeatability of search when using automated-search. Furthermore, by explicitly considering repeatability and consistency as sub-characteristics of a reliable search, it provides a more comprehensive evaluation checklist than the ones currently used in EBSE. © 2018 Elsevier B.V.

  • 6.
    Alkharabsheh, Khalid
    et al.
    Al-Balqa Applied University, JOR.
    Alawadi, Sadi
    Uppsala University, SWE.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Crespo, Yania
    Universidad de Valladolid, ESP.
    Fernández-Delgado, Manuel
    Universidad de Santiago de Compostela, ESP.
    Taboada, José A.
    Universidad de Santiago de Compostela, ESP.
    A comparison of machine learning algorithms on design smell detection using balanced and imbalanced dataset: A study of God class. 2022. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 143, article id 106736. Article in journal (Refereed)
    Abstract [en]

    Context: Design smell detection has proven to be a significant activity that aims not only to enhance software quality but also to increase its life cycle. Objective: This work investigates whether machine learning approaches can effectively be leveraged for software design smell detection. Additionally, this paper provides a comparative study, focused on using balanced datasets, which checks whether avoiding dataset balancing has any influence on the accuracy and behavior during design smell detection. Method: A set of experiments has been conducted using 28 Machine Learning classifiers aimed at detecting God classes. The experiments were conducted using a dataset formed from 12,587 classes of 24 software systems, in which 1,958 classes were manually validated. Results: Ultimately, most classifiers obtained high performance, with CatBoost showing a higher performance. Also, it is evident from the experiments conducted that data balancing does not have any significant influence on the accuracy of detection. This reinforces the application of machine learning in real scenarios where the data is usually imbalanced by the inherent nature of design smells. Conclusions: Machine learning approaches can effectively be used as a leverage for God class detection. While in this paper we have employed the SMOTE technique for data balancing, it is worth noting that there exist other methods of data balancing and other design smells; the application of those other methods may improve the results, whereas in our experiments SMOTE did not improve God class detection. The results are not fully generalizable because only one design smell is studied, with projects developed in a single programming language, and only one balancing technique is used to compare with the imbalanced case. But these results are promising for application in real design smell detection scenarios as mentioned above, and other measures, such as Kappa, ROC, and MCC, have been used in the assessment of classifier behavior. © 2021 The Authors
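    The following is a minimal, hypothetical sketch with synthetic data (not the study's God-class corpus) of the kind of comparison described above: the same classifier trained on raw imbalanced data and on SMOTE-balanced data (via scikit-learn and the imbalanced-learn SMOTE implementation), scored with Kappa, ROC AUC and MCC.

      # Synthetic stand-in for code metrics labelled God class (1) / not God class (0).
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import cohen_kappa_score, roc_auc_score, matthews_corrcoef
      from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

      X, y = make_classification(n_samples=5000, n_features=20,
                                 weights=[0.95, 0.05], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      def evaluate(X_train, y_train, label):
          # Train on the given data, always evaluate on the same held-out test set.
          clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
          pred = clf.predict(X_te)
          proba = clf.predict_proba(X_te)[:, 1]
          print(f"{label}: kappa={cohen_kappa_score(y_te, pred):.3f} "
                f"roc_auc={roc_auc_score(y_te, proba):.3f} "
                f"mcc={matthews_corrcoef(y_te, pred):.3f}")

      evaluate(X_tr, y_tr, "imbalanced")
      X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
      evaluate(X_bal, y_bal, "SMOTE-balanced")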

  • 7.
    Auer, Florian
    et al.
    University of Innsbruck, AUT.
    Lenarduzzi, Valentina
    LUT University, FIN.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Taibi, Davide
    Tampere University, FIN.
    From monolithic systems to Microservices: An assessment framework. 2021. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 137, article id 106600. Article in journal (Refereed)
    Abstract [en]

    Context: Re-architecting monolithic systems with a Microservices-based architecture is a common trend. Various companies are migrating to Microservices for different reasons. However, an important decision like re-architecting an entire system must be based on real facts and not only on gut feelings. Objective: The goal of this work is to propose an evidence-based decision support framework for companies that need to migrate to Microservices, based on the analysis of a set of characteristics and metrics they should collect before re-architecting their monolithic system. Method: We conducted a survey in the form of interviews with professionals to derive the assessment framework based on Grounded Theory. Results: We identified a set of information and metrics that companies can use to decide whether to migrate to Microservices or not. The proposed assessment framework, based on the aforementioned metrics, could be useful for companies if they need to migrate to Microservices and do not want to run the risk of failing to consider some important information. © 2021 The Author(s)

  • 8.
    Auer, Florian
    et al.
    University of Innsbruck, AUT.
    Ros, Rasmus
    Lund University, SWE.
    Kaltenbrunner, Lukas
    University of Innsbruck, AUT.
    Runeson, Per
    Lund University, SWE.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Controlled experimentation in continuous experimentation: Knowledge and challenges. 2021. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 134, article id 106551. Article in journal (Refereed)
    Abstract [en]

    Context: Continuous experimentation and A/B testing is an established industry practice that has been researched for more than 10 years. Our aim is to synthesize the conducted research. Objective: We wanted to find the core constituents of a framework for continuous experimentation and the solutions that are applied within the field. Finally, we were interested in the challenges and benefits reported of continuous experimentation. Methods: We applied forward snowballing on a known set of papers and identified a total of 128 relevant papers. Based on this set of papers we performed two qualitative narrative syntheses and a thematic synthesis to answer the research questions. Results: The framework constituents for continuous experimentation include experimentation processes as well as supportive technical and organizational infrastructure. The solutions found in the literature were synthesized into nine themes, e.g. experiment design, automated experiments, or metric specification. Concerning the challenges of continuous experimentation, the analysis identified cultural, organizational, business, technical, statistical, ethical, and domain-specific challenges. Further, the study concludes that the benefits of experimentation are mostly implicit in the studies. Conclusion: The research on continuous experimentation has yielded a large body of knowledge on experimentation. The synthesis of published research presented herein includes recommended infrastructure and experimentation process models, guidelines to mitigate the identified challenges, and the problems that the various published solutions solve. © 2021 The Authors
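    As an illustrative sketch with made-up numbers (not drawn from the synthesized studies), the statistical core of a single controlled online experiment can be as small as a chi-squared test on a 2x2 conversion table:

      # Made-up conversion counts, for illustration only.
      from scipy.stats import chi2_contingency

      control = [120, 9880]    # [conversions, non-conversions] for variant A
      treatment = [150, 9850]  # [conversions, non-conversions] for variant B

      chi2, p_value, dof, expected = chi2_contingency([control, treatment])
      print(f"chi2={chi2:.2f}, p={p_value:.4f}")
      if p_value < 0.05:
          print("Difference unlikely to be chance alone; consider rolling out variant B.")
      else:
          print("No significant difference detected at the 5% level.")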

  • 9. Aurum, Aybüke
    et al.
    Wohlin, Claes
    The Fundamental Nature of Requirements Engineering Activities as a Decision-Making Process. 2003. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 45, no 14, p. 945-954. Article in journal (Refereed)
    Abstract [en]

    The requirements engineering (RE) process is a decision-rich complex problem solving activity. This paper examines the elements of organization-oriented macro decisions as well as process-oriented micro decisions in the RE process and illustrates how to integrate classical decision-making models with RE process models. This integration helps in formulating a common vocabulary and model to improve the manageability of the RE process, and contributes towards the learning process by validating and verifying the consistency of decision-making in RE activities.

  • 10.
    Barney, Sebastian
    et al.
    Blekinge Institute of Technology, School of Computing.
    Mohankumar, Varun
    Chatzipetrou, Panagiota
    Aurum, Aybüke
    Wohlin, Claes
    Blekinge Institute of Technology, School of Computing.
    Angelis, Lefteris
    Software quality across borders: Three case studies on company internal alignment. 2014. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 56, no 1, p. 20-38. Article in journal (Refereed)
    Abstract [en]

    Software quality issues are commonly reported when offshoring software development. Value-based software engineering addresses this by ensuring key stakeholders have a common understanding of quality. Objective: This work seeks to understand the levels of alignment between key stakeholder groups within a company on the priority given to aspects of software quality developed as part of an offshoring relationship. Furthermore, the study aims to identify factors impacting the levels of alignment identified. Method: Three case studies were conducted, with representatives of key stakeholder groups ranking aspects of software quality in a hierarchical cumulative exercise. The results are analysed using Spearman rank correlation coefficients and inertia. The results were discussed with the groups to gain a deeper understanding of the issues impacting alignment. Results: Various levels of alignment were found between the various groups. The reasons for misalignment were found to include cultural factors, control of quality in the development process, short-term versus long-term orientations, understanding of cost-benefits of quality improvements, communication and coordination. Conclusions: The factors that negatively affect alignment can vary greatly between different cases. The work emphasises the need for greater support to align company internal success-critical stakeholder groups in their understanding of quality when offshoring software development.
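    A minimal, hypothetical illustration of the analysis technique named above: Spearman rank correlation between two stakeholder groups' priority rankings of quality aspects (the aspects and ranks below are invented, not the case-study data).

      # Hypothetical rankings, for illustration only.
      from scipy.stats import spearmanr

      quality_aspects = ["reliability", "usability", "performance",
                         "maintainability", "security", "portability"]
      onsite_ranks = [1, 3, 2, 4, 5, 6]    # 1 = most important
      offshore_ranks = [2, 1, 4, 3, 6, 5]

      for aspect, a, b in zip(quality_aspects, onsite_ranks, offshore_ranks):
          print(f"{aspect:15s} onsite={a} offshore={b}")

      rho, p_value = spearmanr(onsite_ranks, offshore_ranks)
      print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
      # rho close to 1 indicates well-aligned priorities; values near zero or below
      # indicate that the groups rank the quality aspects differently.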

  • 11.
    Barney, Sebastian
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Svahnberg, Mikael
    Blekinge Institute of Technology, School of Computing.
    Aurum, Aybüke
    Barney, Hamish
    Software quality trade-offs: A systematic map. 2012. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 54, no 7, p. 651-662. Article, review/survey (Refereed)
    Abstract [en]

    Background: Software quality is complex, with over-investment, under-investment and the interplay between aspects often being overlooked as many researchers aim to advance individual aspects of software quality. Aim: This paper aims to provide a consolidated overview of the literature that addresses trade-offs between aspects of software product quality. Method: A systematic literature map is employed to provide an overview of software quality trade-off literature in general. Specific analysis is also done of empirical literature addressing the topic. Results: The results show a wide range of solution proposals being considered. However, there is insufficient empirical evidence to adequately evaluate and compare these proposals. Further, a very large vocabulary has been found to describe software quality. Conclusion: Greater empirical research is required to sufficiently evaluate and compare the wide range of solution proposals. This will allow researchers to focus on the proposals showing greater signs of success and better support industrial practitioners.

  • 12.
    Bauer, Andreas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Coppola, Ricardo
    Politecnico di Torino, Italy.
    Alégroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Code review guidelines for GUI-based testing artifacts. 2023. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 163, article id 107299. Article, review/survey (Refereed)
    Abstract [en]

    Context: Review of software artifacts, such as source or test code, is common practice in industry. However, although review guidelines are available for source and low-level test code, such guidelines are missing for GUI-based testing artifacts. Objective: The goal of this work is to define a set of guidelines from the literature on production and test code that can be mapped to GUI-based testing artifacts. Method: A systematic literature review is conducted, using white and gray literature, to identify guidelines for source and test code. These synthesized guidelines are then mapped, through examples, to create actionable and applicable guidelines for GUI-based testing artifacts. Results: The results of the study are 33 guidelines, summarized in nine guideline categories, that are successfully mapped as applicable to GUI-based testing artifacts. Of the collected literature, only 10 sources contained test-specific code review guidelines. These guideline categories are: perform automated checks, use checklists, provide context information, utilize metrics, ensure readability, visualize changes, reduce complexity, check conformity with the requirements, and follow design principles and patterns. Conclusion: This pivotal set of guidelines provides an industrial contribution by filling the gap of general guidelines for review of GUI-based testing artifacts. Additionally, this work highlights, from an academic perspective, the need for future research in this area to also develop guidelines for other specific aspects of GUI-based testing practice, and to take into account other facets of the review process not covered by this work, such as reviewer selection. © 2023 The Author(s)

  • 13.
    Bin Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Inst Technol, Karlskrona, Sweden..
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Inst Technol, Karlskrona, Sweden..
    Nicolau de Franca, Breno Bernard
    Univ Fed Rio de Janeiro, ESE Grp, PESC COPPE, BR-68511 Rio De Janeiro, Brazil..
    Evaluation of simulation-assisted value stream mapping for software product development: Two industrial cases. 2015. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 68, p. 45-61. Article in journal (Refereed)
    Abstract [en]

    Context: Value stream mapping (VSM) as a tool for lean development has led to significant improvements in different industries. In a few studies, it has been successfully applied in a software engineering context. However, some shortcomings have been observed, in particular a failure to capture the dynamic nature of the software process when evaluating improvements, i.e. such improvements and target values are based on idealistic situations. Objective: To overcome the shortcomings of VSM by combining it with software process simulation modeling, and to provide reflections on the process of conducting VSM with simulation. Method: Using case study research, VSM was used for two products at Ericsson AB, Sweden. Ten workshops were conducted in this regard. Simulation in this study was used as a tool to support discussions instead of as a prediction tool. The results have been evaluated from the perspective of the participating practitioners, an external observer, and reflections of the researchers conducting the simulation that were elicited by the external observer. Results: Significant constraints hindering the product development from reaching the stated improvement goals for shorter lead time were identified. The use of simulation was particularly helpful in having more insightful discussions and in challenging assumptions about the likely impact of improvements. However, simulation results alone were found insufficient to emphasize the importance of reducing waiting times and variations in the process. Conclusion: The framework to assist VSM with simulation presented in this study was successfully applied in two cases. The involvement of various stakeholders, consensus building steps, emphasis on flow (through waiting time and variance analysis) and the use of simulation proposed in the framework led to realistic improvements with a high likelihood of implementation. (C) 2015 Elsevier B.V. All rights reserved.

  • 14. Bjarnason, Elizabeth
    et al.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Borg, Markus
    Engström, Emelie
    A Multi-Case Study of Agile Requirements Engineering and the Use of Test Cases as Requirements. 2016. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 77, p. 61-79. Article in journal (Refereed)
    Abstract [en]

    [Context] It is an enigma that agile projects can succeed ‘without requirements’ when weak requirements engineering is a known cause for project failures. While agile development projects often manage well without extensive requirements, test cases are commonly viewed as requirements and detailed requirements are documented as test cases. [Objective] We have investigated this agile practice of using test cases as requirements to understand how test cases can support the main requirements activities, and how this practice varies. [Method] We performed an iterative case study at three companies and collected data through 14 interviews and 2 focus groups. [Results] The use of test cases as requirements poses both benefits and challenges when eliciting, validating, verifying, and managing requirements, and when used as a documented agreement. We have identified five variants of the test-cases-as-requirements practice, namely de facto, behaviour-driven, story-test driven, stand-alone strict and stand-alone manual, for which the application of the practice varies concerning the time frame of requirements documentation, the requirements format, the extent to which the test cases are a machine executable specification and the use of tools which provide specific support for the practice of using test cases as requirements. [Conclusions] The findings provide empirical insight into how agile development projects manage and communicate requirements. The identified variants of the practice of using test cases as requirements can be used to perform in-depth investigations into agile requirements engineering. Practitioners can use the provided recommendations as a guide in designing and improving their agile requirements practices based on project characteristics such as number of stakeholders and rate of change.
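    A small, hypothetical example (requirement, class and test names invented, not from the studied companies) of what the behaviour-driven variant of test cases as requirements can look like, i.e. a test that doubles as a machine executable specification of the requirement "orders above 100 get a 10% discount":

      # Invented requirement and API, for illustration only.
      from dataclasses import dataclass

      @dataclass
      class Item:
          price: int  # whole currency units

      @dataclass
      class Cart:
          items: list

          def total_due(self) -> int:
              total = sum(item.price for item in self.items)
              if total > 100:           # the requirement under test
                  total -= total // 10  # 10% discount
              return total

      def test_discount_applied_for_orders_over_100():
          # Given a cart whose total exceeds 100
          cart = Cart(items=[Item(60), Item(50)])
          # When the amount due is computed
          # Then a 10% discount is applied, exactly as the requirement states
          assert cart.total_due() == 99

      test_discount_applied_for_orders_over_100()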

  • 15.
    Borg, Markus
    et al.
    RISE Research Institutes of Sweden AB, SWE.
    Chatzipetrou, Panagiota
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Alégroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Papatheocharous, Efi
    RISE Research Institutes of Sweden AB, SWE.
    Shah, Syed Muhammad Ali
    iZettle, SWE.
    Axelsson, Jakob
    RISE Research Institutes of Sweden AB, SWE.
    Selecting component sourcing options: A survey of software engineering's broader make-or-buy decisions. 2019. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 112, p. 18-34. Article in journal (Refereed)
    Abstract [en]

    Context: Component-based software engineering (CBSE) is a common approach to develop and evolve contemporary software systems. When evolving a system based on components, make-or-buy decisions are frequent, i.e., whether to develop components internally or to acquire them from external sources. In CBSE, several different sourcing options are available: (1) developing software in-house, (2) outsourcing development, (3) buying commercial-off-the-shelf software, and (4) integrating open source software components. Objective: Unfortunately, there is little available research on how organizations select component sourcing options (CSO) in industry practice. In this work, we seek to contribute empirical evidence to CSO selection. Method: We conduct a cross-domain survey on CSO selection in industry, implemented as an online questionnaire. Results: Based on 188 responses, we find that most organizations consider multiple CSOs during software evolution, and that the CSO decisions in industry are dominated by expert judgment. When choosing between candidate components, functional suitability acts as an initial filter, then reliability is the most important quality. Conclusion: We stress that future solution-oriented work on decision support has to account for the dominance of expert judgment in industry. Moreover, we identify considerable variation in CSO decision processes in industry. Finally, we encourage software development organizations to reflect on their decision processes when choosing whether to make or buy components, and we recommend using our survey for a first benchmarking. © 2019

  • 16.
    Börstler, Jürgen
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. University of Applied Sciences, Germany.
    Double-counting in software engineering tertiary studies — An overlooked threat to validity. 2023. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 158, article id 107174. Article, review/survey (Refereed)
    Abstract [en]

    Context: Double-counting in a literature review occurs when the same data, population, or evidence is erroneously counted multiple times during synthesis. Detecting and mitigating the threat of double-counting is particularly challenging in tertiary studies. Although this topic has received much attention in the health sciences, it seems to have been overlooked in software engineering. Objective: We describe issues with double-counting in tertiary studies, investigate the prevalence of the issue in software engineering, and propose ways to identify and address the issue. Method: We analyze 47 tertiary studies in software engineering to investigate in which ways they address double-counting and whether double-counting might be a threat to validity in them. Results: In 19 of the 47 tertiary studies, double-counting might bias their results. Of those 19 tertiary studies, only 5 consider double-counting a threat to their validity, and 7 suggest strategies to address the issue. Overall, only 9 of the 47 tertiary studies, acknowledge double-counting as a potential general threat to validity for tertiary studies. Conclusions: Double-counting is an overlooked issue in tertiary studies in software engineering, and existing design and evaluation guidelines do not address it sufficiently. Therefore, we propose recommendations that may help to identify and mitigate double-counting in tertiary studies. © 2023 The Author(s)

  • 17.
    Börstler, Jürgen
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Flensburg University of Applied Sciences.
    Engström, Emelie
    Lund University.
    Acceptance behavior theories and models in software engineering — A mapping study. 2024. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 172, article id 107469. Article in journal (Refereed)
    Abstract [en]

    Context: The adoption or acceptance of new technologies or ways of working in software development activities is a recurrent topic in the software engineering literature. The topic has, therefore, been empirically investigated extensively. It is, however, unclear which theoretical frames of reference are used in this research to explain acceptance behaviors. Objective: In this study, we explore how major theories and models of acceptance behavior have been used in the software engineering literature to empirically investigate acceptance behavior. Method: We conduct a systematic mapping study of empirical studies using acceptance behavior theories in software engineering. Results: We identified 47 primary studies covering 56 theory uses. The theories were categorized into six groups. Technology acceptance models (TAM and its extensions) were used in 29 of the 47 primary studies, innovation theories in 10, and the theories of planned behavior/reasoned action (TPB/TRA) in six. All other theories were used in at most two of the primary studies. The usage and operationalization of the theories were, in many cases, inconsistent with the underlying theories. Furthermore, we identified 77 constructs used by these studies of which many lack clear definitions. Conclusions: Our results show that software engineering researchers are aware of some of the leading theories and models of acceptance behavior, which indicates an attempt to have more theoretical foundations. However, we identified issues related to theory usage that make it difficult to aggregate and synthesize results across studies. We propose mitigation actions that encourage the consistent use of theories and emphasize the measurement of key constructs.

  • 18.
    Chen, Xingru
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Badampudi, Deepika
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Understanding and evaluating software reuse costs and benefits from industrial cases—A systematic literature review. 2024. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 171, article id 107451. Article, review/survey (Refereed)
    Abstract [en]

    Context: Software reuse costs and benefits have been investigated in several primary studies, which have been aggregated in multiple secondary studies as well. However, existing secondary studies on software reuse have not critically appraised the evidence in primary studies. Moreover, there has been relatively less focus on how software reuse costs and benefits were measured in the primary studies, and the aggregated evidence focuses more on software reuse benefits than reuse costs. Objective: This study aims to cover the gaps mentioned in the context above by synthesizing and critically appraising the evidence reported on software reuse costs and benefits from industrial cases. Method: We used a systematic literature review (SLR) to conduct this study. The results of this SLR are based on a final set of 30 primary studies. Results: We identified nine software reuse benefits and six software reuse costs, in which better quality and improved productivity were investigated the most. The primary studies mostly used defect-based and development time-based metrics to measure reuse benefits and costs. Regarding the reuse practices, the results show that software product lines, verbatim reuse, and systematic reuse were the top investigated ones, contributing to more reuse benefits. The quality assessment of the primary studies showed that most of them are either of low (20%) or moderate (67%) quality. Conclusion: Based on the number and quality of the studies, we conclude that the strength of evidence for better quality and improved productivity as reuse benefits is high. There is a need to conduct more high quality studies to investigate, not only other reuse costs and benefits, but also how relatively new reuse-related practices, such as InnerSource and microservices architecture, impact software reuse. © 2024 The Author(s)

  • 19.
    Coppola, Riccardo
    et al.
    Politecnico di Torino, ITA.
    Alégroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A taxonomy of metrics for GUI-based testing research: A systematic literature review. 2022. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 152, article id 107062. Article, review/survey (Refereed)
    Abstract [en]

    Context: GUI-based testing is a sub-field of software testing research that has emerged in the last three decades. GUI-based testing techniques focus on verifying the functional conformance of the system under test (SUT) through its graphical user interface. However, despite the research domain's growth, studies in the field have low reproducibility and comparability. One observed cause of these phenomena is a lack of research rigor and of commonly used metrics, including coverage metrics. Objective: We aim to identify the most commonly used metrics in the field and formulate a taxonomy of coverage metrics for GUI-based testing research. Method: We adopt an evidence-based approach to build the taxonomy through a systematic literature review of studies in the GUI-based testing domain. Identified papers are then analyzed with Open and Axial Coding techniques to identify hierarchical and mutually exclusive categories of metrics with common characteristics, usages, and applications. Results: Through the analysis of 169 papers and 315 metric definitions, we obtained a taxonomy with 55 codes (common names for metrics), 17 metric categories, and 4 higher-level categories: Functional Level, GUI Level, Model Level and Code Level. We measure a higher number of mentions of Model- and Code-level metrics than of Functional- and GUI-level metrics. Conclusions: We propose a taxonomy for use in future GUI-based testing research to improve the general quality of studies in the domain. In addition, the taxonomy is perceived to help enable more replication studies as well as macro-analysis of the current body of research. © 2022 Elsevier B.V.

  • 20.
    Cury, Otávio
    et al.
    Federal University of Piauí, Brazil.
    Avelino, Guilherme
    Federal University of Piauí, Brazil.
    Neto, Pedro Santos
    Federal University of Piauí, Brazil.
    Valente, Marco Túlio
    Federal University of Minas Gerais, Brazil.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Source code expert identification: Models and application. 2024. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 170, article id 107445. Article, review/survey (Refereed)
    Abstract [en]

    Context: Identifying source code expertise is useful in several situations. Activities like bug fixing and helping newcomers are best performed by knowledgeable developers. Some studies have proposed repository-mining techniques to identify source code experts. However, there is a gap in understanding which variables are most related to code knowledge and how they can be used for identifying expertise. Objective: This study explores models of expertise identification and how these models can be used to improve a Truck Factor algorithm. Methods: First, we built an oracle with the knowledge of developers from software projects. Then, we use this oracle to analyze the correlation between measures from the development history and source code knowledge. We investigate the use of linear and machine-learning models to identify file experts. Finally, we use the proposed models to improve a Truck Factor algorithm and analyze their performance using data from public and private repositories. Results: First Authorship and Recency of Modification have the highest positive and negative correlations with source code knowledge, respectively. Machine learning classifiers outperformed the linear techniques (F-Score = 71% to 73%) in the largest analyzed dataset, but this advantage is unclear in the smallest one. The Truck Factor algorithm using the proposed models could handle developers missed by the previous expertise model with the best average F-Score of 74%. It was perceived as more accurate in computing the Truck Factor of an industrial project. Conclusion: If we analyze F-Score, the studied models have similar performance. However, machine learning classifiers get higher Precision while linear models obtained the highest Recall. Therefore, choosing the best technique depends on the user's tolerance to false positives and negatives. Additionally, the proposed models significantly improved the accuracy of a Truck Factor algorithm, affirming their effectiveness in precisely identifying the key developers within software projects. © 2024 Elsevier B.V.
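    As a hypothetical sketch (made-up commit history, not the paper's models or data), the two measures the study found most strongly correlated with code knowledge, First Authorship and Recency of Modification, can be derived from a file's change history roughly as follows.

      # Made-up history for one file, for illustration only.
      from datetime import date

      commits = [                      # (author, commit date), oldest first
          ("alice", date(2021, 3, 1)),
          ("bob", date(2022, 6, 10)),
          ("alice", date(2023, 1, 5)),
          ("carol", date(2024, 2, 20)),
      ]
      today = date(2024, 6, 1)

      def expertise_features(author):
          authored = [d for a, d in commits if a == author]
          first_authorship = 1.0 if commits[0][0] == author else 0.0
          # Recency of Modification: days since the author's last change to the file.
          days_since_last_change = (today - max(authored)).days if authored else None
          return first_authorship, days_since_last_change

      for dev in ("alice", "bob", "carol"):
          fa, recency = expertise_features(dev)
          print(f"{dev}: first_authorship={fa}, days_since_last_change={recency}")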

  • 21. Dittrich, Yvonne
    et al.
    Lindeberg, Olle
    How use-oriented development can take place. 2004. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 46, no 9, p. 603-617. Article in journal (Refereed)
    Abstract [en]

    Usability is still a problem for software development. As the introduced software changes the use context, use qualities cannot be fully anticipated. Close co-operation between users and developers during development has been proposed as a remedy. Others fear such involvement of users as it might jeopardize planning and control. Based on the observation of an industrial project, we show how user participation and control can be achieved at the same time. The present article discusses the specific measures that allowed for co-operation between users and developers in an industrial context. It indicates measures to improve software development by focusing on use-orientation, i.e. allowing for user-developer co-operation.

  • 22.
    Felderer, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Galster, Matthias
    University of Canterbury, NZL.
    Izurieta, Clemente
    Montana State University, USA.
    Seaman, Carolyn
    University of Maryland, USA.
    Introduction to the Special Issue on value and waste in software engineering. 2022. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 144, article id 106801. Article in journal (Refereed)
    Abstract [en]

    In the context of software engineering, “value” and “waste” can mean different things to different stakeholders. While traditionally value and waste have been considered from a business or economic point of view, there has been a trend in recent years towards a broader perspective that also includes wider human and societal values. This Special Issue explores value and waste aspects in all areas of software engineering, including identifying, quantifying, reasoning about, and representing value and waste, driving value and avoiding waste, and managing value and waste. In this editorial we provide an introduction to the topic and provide an overview of the contributions included in this Special Issue. © 2021

  • 23.
    Felderer, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Pfahl, Dietmar
    University of Tartu, EST.
    Special Section: Automation and Analytics for Greener Software Engineering. 2018. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 95, p. 106-107. Article in journal (Other academic)
  • 24.
    Frattini, Julian
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Fucci, Davide
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendez, Daniel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Spinola, Rodrigo
    Virginia Commonwealth University, Richmond, USA.
    Mandic, Vladimir
    University of Novi Sad, Serbia.
    Tausan, Nebojsa
    University of Novi Sad, Serbia.
    Ahmad, Ovais
    Karlstad University.
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An initial Theory to Understand and Manage Requirements Engineering Debt in Practice. 2023. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 159, article id 107201. Article in journal (Refereed)
    Abstract [en]

    Context

    Advances in technical debt research demonstrate the benefits of applying the financial debt metaphor to support decision-making in software development activities. Although decision-making during requirements engineering has significant consequences, the debt metaphor in requirements engineering is inadequately explored.

    Objective

    We aim to conceptualize how the debt metaphor applies to requirements engineering by organizing concepts related to practitioners’ understanding and managing of requirements engineering debt (RED).

    Method

    We conducted two in-depth expert interviews to identify key requirements engineering debt concepts and construct a survey instrument. We surveyed 69 practitioners worldwide regarding their perception of the concepts and developed an initial analytical theory.

    Results

    We propose a RED theory that aligns key concepts from technical debt research but emphasizes the specific nature of requirements engineering. In particular, the theory consists of 23 falsifiable propositions derived from the literature, the interviews, and survey results.

    Conclusions

    The concepts of requirements engineering debt are perceived to be similar to their technical debt counterpart. Nevertheless, measuring and tracking requirements engineering debt are immature in practice. Our proposed theory serves as the first guide toward further research in this area.

  • 25.
    Garousi, Vahid
    et al.
    Queen's University Belfast, GBR.
    Bauer, Sara
    University of Innsbruck, AUT.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    NLP-assisted software testing: a systematic mapping of the literature. 2020. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 126, article id 106321. Article, review/survey (Refereed)
    Abstract [en]

    Context: To reduce the manual effort of extracting test cases from natural-language requirements, many approaches based on Natural Language Processing (NLP) have been proposed in the literature. Given the large number of approaches in this area, and since many practitioners are eager to utilize such techniques, it is important to synthesize and provide an overview of the state-of-the-art in this area. Objective: Our objective is to summarize the state-of-the-art in NLP-assisted software testing, which could help practitioners to potentially utilize those NLP-based techniques. Moreover, this can benefit researchers by providing an overview of the research landscape. Method: To address the above need, we conducted a survey in the form of a systematic literature mapping (classification). After compiling an initial pool of 95 papers, we conducted a systematic voting, and our final pool included 67 technical papers. Results: This review paper provides an overview of the contribution types presented in the papers, types of NLP approaches used to assist software testing, types of required input requirements, and a review of tool support in this area. Some key results we have detected are: (1) only four of the 38 tools (11%) presented in the papers are available for download; (2) a large proportion of the papers (30 of 67) provided only a shallow exposure to the NLP aspects (almost no details). Conclusion: This paper would benefit both practitioners and researchers by serving as an “index” to the body of knowledge in this area. The results could help practitioners utilize the existing NLP-based techniques; this in turn reduces the cost of test-case design and decreases the amount of human resources spent on test activities. After sharing this review with some of our industrial collaborators, initial insights show that this review can indeed be useful and beneficial to practitioners. © 2020 Elsevier B.V.
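    A toy, pattern-based sketch (requirements invented, not from the mapped studies) of the kind of extraction that NLP-assisted test-generation approaches automate, at far larger scale and with full NLP pipelines rather than a single regular expression:

      # Invented requirements, for illustration only.
      import re

      requirements = [
          "The system shall lock the account when three login attempts fail.",
          "The system shall send a confirmation email when an order is placed.",
      ]
      pattern = re.compile(r"The system shall (?P<action>.+) when (?P<condition>.+)\.")

      for i, req in enumerate(requirements, start=1):
          match = pattern.match(req)
          if match:
              # Turn the requirement into a Given/Then test-case skeleton.
              print(f"Test case {i}:")
              print(f"  Given {match.group('condition')}")
              print(f"  Then the system should {match.group('action')}")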

  • 26.
    Garousi, Vahid
    et al.
    Wageningen University, NLD.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology.
    Karapıçak, Çağrı Murat
    Middle East Technical, University (METU), Ankara, TUR.
    Yılmaz, Uğur
    Hacettepe University, Ankara, TUR.
    Testing embedded software: A survey of the literature. 2018. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 104, p. 14-45. Article in journal (Refereed)
    Abstract [en]

    Context: Embedded systems have overwhelming penetration around the world. Innovations are increasingly triggered by software embedded in automotive, transportation, medical-equipment, communication, energy, and many other types of systems. To test embedded software in an effective and efficient manner, a large number of test techniques, approaches, tools and frameworks have been proposed by both practitioners and researchers in the last several decades. Objective: However, reviewing and getting an overview of the entire state-of-the-art and practice in this area is challenging for a practitioner or a (new) researcher. Unfortunately, as a result, we often see that many companies reinvent the wheel (by designing a test approach that is new to them, but already exists in the domain) due to not having an adequate overview of what already exists in this area. Method: To address the above need, we conducted and report in this paper a systematic literature review (SLR) in the form of a systematic literature mapping (SLM) in this area. After compiling an initial pool of 588 papers, a systematic voting about inclusion/exclusion of the papers was conducted among the authors, and our final pool included 312 technical papers. Results: Among the various aspects that we aim at covering, our review covers the types of testing topics studied, types of testing activity, types of test artifacts generated (e.g., test inputs or test code), and the types of industries on which studies have focused, e.g., automotive and home appliances. Furthermore, we assess the benefits of this review by asking several active test engineers in the Turkish embedded software industry to review its findings and provide feedback as to how this review has benefitted them. Conclusion: The results of this review paper have already benefitted several of our industry partners in choosing the right test techniques / approaches for their embedded software testing challenges. We believe that it will also be useful for the large world-wide community of software engineers and testers in the embedded software industry, by serving as an "index" to the vast body of knowledge in this important area. Our results will also benefit researchers in observing the latest trends in this area and in identifying the topics which need further investigation.

  • 27.
    Garousi, Vahid
    et al.
    Wageningen University, NLD.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mäntylä, Mika
    University of Oulu, FIN.
    Guidelines for including grey literature and conducting multivocal literature reviews in software engineering. 2019. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 106, p. 101-121. Article in journal (Refereed)
    Abstract [en]

    Context: A Multivocal Literature Review (MLR) is a form of a Systematic Literature Review (SLR) which includes the grey literature (e.g., blog posts, videos and white papers) in addition to the published (formal) literature (e.g., journal and conference papers). MLRs are useful for both researchers and practitioners since they provide summaries of both the state-of-the-art and the state-of-the-practice in a given area. MLRs are popular in other fields and have recently started to appear in software engineering (SE). As more MLR studies are conducted and reported, it is important to have a set of guidelines to ensure high quality of MLR processes and their results. Objective: There are several guidelines for conducting SLR studies in SE. However, several phases of MLRs differ from those of traditional SLRs, for instance with respect to the search process and source quality assessment. Therefore, SLR guidelines are only partially useful for conducting MLR studies. Our goal in this paper is to present guidelines on how to conduct MLR studies in SE. Method: To develop the MLR guidelines, we benefit from several inputs: (1) existing SLR guidelines in SE, (2) a literature survey of MLR guidelines and experience papers in other fields, and (3) our own experiences in conducting several MLRs in SE. We took the popular SLR guidelines of Kitchenham and Charters as the baseline and extended/adapted them to conduct MLR studies in SE. All derived guidelines are discussed in the context of an already-published MLR in SE as the running example. Results: The resulting guidelines cover all phases of conducting and reporting MLRs in SE, from the planning phase, over conducting the review, to the final reporting of the review. In particular, we believe that incorporating and adapting a vast set of experience-based recommendations from MLR guidelines and experience papers in other fields has enabled us to propose a set of guidelines with solid foundations. Conclusion: Having been developed on the basis of several types of experience and evidence, the provided MLR guidelines will support researchers to effectively and efficiently conduct new MLRs in any area of SE. The authors recommend that researchers utilize these guidelines in their MLR studies and then share their lessons learned and experiences. © 2018

    Download full text (pdf)
    fulltext
  • 28.
    Garousi, Vahid
    et al.
    Wageningen University, NLD.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nur Kılıçaslan, Feyza Nur
    Hacettepe Üniversitesi, TUR.
    A survey on software testability2019In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 108, p. 35-64Article in journal (Refereed)
    Abstract [en]

    Context: Software testability is the degree to which a software system or a unit under test supports its own testing. To predict and improve software testability, a large number of techniques and metrics have been proposed by both practitioners and researchers in the last several decades. Reviewing and getting an overview of the entire state-of-the-art and state-of-the-practice in this area is often challenging for a practitioner or a new researcher. Objective: Our objective is to summarize the body of knowledge in this area and to benefit the readers (both practitioners and researchers) in preparing, measuring and improving software testability. Method: To address the above need, the authors conducted a survey in the form of a systematic literature mapping (classification) to find out what we as a community know about this topic. After compiling an initial pool of 303 papers and applying a set of inclusion/exclusion criteria, our final pool included 208 papers (published between 1982 and 2017). Results: The area of software testability has been comprehensively studied by researchers and practitioners. Approaches for measuring and for improving testability are the most frequently addressed topics in the papers. The two most often mentioned factors affecting testability are observability and controllability. Common ways to improve testability are testability transformation, improving observability, adding assertions, and improving controllability. Conclusion: This paper serves both researchers and practitioners as an "index" to the vast body of knowledge in the area of testability. The results could help practitioners measure and improve software testability in their projects. To assess the potential benefits of this review paper, we shared its draft version with two of our industrial collaborators. They stated that they found the review useful and beneficial in their testing activities. Our results can also benefit researchers in observing the trends in this area and in identifying the topics that require further investigation.

  • 29.
    Garousi, Vahid
    et al.
    Hacettepe University, Turkey.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ozkan, Baris
    Atilim University, Turkey.
    Challenges and best practices in industry-academia collaborations in software engineering: A systematic literature review2016In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 79, p. 106-127Article in journal (Refereed)
    Abstract [en]

    Context: The global software industry and the software engineering (SE) academia are two large communities. However, unfortunately, the level of joint industry-academia collaboration in SE is still relatively low compared to the amount of activity in each of the two communities. It seems that the two 'camps' show only limited interest/motivation to collaborate with one another. Many researchers and practitioners have written about the challenges, success patterns (what to do, i.e., how to collaborate) and anti-patterns (what not to do) for industry-academia collaborations. Objective: To identify (a) the challenges, so that risks to the collaboration can be avoided by being aware of them, and (b) the best practices, to provide an inventory of practices (patterns) allowing for an informed choice of practices to use when planning and conducting collaborative projects. Method: A systematic review has been conducted. Synthesis has been done using grounded-theory based coding procedures. Results: Through thematic analysis we identified 10 challenge themes and 17 best practice themes. A key outcome was the inventory of best practices; the most commonly recommended ones across contexts were holding regular workshops and seminars with industry, ensuring continuous learning on both the industry and academic sides, ensuring management engagement, having a champion, basing research on real-world problems, showing explicit benefits to the industry partner, being agile during the collaboration, and co-locating the researcher on the industry side. Conclusion: Given the importance of industry-academia collaboration for conducting research of high practical relevance, we provide a synthesis of challenges and best practices, which can be used by researchers and practitioners to make informed decisions on how to structure their collaborations.

  • 30.
    Garousi, Vahid
    et al.
    Queen's University Belfast, GBR.
    Rainer, Austen
    Queen's University Belfast, GBR.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mäntylä, Mika V.
    University of Oulu, FIN.
    Introduction to the Special Issue on: Grey Literature and Multivocal Literature Reviews (MLRs) in software engineering2022In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 141, article id 106697Article in journal (Refereed)
    Abstract [en]

    In parallel to the academic (peer-reviewed) literature (e.g., journal and conference papers), an enormous amount of grey literature (GL) has accumulated since the inception of software engineering (SE). GL is often defined as "literature that is not formally published in sources such as books or journal articles", e.g., in the form of trade magazines, online blog posts, technical reports, and online videos such as tutorial and presentation videos. GL is typically produced by SE practitioners. We have observed that researchers are increasingly using and benefitting from the knowledge available within GL. Related to the notion of GL is the notion of Multivocal Literature Reviews (MLRs) in SE: an MLR is a form of a Systematic Literature Review (SLR) which includes knowledge and/or evidence from the GL in addition to the peer-reviewed literature. MLRs are useful for both researchers and practitioners because they provide summaries of both the state-of-the-art and -practice in a given area. MLRs are popular in other fields and have started to appear in the SE community. A Special Issue (SI) focusing on GL and MLRs in SE is therefore timely. From the pool of 13 submitted papers, and after following a rigorous peer review process, seven papers were accepted for this SI. In this introduction we provide a brief overview of GL and MLRs in SE, and then a brief summary of the seven papers published in this SI. © 2021

  • 31. Gorschek, Tony
    et al.
    Davis, Alan
    Requirements Engineering: In Search of the Dependent Variables2008In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 50, no 1-2, p. 67-75Article in journal (Refereed)
    Abstract [en]

    When software development teams modify their requirements engineering process as an independent variable, they often examine the implications of these process changes by assessing the quality of the products of the requirements engineering process, e.g., a software requirements specification (SRS). Using the quality of the SRS as the dependent variable is flawed. As an alternative, this paper presents a framework of dependent variables that serves as a full range for requirements engineering quality assessment. In this framework, the quality of the SRS itself is just the first level. Higher, more significant levels include whether the project was successful and whether the resulting product was successful. Still higher levels include whether the company was successful and whether the impact on society as a whole was positive or negative. © 2007 Elsevier B.V. All rights reserved.

    Download full text (pdf)
    FULLTEXT01
  • 32.
    Guimarães, Gleyser
    et al.
    Federal University of Campina Grande, Brazil.
    Costa, Icaro
    VIRTUS Research, Development, and Innovation Center, Brazil.
    Perkusich, Mirko
    VIRTUS Research, Development, and Innovation Center, Brazil.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Santos, Danilo
    Federal University of Campina Grande, Brazil.
    Almeida, Hyggo
    Federal University of Campina Grande, Brazil.
    Perkusich, Angelo
    Federal University of Campina Grande, Brazil.
    Investigating the relationship between personalities and agile team climate: A replicated study2024In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 169, article id 107407Article in journal (Refereed)
    Abstract [en]

    Context: A study in 2020 (S1) explored the relationship between personality traits and team climate perceptions of software professionals working in agile teams. S1 surveyed 43 software professionals from a large telecom company in Sweden and found that a person's ability to get along with team members (Agreeableness) significantly and positively influences the perceived level of team climate. Further, they observed that personality traits accounted for less than 15% of the variance in team climate. Objective: The study described herein replicates S1 using data gathered from 148 software professionals from an industrial partner in Brazil. Method: We used the same research methods as S1. We employed a survey to gather the personality and climate data, which was later analyzed using correlation and regression analyses. The former aimed to measure the level of association between personality traits and climate, and the latter to estimate team climate factors using personality traits as predictors. Results: The results of the correlation analyses showed statistically significant and positive associations between two personality traits, Agreeableness and Conscientiousness, and all five team climate factors. There was also a significant and positive association between Openness and Team Vision. Our results corroborate those from S1 with respect to two personality traits, Openness and Agreeableness; however, in S1, Openness was significantly and positively associated with Support for Innovation (not Team Vision). In regard to Agreeableness, in S1 it was also significantly and positively associated with perceived team climate. Furthermore, our regression models also support S1's findings: personality traits accounted for less than 15% of the variance in team climate. Conclusion: Despite differences in location, sample size, and operational domain, our study confirmed S1's results on the limited influence of personality traits. Agreeableness and Openness were significant predictors of team climate, although the predictive factors differed. These discrepancies highlight the necessity for further research, incorporating larger samples and additional predictor variables, to better comprehend the intricate relationship between personality traits and team climate across diverse cultural and professional settings. © 2024 Elsevier B.V.
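
    A minimal sketch of the kind of analysis the abstract above describes (pairwise correlations between personality traits and team climate factors, followed by a regression to estimate explained variance). It is illustrative only: the column names, climate factor labels, and synthetic data are assumptions, not the replication's actual instruments or scripts.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from scipy.stats import pearsonr

    traits = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]
    climate_factors = ["team_vision", "participative_safety", "support_for_innovation", "task_orientation"]

    # Synthetic stand-in data (148 respondents, as in the replication); a real analysis
    # would load the survey responses instead.
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.normal(size=(148, len(traits))), columns=traits)
    for factor in climate_factors:
        df[factor] = 0.3 * df["agreeableness"] + rng.normal(size=148)

    # Correlation analysis: association between each trait and each climate factor
    for trait in traits:
        for factor in climate_factors:
            r, p = pearsonr(df[trait], df[factor])
            if p < 0.05:
                print(f"{trait} vs {factor}: r={r:.2f}, p={p:.3f}")

    # Regression analysis: traits as predictors of one climate factor; R^2 is the share
    # of variance explained (reported below 0.15 in both S1 and the replication)
    X = sm.add_constant(df[traits])
    print(sm.OLS(df["team_vision"], X).fit().rsquared)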

  • 33. Holt, Nina Elisabeth
    et al.
    Briand, Lionel
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Empirical evaluations on the cost-effectiveness of state-based testing: An industrial case study2014In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 56, no 8, p. 890-910Article in journal (Refereed)
    Abstract [en]

    Context: Test models describe the expected behavior of the software under test and provide the basis for test case and oracle generation. When test models are expressed as UML state machines, this is typically referred to as state-based testing (SBT). Despite the importance of being systematic while testing, all testing activities are limited by resource constraints. Thus, reducing the cost of testing while ensuring sufficient fault detection is a common goal in software development. No rigorous industrial case studies of SBT have yet been published. Objective: In this paper, we evaluate the cost-effectiveness of SBT on actual control software by studying the combined influence of four testing aspects: coverage criterion, test oracle, test model and unspecified behavior (sneak paths). Method: An industrial case study was used to investigate the cost-effectiveness of SBT. To enable the evaluation of SBT techniques, a model-based testing tool was configured and used to automatically generate test suites. The test suites were evaluated using 26 real faults collected in a field study. Results: Results show that the more detailed and rigorous the test model and oracle, the higher the fault-detection ability of SBT. A less precise oracle achieved 67% fault detection, but the overall cost reduction of 13% was not enough to make the loss an acceptable trade-off. Removing details from the test model significantly reduced the cost by 85%. Interestingly, only a 24–37% reduction in fault detection was observed. Testing for sneak paths killed the remaining eleven mutants that could not be killed by the conformance test strategies. Conclusions: Each of the studied testing aspects influences cost-effectiveness and must be carefully considered in context when selecting strategies. Regardless of these choices, sneak-path testing is a necessary step in SBT since sneak paths are common while also undetectable by conformance testing.
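
    To make the "sneak path" notion in the abstract above concrete, the sketch below enumerates the (state, event) pairs a state model leaves unspecified; sneak-path tests send exactly those events and check that the implementation does not silently change state. The state machine is invented for illustration and is not the control software studied in the paper.

    # (state, event) -> next state, as specified in a (hypothetical) test model
    SPEC = {
        ("Idle", "start"): "Running",
        ("Running", "stop"): "Idle",
        ("Running", "pause"): "Paused",
        ("Paused", "resume"): "Running",
    }
    STATES = {"Idle", "Running", "Paused"}
    EVENTS = {"start", "stop", "pause", "resume"}

    def sneak_path_cases():
        """Enumerate (state, event) pairs the model does not specify.

        Conformance testing only exercises pairs in SPEC; sneak-path testing sends
        the remaining pairs and checks that the implementation stays in its current
        state (or rejects the event) instead of silently transitioning.
        """
        return [(s, e) for s in sorted(STATES) for e in sorted(EVENTS) if (s, e) not in SPEC]

    if __name__ == "__main__":
        for state, event in sneak_path_cases():
            print(f"sneak-path test: in state {state!r}, send unspecified event {event!r}")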

  • 34.
    Iftikhar, Umar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A tertiary study on links between source code metrics and external quality attributes2024In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 165, article id 107348Article, review/survey (Refereed)
    Abstract [en]

    Context: Several secondary studies have investigated the relationship between internal quality attributes, source code metrics and external quality attributes. Sometimes they have contradictory results. Objective: We synthesize evidence of the link between internal quality attributes, source code metrics and external quality attributes, along with the efficacy of the prediction models used. Method: We conducted a tertiary review to identify, evaluate and synthesize secondary studies. We used several characteristics of secondary studies as indicators for the strength of evidence and considered them when synthesizing the results. Results: From 711 secondary studies, we identified 15 secondary studies that have investigated the link between source code and external quality. Our results show: (1) primarily, the focus has been on object-oriented systems, (2) maintainability and reliability are most often linked to internal quality attributes and source code metrics, with only one secondary study reporting evidence for security, (3) only a small set of complexity, coupling, and size-related source code metrics report a consistent positive link with maintainability and reliability, and (4) prediction models based on the group method of data handling (GMDH) have performed better than other prediction models for maintainability prediction. Conclusions: Based on our results, lines of code and the coupling, complexity and cohesion metrics from the Chidamber & Kemerer (CK) suite are good indicators of maintainability, with consistent evidence from high and moderate-quality secondary studies. Similarly, four CK metrics related to coupling, complexity and cohesion are good indicators of reliability, while inheritance and certain cohesion metrics show no consistent evidence of links to maintainability and reliability. Further empirical studies are needed to explore the link between internal quality attributes, source code metrics and other external quality attributes, including functionality, portability, and usability. The results will help researchers and practitioners understand the body of knowledge on the subject and identify future research directions. © 2023 The Author(s)

    Download full text (pdf)
    fulltext
  • 35.
    Irshad, Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Ericsson AB, Sweden.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Supporting Refactoring of BDD Specifications - An Empirical Study2022In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 141, article id 106717Article in journal (Refereed)
    Abstract [en]

    Context: Behavior-driven development (BDD) is a variant of test-driven development where specifications are described in a structured domain-specific natural language. Although refactoring is a crucial activity of BDD, little research is available on the topic.

    Objective: To support practitioners in refactoring BDD specifications by (1) proposing semi-automated approaches to identify refactoring candidates; (2) defining refactoring techniques for BDD specifications; and (3) evaluating the proposed identification approaches in an industry context.

    Method: Using Action Research, we have developed an approach for identifying refactoring candidates in BDD specifications based on two measures of similarity and applied the approach in two projects of a large software organization. The accuracy of the measures for identifying refactoring candidates was then evaluated against an approach based on machine learning and a manual approach based on practitioner perception.

    Results: We proposed two measures of similarity to support the identification of refactoring candidates in a BDD specification base: (1) normalized compression similarity (NCS) and (2) similarity ratio (SR). A semi-automated approach based on NCS and SR was developed and applied to two industrial cases to identify refactoring candidates. Our results show that our approach can identify candidates for refactoring 60 times faster than a manual approach. Our results furthermore showed that our measures accurately identified refactoring candidates compared with a manual identification by software practitioners and outperformed an ML-based text classification approach. We also described four types of refactoring techniques applicable to BDD specifications: merging candidates, restructuring candidates, deleting duplicates, and renaming specification titles.

    Conclusion: Our results show that NCS and SR can help practitioners in accurately identifying BDD specifications that are suitable candidates for refactoring, which also decreases the time for identifying refactoring candidates.
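
    A hedged sketch of a compression-based similarity between two BDD scenarios, in the spirit of the NCS measure named above. The exact formula in the paper may differ; this version computes the well-known normalized compression distance (NCD) with zlib and reports 1 - NCD as a similarity score, where values close to 1 flag near-duplicate scenarios as candidates for, e.g., merging.

    import zlib

    def _compressed_size(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    def compression_similarity(a: str, b: str) -> float:
        """1 - NCD(a, b); higher means more similar (near-duplicate scenarios)."""
        x, y = a.encode("utf-8"), b.encode("utf-8")
        cx, cy, cxy = _compressed_size(x), _compressed_size(y), _compressed_size(x + y)
        ncd = (cxy - min(cx, cy)) / max(cx, cy)
        return 1.0 - ncd

    scenario_a = "Given a registered user When they log in Then the dashboard is shown"
    scenario_b = "Given a registered user When they sign in Then the dashboard is shown"
    print(round(compression_similarity(scenario_a, scenario_b), 2))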

    Download full text (pdf)
    fulltext
  • 36.
    Irshad, Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Poulding, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A systematic literature review of software requirements reuse approaches2018In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 93, no Jan, p. 223-245Article, review/survey (Refereed)
    Abstract [en]

    Context: Early software reuse is considered the most beneficial form of software reuse. Hence, previous research has focused on supporting the reuse of software requirements. Objective: This study aims to identify and investigate the current state of the art with respect to (a) what requirement reuse approaches have been proposed, (b) the methods used to evaluate the approaches, (c) the characteristics of the approaches, and (d) the quality of empirical studies on requirements reuse with respect to rigor and relevance. Method: We conducted a systematic review, and a combination of snowball sampling and database search was used to identify the studies. The rigor and relevance scoring rubric was used to assess the quality of the empirical studies. Multiple researchers were involved in each step to increase the reliability of the study. Results: Sixty-nine studies were identified that describe requirements reuse approaches. The majority of the approaches used structuring and matching of requirements as a method to support requirements reuse, and text-based artefacts were commonly used as an input to these approaches. Further evaluation of the studies revealed that the majority of the approaches have not been validated in industry. The subset of empirical studies (22 in total) was analyzed for rigor and relevance, and two studies achieved the maximum score for rigor and relevance based on the rubric. It was found that mostly text-based requirements reuse approaches were validated in industry. Conclusion: From the review, it was found that a number of approaches already exist in the literature, but many approaches have not been validated in industry. The evaluation of the rigor and relevance of the empirical studies shows that these do not contain details of context, validity threats, and industrial settings, thus highlighting the need for industrial evaluation of the approaches. © 2017 Elsevier B.V.

  • 37.
    Jabangwe, Ronald
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Lero / Regulated Software Research Centre.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Hessbo, Emil
    Distributed Software Development in an Offshore Outsourcing Project: A Case Study of Source Code Evolution and Quality2016In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 72, p. 125-136Article in journal (Refereed)
    Abstract [en]

    Context: Offshore outsourcing collaborations can result in distributed development, which has been linked to quality-related concerns. However, there are few studies that focus on the implications of distributed development for quality, and they report inconsistent findings using different proxies for quality. Thus, there is a need for more studies, as well as for identifying useful proxies for certain distributed contexts. The presented empirical study was performed in a context that involved offshore outsourcing vendors in a multisite distributed development setting.

    Objective: The aim of the study is to investigate how quality changes during evolution in a distributed development environment that incurs organizational changes in terms of number of companies involved.

    Method: A case study approach is followed in the investigation. Only post-release defects are used as a proxy for external quality, due to unreliable pre-release defect data such as defects reported during integration. Focus group meetings were also held with practitioners.

    Results: The results suggest that practices that can be grouped into product, people, and process categories can help ensure post-release quality. However, post-release defects are insufficient for showing a conclusive impact of the development setting on quality. This is because the development teams worked independently as isolated distributed teams, and integration defects would better reflect the impact of the development setting on quality.

    Conclusions: The mitigation practices identified can be useful to practitioners who are planning to engage in similar globally distributed development projects. Finally, it is important to take into consideration the arrangement of distributed development teams in global projects, and to use the context to identify appropriate proxies for quality in order to draw correct conclusions about the implications of the context. This would help provide practitioners with well-founded findings about the impact of globally distributed development settings on quality.

    Download full text (pdf)
    fulltext
  • 38. Kasoju, Abhinaya
    et al.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Mäntylä, Mika V.
    Analyzing an automotive testing process with evidence-based software engineering2013In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 55, no 7, p. 1237-1259Article in journal (Refereed)
    Abstract [en]

    Evidence-based software engineering (EBSE) provides a process for solving practical problems based on a rigorous research approach. The primary focus so far has been on mapping and aggregating evidence through systematic reviews. Objectives: We extend existing work on evidence-based software engineering by using the EBSE process in an industrial case to help an organization improve its automotive testing process. With this we contribute (1) experiences of using an evidence-based process to analyze a real-world automotive test process and (2) evidence of challenges and related solutions for automotive software testing processes. Methods: In this study we perform an in-depth investigation of an automotive test process using an extended EBSE process, including case study research (to gain an understanding of practical questions and define a research scope), a systematic literature review (to identify solutions in the literature), and value stream mapping (to map out an improved automotive test process based on the current situation and the improvement suggestions identified). These are followed by reflections on the EBSE process used. Results: In the first step of the EBSE process we identified 10 challenge areas with a total of 26 individual challenges. For 15 out of those 26 challenges, our domain-specific systematic literature review identified solutions. Based on the input from the challenges and the solutions, we created a value stream map of the current and future process. Conclusions: Overall, we found that the evidence-based process as presented in this study helps in the technology transfer of research results to industry, but at the same time some challenges lie ahead (e.g., scoping systematic reviews to focus more on concrete industry problems, and understanding strategies for conducting EBSE with respect to effort and quality of the evidence).

  • 39. Khurum, Mahvish
    et al.
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    The Contextual Nature of Innovation: An Empirical Investigation of Three Software Intensive Products2015In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 57, no 1Article in journal (Refereed)
    Abstract [en]

    Context: New products create significant opportunities for differentiation and competitive advantage. To increase the chances of new product success, a universal set of critical activities and determinants has been recommended. Some researchers believe, however, that these factors are not universal, but contextual. Objective: This paper reports the innovation processes followed to develop three software-intensive products, in order to understand how and why innovation practice is dependent on innovation context. Method: This paper reports innovation processes and practices with an in-depth multi-case study of three software product innovations from Ericsson, IBM, and Rorotika. It describes the actual innovation processes followed in the three cases and discusses the observed innovation practice, relating it to the state of the art. Results: The cases point to a set of contextual factors that influence the choice of innovation activities and determinants for developing successful product innovations. The cases provide evidence that innovation practice cannot be standardized, but is contextual in nature. Conclusion: The rich description of the interaction between context and innovation practice enables future investigations into contextual elements that influence innovation practice, and calls for the creation of frameworks enabling activity and determinant selection for a given context, since one size does not fit all.

    Download full text (pdf)
    fulltext
  • 40.
    Kosenkov, Oleksandr
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Elahidoost, Parisa
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Fischbach, Jannik
    Fortiss GmbH, Germany.
    Mendez, Daniel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Fucci, Davide
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mohanani, Rahul
    University of Jyväskylä, Finland.
    Systematic mapping study on requirements engineering for regulatory compliance of software systems2025In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 178, article id 107622Article, review/survey (Refereed)
    Abstract [en]

    Context: As the diversity and complexity of regulations affecting Software-Intensive Products and Services (SIPS) are increasing, software engineers need to address the growing regulatory scrutiny. We argue that, as with any other non-negotiable requirements, SIPS compliance should be addressed early in SIPS engineering, i.e., during requirements engineering (RE).

    Objectives: Given the expanding regulatory landscape, existing research offers only scattered insights into the regulatory compliance of SIPS. This study addresses the pressing need for a structured overview of the state of the art in software RE and its contribution to the regulatory compliance of SIPS.

    Method: We conducted a systematic mapping study to provide an overview of the current state of research regarding challenges, principles, and practices for regulatory compliance of SIPS related to RE. We focused on the role of RE and its contribution to other SIPS lifecycle process areas. We retrieved 6914 studies published from 2017 (January 1) until 2023 (December 31) from four academic databases, which we filtered down to 280 relevant primary studies.

    Results: We identified and categorized the RE-related challenges in regulatory compliance of SIPS and their potential connection to six types of principles and practices addressing challenges. We found that about 13.6% of the primary studies considered the involvement of both software engineers and legal experts in developing principles and practices. About 20.7% of primary studies considered RE in connection to other process areas. Most primary studies focused on a few popular regulation fields (privacy, quality) and application domains (healthcare, software development, avionics). Our results suggest that there can be differences in terms of challenges and involvement of stakeholders across different fields of regulation.

    Conclusion: Our findings highlight the need for an in-depth investigation of stakeholders’ roles, relationships between process areas, and specific challenges for distinct regulatory fields to guide research and practice. 

    Download full text (pdf)
    fulltext
  • 41. Kosti, Makrina Viola
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, School of Computing.
    Angelis, Lefteris
    Personality, emotional intelligence and work preferences in software engineering: An empirical study2014In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 56, no 8, p. 973-990Article in journal (Refereed)
    Abstract [en]

    Context: There is an increasing awareness among Software Engineering (SE) researchers and practitioners that more focus is needed on understanding the engineers who develop software. Previous studies show significant associations between the personalities of software engineers and their work preferences. Objective: Various studies on personality in SE have found large, small or no effects, and there is no consensus on the importance of psychometric measurements in SE. There is also a lack of studies employing other psychometric instruments or using larger datasets. We aim to evaluate our results in a larger sample, with software engineers at an earlier stage of their careers, using advanced statistics. Method: An operational replication study in which extensive psychometric data were collected from 279 master's-level students in an SE program at a Swedish university. Personality data based on the Five-Factor Model, the Trait Emotional Intelligence Questionnaire and Self-compassion were collected. Statistical analysis investigated associations between psychometrics and work preferences, and the results were compared to our previous findings from 47 SE professionals. Results: The analysis confirms the existence of two main clusters of software engineers, one with more "intense" personalities than the other. This corroborates our earlier results on SE professionals. The student data also show similar associations between personalities and work preferences. However, for other associations there are differences due to the different population of subjects. We also found connections between emotional intelligence and work preferences, while no associations were found for self-compassion. Conclusion: The associations can help managers to predict and adapt projects and tasks to available staff. The results also show that the Emotional Intelligence instrument can be predictive. The research methods and analytical tools we employ can detect subtle associations and reflect differences between groups and populations, and thus can be important tools for future research as well as industrial practice.
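
    Purely illustrative sketch of splitting psychometric profiles into two clusters, analogous to the two personality clusters reported above. The feature names, synthetic data, and choice of k-means are assumptions; the study's own statistical pipeline is not reproduced here.

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    features = ["openness", "conscientiousness", "extraversion",
                "agreeableness", "neuroticism", "trait_ei"]

    # Synthetic stand-in for the 279 student profiles; a real analysis would load the
    # questionnaire scores instead.
    rng = np.random.default_rng(1)
    df = pd.DataFrame(rng.normal(size=(279, len(features))), columns=features)

    X = StandardScaler().fit_transform(df[features])
    df["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Compare the two mean profiles (the "intense" vs. less intense groups, in the paper's terms)
    print(df.groupby("cluster")[features].mean().round(2))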

    Download full text (pdf)
    FULLTEXT01
  • 42. Kuzniarz, Ludwik
    et al.
    Angelis, Lefteris
    Empirical extension of a classification framework for addressing consistency in model based development2011In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 53, no 3, p. 214-229Article in journal (Refereed)
    Abstract [en]

    Context: Consistency constitutes an important aspect of the practical realization of modeling ideas in software development, and the related research is diverse. A classification framework has been developed to aid model-based software construction by categorizing research problems related to consistency. However, the framework does not include information on the importance of the classification elements. Objective: The aim was to extend the classification framework with information about the relative importance of the elements constituting the classification. The research question was how to express and obtain this information. Method: A survey was conducted on a sample of 24 stakeholders from academia and industry, with different roles, who answered a quantitative questionnaire. Specifically, the respondents prioritized perspectives and issues using an extended hierarchical voting scheme based on the hundred dollar test. The numerical data obtained were first weighted and normalized and then analyzed using descriptive statistics and bar charts. Results: The detailed analysis of the data revealed the relative importance of consistency perspectives and issues under different views, allowing for the desired extension of the classification framework with empirical information. The most highly valued issues come from the pragmatics perspective. These issues are the most important for tool builders and practitioners from industry, while for respondents from the academia/theory group some issues from the concepts perspective are equally important. Conclusion: The method of using empirical data from a hierarchical cumulative voting scheme to extend an existing research classification framework is useful for including information regarding the importance of the classification elements.
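
    A small worked example of hierarchical cumulative voting based on the hundred dollar test, the prioritization scheme mentioned above: a respondent spreads 100 points over perspectives and, within each perspective, 100 points over its issues; an issue's weight is the product of the two normalized shares, and weights would then be averaged across respondents. The perspective and issue names below are hypothetical, not the survey's actual items.

    votes = {  # one respondent's allocations
        "perspectives": {"concepts": 30, "pragmatics": 50, "notation": 20},
        "issues": {
            "concepts": {"definition of consistency": 60, "formal semantics": 40},
            "pragmatics": {"tool support": 70, "process integration": 30},
            "notation": {"diagram overlap": 100},
        },
    }

    def normalized_issue_weights(v):
        """Weight of each (perspective, issue) pair; all weights sum to 1.0."""
        total_p = sum(v["perspectives"].values())  # normalize in case the sum is not exactly 100
        weights = {}
        for perspective, points in v["perspectives"].items():
            p_share = points / total_p
            issue_votes = v["issues"][perspective]
            total_i = sum(issue_votes.values())
            for issue, ipoints in issue_votes.items():
                weights[(perspective, issue)] = p_share * ipoints / total_i
        return weights

    for (perspective, issue), w in sorted(normalized_issue_weights(votes).items(), key=lambda kv: -kv[1]):
        print(f"{perspective:>10} / {issue:<26} {w:.2f}")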

    Download full text (pdf)
    FULLTEXT01
  • 43.
    Laiq, Muhammad
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Engström, Emelie
    Lund University.
    A data-driven approach for understanding invalid bug reports: An industrial case study2023In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 164, article id 107305Article in journal (Refereed)
    Abstract [en]

    Context: Bug reports created during software development and maintenance do not always describe deviations from a system's valid behavior. Such invalid bug reports may consume significant resources and adversely affect the prioritization and resolution of valid bug reports. There is a need to identify preventive actions to reduce the inflow of invalid bug reports. Existing research has shown that manually analyzing invalid bug report descriptions provides cues regarding preventive actions. However, such a manual approach is not cost-effective due to the time required to analyze a sufficiently large number of bug reports needed to identify useful patterns. Furthermore, the analysis needs to be repeated as the underlying causes of invalid bug reports change over time. Objective: In this study, we propose and evaluate the use of Latent Dirichlet Allocation (LDA), a topic modeling approach, to support practitioners in suggesting preventive actions to avoid the creation of similar invalid bug reports in the future. Method: In an industrial case study, we first manually analyzed descriptions of invalid bug reports to identify common patterns in their descriptions. We further investigated to what extent LDA can support this manual process. We used expert-based validation to evaluate the relevance of identified common patterns and their usefulness in suggesting preventive measures. Results: We found that invalid bug reports have common patterns that are perceived as relevant, and they can be used to devise preventive measures. Furthermore, the identification of common patterns can be supported with automation. Conclusion: Using LDA, practitioners can effectively identify representative groups of bug reports (i.e., relevant common patterns) from a large number of bug reports and analyze them further to devise preventive measures. © 2023 The Author(s)
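
    A hedged sketch of the LDA-based grouping described above: fit a topic model on invalid-bug-report descriptions and inspect the top words per topic as candidate common patterns for expert review. The example descriptions, preprocessing, and number of topics are invented for illustration and are not the study's actual data or configuration.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Invented invalid-report descriptions; in practice these come from the issue tracker.
    docs = [
        "app crashes on login but only on the tester's outdated build",
        "cannot reproduce: works after clearing the cache",
        "duplicate of an already reported login issue",
        "feature request mislabeled as a bug",
        "crash caused by unsupported third-party plugin, not our code",
        "report lacks steps to reproduce and log files",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    dtm = vectorizer.fit_transform(docs)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(dtm)

    terms = vectorizer.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = [terms[i] for i in topic.argsort()[-6:][::-1]]
        print(f"topic {k}: {', '.join(top)}")  # reviewed by experts to devise preventive actions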

    Download full text (pdf)
    fulltext
  • 44. Lokan, Chris
    et al.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Investigating the use of duration-based moving windows to improve software effort prediction: A replicated study2014In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 56, no 9, p. 1063-1075Article in journal (Refereed)
    Abstract [en]

    Context: Most research in software effort estimation has not considered chronology when selecting projects for training and testing sets. A chronological split represents the use of a project's starting and completion dates, such that any model that estimates effort for a new project p only uses as training data projects that were completed prior to p's start. Four recent studies investigated the use of chronological splits, using moving windows wherein only the most recent projects completed prior to a project's starting date were used as training data. The first three studies (S1-S3) found some evidence in favor of using windows; they all defined window sizes as fixed numbers of recent projects. In practice, we suggest that estimators think in terms of elapsed time rather than the size of the data set when deciding which projects to include in a training set. In the fourth study (S4) we showed that the use of windows based on duration can also improve estimation accuracy. Objective: This paper's contribution is to extend S4 using an additional dataset, and to also investigate the effect on accuracy when using moving windows of various durations. Method: Stepwise multivariate regression was used to build prediction models, using all available training data, and also using windows of various durations to select training data. Accuracy was compared based on absolute residuals and MREs; the Wilcoxon test was used to check the statistical significance of differences between results. Accuracy was also compared against estimates derived from windows containing fixed numbers of projects. Results: Neither fixed size nor fixed duration windows provided superior estimation accuracy in the new data set. Conclusions: Contrary to intuition, our results suggest that it is not always beneficial to exclude old data when estimating effort for new projects. When windows are helpful, windows based on duration are effective.
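
    An illustrative sketch of a duration-based moving window: when estimating effort for a new project p, only projects completed within a fixed number of days before p's start are used to train the regression model. The column names, single predictor (size), window length, and demo data are assumptions, not the paper's data sets or its stepwise procedure.

    import pandas as pd
    import statsmodels.api as sm

    def window_training_set(projects: pd.DataFrame, p_start: pd.Timestamp, window_days: int) -> pd.DataFrame:
        """Projects completed before p_start but no more than window_days earlier."""
        completed_before = projects["completion_date"] < p_start
        recent_enough = projects["completion_date"] >= p_start - pd.Timedelta(days=window_days)
        return projects[completed_before & recent_enough]

    def estimate_effort(projects: pd.DataFrame, new_project: dict, window_days: int = 730) -> float:
        train = window_training_set(projects, new_project["start_date"], window_days)
        X = sm.add_constant(train[["size"]])   # e.g., size in function points
        model = sm.OLS(train["effort"], X).fit()
        x_new = pd.DataFrame({"const": [1.0], "size": [new_project["size"]]})
        return float(model.predict(x_new)[0])

    # Tiny invented demo data set
    projects = pd.DataFrame({
        "completion_date": pd.to_datetime(["2012-01-10", "2012-06-15", "2013-01-20"]),
        "size": [120, 200, 150],
        "effort": [900, 1500, 1100],   # person-hours
    })
    new_project = {"start_date": pd.Timestamp("2013-06-01"), "size": 180}
    print(round(estimate_effort(projects, new_project), 1))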

  • 45.
    Madeyski, Lech
    et al.
    Wrocław University of Science and Technology, POL.
    Kitchenham, Barbara Ann
    Keele University, GBR.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Introduction to the special section on Enhancing Credibility of Empirical Software Engineering2018In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 99, p. 118-119Article in journal (Other academic)
  • 46.
    Madeyski, Lech
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Lewowski, Tomasz
    Wroclaw University of Science and Technology, POL.
    Detecting code smells using industry-relevant data2023In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 155, article id 107112Article in journal (Refereed)
    Abstract [en]

    Context: Code smells are patterns in source code associated with an increased defect rate and a higher maintenance effort than usual, but without a clear definition. Code smells are often detected using rules hard-coded in detection tools. Such rules are often set arbitrarily or derived from data sets tagged by reviewers without the necessary industrial know-how. Conclusions from studying such data sets may be unreliable or even harmful, since algorithms may achieve higher values of performance metrics on them than on models tagged by experts, despite not being industrially useful. Objective: Our goal is to investigate the performance of various machine learning algorithms for automated code smell detection, trained on a code smell data set (MLCQ) derived from actively developed and industry-relevant projects and from reviews performed by experienced software developers. Method: We assign the severity of the smell to the code sample according to a consensus between the severities assigned by the reviewers, use the Matthews Correlation Coefficient (MCC) as our main performance metric to account for the entire confusion matrix, and compare the median value to account for non-normal distributions of performance. We compare 6720 models built using eight machine learning techniques. The entire process is automated and reproducible. Results: The performance of the compared techniques depends heavily on the analyzed smell. The median value of our performance metric for the best algorithm was 0.81 for Long Method, 0.31 for Feature Envy, 0.51 for Blob, and 0.57 for Data Class. Conclusions: Random Forest and Flexible Discriminant Analysis performed the best overall, but in most cases the performance difference between them and the median algorithm was no more than 10% of the latter. The performance results were stable over multiple iterations. Although the F-score omits one quadrant of the confusion matrix (and thus may differ from MCC), in code smell detection the actual differences are minimal. © 2022 Elsevier B.V.
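
    A minimal sketch of the main performance metric used above: the Matthews Correlation Coefficient takes all four cells of the confusion matrix into account, unlike the F-score, which ignores true negatives. The confusion-matrix counts in the example are invented.

    from math import sqrt

    def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
        """Matthews Correlation Coefficient; 1 = perfect, 0 = random, -1 = inverse."""
        denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        return (tp * tn - fp * fn) / denom if denom else 0.0

    # Example: a smell detector's confusion matrix on a hypothetical held-out set
    print(round(mcc(tp=40, tn=120, fp=15, fn=25), 2))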

  • 47.
    Manzano, Martí
    et al.
    Universitat Politècnica de Catalunya, ESP.
    Ayala, Claudia P.
    Universitat Politècnica de Catalunya, ESP.
    Gómez, Cristina
    Universitat Politècnica de Catalunya, ESP.
    Abherve, Antonin
    Softeam Group, FRA.
    Franch, Xavier
    Universitat Politècnica de Catalunya, ESP.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A Method to Estimate Software Strategic Indicators in Software Development: An Industrial Application2021In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 129, article id 106433Article in journal (Refereed)
    Abstract [en]

    Context: Exploiting software-development-related data from software-development-intensive organizations to support tactical and strategic decision making is a challenge. Combining data-driven approaches with expert knowledge has been highlighted as a sensible approach for guiding software-development-intensive organizations toward sound decision-making improvements. However, most of the existing proposals lack important aspects that hinder their industrial uptake, such as customization guidelines to fit the proposals to other contexts and/or automatic or semi-automatic data collection support for deploying them in a real organization. As a result, existing proposals are rarely used in the industrial context. Objective: Support software-development-intensive organizations with guidance and tools for exploiting software-development-related data and expert knowledge to improve their decision making. Method: We have developed a novel method called SESSI (Specification and Estimation of Software Strategic Indicators), which was articulated from industrial experiences with Nokia, Bittium, Softeam and iTTi in the context of the Q-Rapids European project, following a design science approach. As part of the industrial summative evaluation, we performed the first case study focused on the application of the method. Results: We detail the phases and steps of the SESSI method and illustrate its application in the development of ModelioNG, a software product of the Modeliosoft development firm. Conclusion: The application of the SESSI method in the context of the ModelioNG case study provided us with useful feedback to improve the method and showed that applying the method was feasible in this context. © 2020 Elsevier B.V.

  • 48.
    Marculescu, Bogdan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Poulding, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Torkar, Richard
    Chalmers, Gothenburg, Sweden.;Univ Gothenburg, Gothenburg, Sweden..
    Tester interactivity makes a difference in search-based software testing: A controlled experiment2016In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 78, p. 66-82Article in journal (Refereed)
    Abstract [en]

    Context: Search-based software testing promises to provide users with the ability to generate high-quality test cases, and hence increase product quality, with a minimal increase in the time and effort required. The development of the Interactive Search-Based Software Testing (ISBST) system was motivated by a previous study investigating the application of search-based software testing (SBST) in an industrial setting. ISBST allows users to interact with the underlying SBST system, guiding the search and assessing the results. An industrial evaluation indicated that the ISBST system could find test cases that are not created by testers employing manual techniques. The validity of the evaluation was threatened, however, by the low number of participants. Objective: This paper presents a follow-up study, to provide a more rigorous evaluation of the ISBST system. Method: To assess the ISBST system, a two-way crossover controlled experiment was conducted with 58 students taking a Verification and Validation course. The NASA Task Load Index (NASA-TLX) was used to assess the workload experienced by the participants in the experiment. Results: The experimental results validated the hypothesis that the ISBST system generates test cases that are not found by the same participants employing manual testing techniques. A follow-up laboratory experiment also investigated the importance of interaction in obtaining the results. In addition to this main result, the subjective workload was assessed for each participant by means of the NASA-TLX tool. The evaluation showed that, while the ISBST system required more effort from the participants, they achieved the same performance. Conclusions: The paper provides evidence that the ISBST system generates test cases that are not found by manual techniques, and that interaction plays an important role in achieving that result. (C) 2016 Elsevier B.V. All rights reserved.

  • 49. Martins, Luiz Eduardo G.
    et al.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Requirements engineering for safety-critical systems: A systematic literature review2016In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 75, p. 71-89Article in journal (Refereed)
    Abstract [en]

    Context: Safety-Critical Systems (SCS) are becoming increasingly present in our society. A considerable amount of research effort has been invested into improving the SCS requirements engineering process, as it is critical to the successful development of SCS and, in particular, to the engineering of safety aspects. Objective: This article aims to investigate which approaches have been proposed to elicit, model, specify and validate safety requirements in the context of SCS, as well as to what extent such approaches have been validated in industrial settings. The paper also investigates how the usability and usefulness of the reported approaches have been explored, and to what extent they enable requirements communication among the development project/team actors in the development of SCS. Method: We conducted a systematic literature review by selecting 151 papers published between 1983 and 2014. The research methodology to conduct the SLR was based on the guidelines proposed by Kitchenham and Biolchini. Results: The results of this systematic review should encourage further research into the design of studies to improve requirements engineering for SCS, particularly to enable the communication of safety requirements among the project team actors, and the adoption of other hazard and accident models. The presented results point to the need for more industry-oriented studies, particularly with more participation of practitioners in the validation of new approaches. Conclusion: The most relevant findings from this review and their implications for further research are as follows: integration between the requirements engineering and safety engineering areas; dominance of traditional approaches; early mortality of new approaches; need for industry validation; lack of evidence for the usefulness and usability of most approaches; and the lack of studies that investigate how to improve the communication process throughout the lifecycle. Based on the findings, we suggest a research agenda for the research community and offer advice to SCS practitioners. (C) 2016 Elsevier B.V. All rights reserved.

  • 50.
    Mendes, Fabiana
    et al.
    University of Oulu, FIN.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Salleh, Norsaremah
    IIUM, P.O., MYS.
    Oivo, Markku
    University of Oulu, FIN.
    Insights on the relationship between decision-making style and personality in software engineering2021In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 136, article id 106586Article in journal (Refereed)
    Abstract [en]

    Context: Software development involves many activities, and decision making is an essential one. Various factors can impact a decision-making process, and by understanding such factors, one can improve the process. Since people are the ones making decisions, some human-related aspects are amongst those influencing factors. One such aspect is the decision maker's personality. Objective: This research investigates the relationship between decision-making style and personality within the context of software project development. Method: We conducted a survey in a population of Brazilian software engineers to gather data on their personality and decision-making style. Results: Data from 63 participants were gathered and resulted in the identification of seven statistically significant correlations between decision-making style and personality (personality factor and personality facets). Furthermore, we built a regression model in which decision-making style (DMS) was the response variable and the personality factors were the independent variables. The backward elimination procedure selected only agreeableness, which explains 4.2% of the DMS variation. The model accuracy was evaluated and deemed adequate. Regarding the moderation effect of demographic variables (age, educational level, experience, and role) on the relationship between DMS and Agreeableness, the analysis showed that only software engineers' role has such an effect. Conclusion: This paper contributes toward understanding the relationship between DMS and personality. Results show that the personality variable agreeableness can explain the variation in decision-making style. Furthermore, someone's role in a software development project can impact the strength of the relationship between DMS and agreeableness. © 2021 Elsevier B.V.
