  • 1.
    Abheeshta, Putta
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparative Analysis of Software Development Practices across Software Organisations: India and Sweden. 2016. Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. System development methodologies (SDMs) have been an area of intensive research in the field of software engineering. Different software organisations adopt different development methodologies and use different development practices. The frequency with which development practices are used, and the acceptance factors that drive the adoption of a development methodology, are crucial for software organisations, and both differ across geographical locations. The literature reports many challenges arising from mismatched development practices when organisations collaborate in distributed development, yet little research has examined the differences in development practices and in the acceptance factors for adopting a particular development methodology.

    Objectives. The primary objectives of the research are to a) find the differences in (i) practice usage and (ii) acceptance factors such as organisational, social and cultural factors, and b) explore the reasons for the differences and investigate the consequences of such differences when organisations located in India and Sweden collaborate.

    Methods. A literature review was conducted by searching scientific databases to identify common agile and plan-driven development practices and acceptance theories for development methodologies. A survey was conducted across organisations located in India and Sweden to find the usage frequency of development practices and acceptance factors. Ten interviews were conducted with software practitioners from organisations located in India and Sweden to investigate the reasons for the differences and their consequences. Evidence from the literature was used to support the results collected from the interviews.

    Results. The survey shows that organisations in India adopted plan-driven practices more frequently than those in Sweden, while agile practices were adopted more frequently in Sweden than in India. The number of organisations adopting "pure agile" methodologies was significantly higher in Sweden. Significant differences were found across acceptance factors such as cultural, organisational, image and career factors between India and Sweden. Cultural, social, human, business and organisational factors are responsible for these differences in development practices and acceptance factors. Challenges related to communication, coordination and control were found to result from the differences when collaborating between Indian and Swedish sites.

    Conclusions. The study underlines the importance of identifying the frequency of development practices and the acceptance factors responsible for the adoption of development methodologies in software organisations; a mismatch between these practices leads to various challenges. The study draws insights into non-technical factors such as cultural, human, organisational, business and social factors that matter when organisations collaborate; variations across these factors lead to coordination, communication and control issues. Keywords: Development Practices, Agile Development, Plan Driven Development, Acceptance Factors, Global Software Development.

  • 2. Afzal, Wasif
    et al.
    Ghazi, Ahmad Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Itkonen, Juha
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Andrews, Anneliese
    Bhatti, Khurram
    An experiment on the effectiveness and efficiency of exploratory testing. 2015. In: Empirical Software Engineering, ISSN 1382-3256, Vol. 20, no. 3, pp. 844-878. Article in journal (Refereed).
    Abstract [en]

    The exploratory testing (ET) approach is commonly applied in industry, but lacks scientific research. The scientific community needs quantitative results on the performance of ET taken from realistic experimental settings. The objective of this paper is to quantify the effectiveness and efficiency of ET vs. testing with documented test cases (test case based testing, TCT). We performed four controlled experiments where a total of 24 practitioners and 46 students performed manual functional testing using ET and TCT. We measured the number of identified defects in the 90-minute testing sessions, the detection difficulty, severity and types of the detected defects, and the number of false defect reports. The results show that ET found a significantly greater number of defects. ET also found significantly more defects across different levels of detection difficulty, defect types and severity levels. However, the two testing approaches did not differ significantly in terms of the number of false defect reports submitted. We conclude that ET was more efficient than TCT in our experiment. ET was also more effective than TCT when detection difficulty, type of defects and severity levels are considered. The two approaches are comparable when it comes to the number of false defect reports submitted.
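
    As a rough illustration of how such defect-count comparisons are typically analysed, the sketch below applies a Mann-Whitney U test to per-session defect counts. The abstract does not name the statistical test used, and the numbers are invented, so this is only a hedged example of the analysis style, not the study's actual data or procedure.

      # Hedged sketch: comparing defect counts from ET and TCT sessions with a
      # non-parametric test. All numbers are illustrative, not the study's data.
      from scipy.stats import mannwhitneyu

      et_defects = [7, 9, 6, 8, 10, 7, 9]   # hypothetical exploratory testing sessions
      tct_defects = [4, 5, 6, 3, 5, 4, 6]   # hypothetical test-case-based sessions

      stat, p_value = mannwhitneyu(et_defects, tct_defects, alternative="two-sided")
      print(f"U = {stat}, p = {p_value:.4f}")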

  • 3. Afzal, Wasif
    et al.
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards benchmarking feature subset selection methods for software fault prediction. 2016. In: Studies in Computational Intelligence, Springer, 2016, Vol. 617, pp. 33-58. Chapter in book (Refereed).
    Abstract [en]

    Despite the general acceptance that software engineering datasets often contain noisy, irrelevant or redundant variables, very few benchmark studies of feature subset selection (FSS) methods on real-life data from software projects have been conducted. This paper provides an empirical comparison of state-of-the-art FSS methods: information gain attribute ranking (IG); Relief (RLF); principal component analysis (PCA); correlation-based feature selection (CFS); consistency-based subset evaluation (CNS); wrapper subset evaluation (WRP); and an evolutionary computation method, genetic programming (GP), on five fault prediction datasets from the PROMISE data repository. For all the datasets, the area under the receiver operating characteristic curve—the AUC value averaged over 10-fold cross-validation runs—was calculated for each FSS method-dataset combination before and after FSS. Two diverse learning algorithms, C4.5 and naïve Bayes (NB), are used to test the attribute sets given by each FSS method. The results show that although there are no statistically significant differences between the AUC values for the different FSS methods for both C4.5 and NB, a smaller set of FSS methods (IG, RLF, GP) consistently select fewer attributes without degrading classification accuracy. We conclude that in general, FSS is beneficial as it helps improve classification accuracy of NB and C4.5. There is no single best FSS method for all datasets but IG, RLF and GP consistently select fewer attributes without degrading classification accuracy within statistically significant boundaries. © Springer International Publishing Switzerland 2016.
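
    To make the evaluation design concrete, the sketch below shows one FSS-method/learner combination in the spirit of the study: rank attributes, keep a subset, and average AUC over 10-fold cross-validation with naive Bayes. It uses scikit-learn's mutual information as a stand-in for information gain and a synthetic dataset instead of the PROMISE data, so it is an assumption-laden illustration rather than the paper's setup.

      # Hedged sketch: feature subset selection followed by AUC averaged over
      # 10-fold cross-validation. Synthetic data; k=10 is an arbitrary choice.
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.naive_bayes import GaussianNB
      from sklearn.pipeline import make_pipeline

      X, y = make_classification(n_samples=500, n_features=40, n_informative=8, random_state=0)

      pipeline = make_pipeline(
          SelectKBest(score_func=mutual_info_classif, k=10),  # the FSS step
          GaussianNB(),                                       # learner judging the selected subset
      )
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      auc = cross_val_score(pipeline, X, y, cv=cv, scoring="roc_auc")
      print(f"Mean AUC over 10 folds: {auc.mean():.3f}")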

  • 4.
    Aivars, Sablis
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Benefits of transactive memory systems in large-scale development. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 80 credits / 120 HE credits. Student thesis.
    Abstract [en]

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise as one of the critical resources for high-quality work.

    Objectives. We aim at understanding whether software teams in different contexts develop transactive memory systems (TMS) and whether a well-developed TMS leads to performance benefits, as suggested by research conducted in other knowledge-intensive disciplines. Because multiple factors may influence the development of TMS, we also, based on related TMS literature, focus on task allocation strategies, task characteristics and management decisions regarding the project structure, team structure and team composition.

    Methods. We use the data from two large-scale distributed development companies and 9 teams, including quantitative data collected through a survey and qualitative data from interviews to measure transactive memory systems and their role in determining team performance. We measure teams’ TMS with a latent variable model. Finally, we use focus group interviews to analyze different organizational practices with respect to team management, as a set of decisions based on two aspects: team structure and composition, and task allocation.

    Results. Data from two companies and 9 teams are analyzed, and a positive influence of well-developed TMS on team performance is found. We found that in large-scale software development, teams need not only a well-developed internal TMS but also a well-developed and effective external TMS. Furthermore, we identified practices that help or hinder the development of TMS in large-scale projects.

    Conclusions. Our findings suggest that teams working in large-scale software development can achieve performance benefits if transactive memory practices within the team are supported with networking practices in the organization. 
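
    The latent variable measurement mentioned in the Methods above could, in spirit, look like the single-factor sketch below: survey items are assumed to reflect one underlying TMS construct, and each respondent gets a factor score. The items, responses and one-factor structure are hypothetical; the thesis's actual instrument and model are not reproduced here.

      # Hedged sketch: scoring a latent construct (e.g. TMS) from simulated
      # Likert-style survey items with a one-factor model.
      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(0)
      n_respondents, n_items = 60, 9
      latent = rng.normal(size=(n_respondents, 1))  # simulated underlying construct
      items = np.clip(np.round(3 + latent + 0.5 * rng.normal(size=(n_respondents, n_items))), 1, 5)

      fa = FactorAnalysis(n_components=1, random_state=0)
      scores = fa.fit_transform(items)              # one latent score per respondent
      print("Item loadings:", np.round(fa.components_.ravel(), 2))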

  • 5.
    Akkineni, Srinivasu
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    The impact of RE process factors and organizational factors during alignment between RE and V&V: Systematic Literature Review and Survey. 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context: Requirements engineering (RE) and verification and validation (V&V) need to be integrated to assure the successful development of a software project. Involving both competences in the early stages of the project supports products in meeting customer expectations regarding quality and functionality, and this quality can be achieved by aligning RE and V&V. Organizations follow different practices, concerning for example requirements, verification, validation, control and tools, to achieve alignment and to address the different challenges faced during the alignment between RE and V&V. However, studies are needed to understand the alignment practices, challenges and factors that can enable successful alignment between RE and V&V.

    Objectives: In this study, an exploratory investigation is carried out to understand the impact of RE process factors and organizational factors on the alignment between RE and V&V. The main objectives of this study are:

    1. To find the list of RE practices that facilitate alignment between RE and V&V.
    2. To categorize RE practices with respect to their requirement phases.
    3. To find the list of RE process and organizational factors that influence alignment between RE and V&V, along with their impact.
    4. To identify the challenges that are faced during the alignment between RE and V&V.
    5. To obtain the list of challenges that are addressed by RE practices during the alignment between RE and V&V.

    Methods: In this study, a systematic literature review (SLR) is conducted using a snowballing procedure to identify relevant information about RE practices, challenges, RE process factors and organizational factors. The studies were captured from the Engineering Village database. A rigor and relevance analysis is performed to assess the quality of the studies obtained through the SLR. Further, a questionnaire intended for an industrial survey was prepared from the gathered literature and distributed to practitioners from the software industry in order to collect empirical information for this study. Thereafter, the data obtained from the industrial survey was analyzed using statistical analysis and a chi-square significance test.

    Results: 20 studies relevant to this study were identified through the SLR. After analyzing the obtained studies, the lists of RE process factors, organizational factors, challenges and RE practices during alignment between RE and V&V were gathered. Thereupon, an industrial survey based on the obtained literature was conducted, which received 48 responses. The alignment between RE and V&V is affected by RE process factors and organizational factors, which was also confirmed by the respondents of the survey. Moreover, this study identifies additional RE process factors and organizational factors that play a role during the alignment between RE and V&V, along with their impact. Another contribution is the mapping of previously unaddressed challenges to the RE practices obtained from the literature. Additionally, a validation of the categorized RE practices with respect to their requirement phases is carried out.

    Conclusions: To conclude, the results obtained from this study will help practitioners gain more insight into the alignment between RE and V&V. This study identified the impact of RE process factors and organizational factors on the alignment between RE and V&V, along with the importance of the challenges faced during the alignment. This study also addressed previously unaddressed challenges through the RE practices obtained from the literature. Respondents of the survey believe that many RE process and organizational factors have a negative impact on the alignment between RE and V&V depending on the size of the organization. In addition, the results for applying RE practices at different requirement phases were validated through the survey. Practitioners can identify the benefits from this research, and researchers can extend this study to the remaining alignment practices.
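
    The chi-square significance testing mentioned in the Methods above generally amounts to testing independence in a contingency table of categorized survey answers. The sketch below shows that pattern with invented counts (perceived factor impact vs. organization size); it is not the thesis's actual data or categories.

      # Hedged sketch: chi-square test of independence on categorized survey answers.
      from scipy.stats import chi2_contingency

      #              negative  neutral  positive   (perceived impact on RE-V&V alignment)
      observed = [
          [12,       5,       3],   # small organizations (hypothetical counts)
          [ 6,       9,      13],   # large organizations (hypothetical counts)
      ]
      chi2, p_value, dof, expected = chi2_contingency(observed)
      print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")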

  • 6.
    Alahyari, Hiva
    et al.
    Chalmers; Göteborgs Universitet, SWE.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A study of value in agile software development organizations. 2017. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 125, pp. 271-288. Article in journal (Refereed).
    Abstract [en]

    The Agile manifesto focuses on the delivery of valuable software. In Lean, the principles emphasise value, where every activity that does not add value is seen as waste. Despite the strong focus on value, and that the primary critical success factor for software intensive product development lies in the value domain, no empirical study has investigated specifically what value is. This paper presents an empirical study that investigates how value is interpreted and prioritised, and how value is assured and measured. Data was collected through semi-structured interviews with 23 participants from 14 agile software development organisations. The contribution of this study is fourfold. First, it examines how value is perceived amongst agile software development organisations. Second, it compares the perceptions and priorities of the perceived values by domains and roles. Third, it includes an examination of what practices are used to achieve value in industry, and what hinders the achievement of value. Fourth, it characterises what measurements are used to assure, and evaluate value-creation activities.

  • 7. Alegroth, Emil
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kolstrom, Pirjo
    Maintenance of automated test suites in industry: An empirical study on Visual GUI Testing. 2016. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 73, pp. 66-80. Article in journal (Refereed).
    Abstract [en]

    Context: Verification and validation (V&V) activities make up 20-50% of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs but available research only provides limited empirical data from industrial practice about the maintenance costs of automated tests and what factors affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported where interviews about, and empirical work with, Visual GUI Testing is performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors are observed that affect maintenance, e.g. tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance but also that frequent maintenance is less costly than infrequent, big bang maintenance. In addition a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower overall software development costs of a project while also having positive effects on software quality. However, maintenance costs can still be considerable and the less time a company currently spends on manual testing, the more time is required before positive, economic, ROI is reached after automation. (C) 2016 Elsevier B.V. All rights reserved.
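
    The break-even reasoning behind the ROI model mentioned above can be sketched with simple arithmetic: automation pays off once the accumulated savings per test cycle exceed the initial scripting effort. The actual model in the paper is more detailed; all numbers below are invented.

      # Hedged sketch: back-of-the-envelope break-even point for test automation.
      automation_setup_cost = 400.0   # hours to develop the automated suite (hypothetical)
      maintenance_per_cycle = 10.0    # hours of script maintenance per test cycle (hypothetical)
      manual_cost_per_cycle = 60.0    # hours to run the same tests manually (hypothetical)

      saving_per_cycle = manual_cost_per_cycle - maintenance_per_cycle
      cycles_to_break_even = automation_setup_cost / saving_per_cycle
      print(f"Positive ROI after about {cycles_to_break_even:.1f} test cycles")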

  • 8. Alegroth, Emil
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ryrholm, Lisa
    Visual GUI testing in practice: challenges, problems and limitations. 2015. In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 20, no. 3, pp. 694-744. Article in journal (Refereed).
    Abstract [en]

    In today’s software development industry, high-level tests such as Graphical User Interface (GUI) based system and acceptance tests are mostly performed with manual practices that are often costly, tedious and error prone. Test automation has been proposed to solve these problems but most automation techniques approach testing from a lower level of system abstraction. Their suitability for high-level tests has therefore been questioned. High-level test automation techniques such as Record and Replay exist, but studies suggest that these techniques suffer from limitations, e.g. sensitivity to GUI layout or code changes, system implementation dependencies, etc. Visual GUI Testing (VGT) is an emerging technique in industrial practice with perceived higher flexibility and robustness to certain GUI changes than previous high-level (GUI) test automation techniques. The core of VGT is image recognition which is applied to analyze and interact with the bitmap layer of a system’s front end. By coupling image recognition with test scripts, VGT tools can emulate end user behavior on almost any GUI-based system, regardless of implementation language, operating system or platform. However, VGT is not without its own challenges, problems and limitations (CPLs) but, like for many other automated test techniques, there is a lack of empirically-based knowledge of these CPLs and how they impact industrial applicability. Crucially, there is also a lack of information on the cost of applying this type of test automation in industry. This manuscript reports an empirical, multi-unit case study performed at two Swedish companies that develop safety-critical software. It studies their transition from manual system test cases into tests automated with VGT. In total, four different test suites that together include more than 300 high-level system test cases were automated for two multi-million lines of code systems. The results show that the transitioned test cases could find defects in the tested systems and that all applicable test cases could be automated. However, during these transition projects a number of hurdles had to be addressed; a total of 58 different CPLs were identified and then categorized into 26 types. We present these CPL types and an analysis of the implications for the transition to and use of VGT in industrial software development practice. In addition, four high-level solutions are presented that were identified during the study, which would address about half of the identified CPLs. Furthermore, collected metrics on cost and return on investment of the VGT transition are reported together with information about the VGT suites’ defect finding ability. Nine of the identified defects are reported, 5 of which were unknown to testers with extensive experience from using the manual test suites. The main conclusion from this study is that even though there are many challenges related to the transition and usage of VGT, the technique is still valuable, flexible and considered cost-effective by the industrial practitioners. The presented CPLs also provide decision support in the use and advancement of VGT and potentially other automated testing techniques similar to VGT, e.g. Record and Replay.
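
    The core VGT loop described above (locate a bitmap on the rendered GUI, interact, assert on what the user actually sees) can be sketched as below, using pyautogui as a stand-in image-recognition library; the studies used dedicated VGT tools, and the reference images are hypothetical.

      # Hedged sketch of a VGT-style script: drive and verify the system through its
      # rendered GUI with image recognition. 'login_button.png' and 'welcome_banner.png'
      # are hypothetical reference bitmaps.
      import time
      import pyautogui

      # Interact: find the login button bitmap on screen and click its centre.
      # Depending on the pyautogui version, a missing image returns None or raises.
      try:
          button = pyautogui.locateCenterOnScreen("login_button.png")
      except pyautogui.ImageNotFoundException:
          button = None
      assert button is not None, "Login button not visible on screen"
      pyautogui.click(button)

      # Assert: the expected banner bitmap should appear after the click.
      time.sleep(2)  # crude wait; real VGT tools poll with a timeout
      try:
          banner = pyautogui.locateOnScreen("welcome_banner.png")
      except pyautogui.ImageNotFoundException:
          banner = None
      assert banner is not None, "Expected welcome banner missing after login"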

  • 9.
    Ali, Nauman Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Is effectiveness sufficient to choose an intervention? Considering resource use in empirical software engineering. 2016. In: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM 2016, Ciudad Real, Spain, September 8-9, 2016, article 54. Conference paper (Refereed).
  • 10.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    FLOW-assisted value stream mapping in the early phases of large-scale software development. 2016. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 111, pp. 213-227. Article in journal (Refereed).
    Abstract [en]

    Value stream mapping (VSM) has been successfully applied in the context of software process improvement. However, its current adaptations from Lean manufacturing focus mostly on the flow of artifacts and have taken no account of the essential information flows in software development. A solution specifically targeted toward information flow elicitation and modeling is FLOW. This paper aims to propose and evaluate the combination of VSM and FLOW to identify and alleviate information- and communication-related challenges in large-scale software development. Using case study research, FLOW-assisted VSM was applied for a large product at Ericsson AB, Sweden. Both the process and the outcome of FLOW-assisted VSM have been evaluated from the practitioners’ perspective. It was noted that FLOW helped to systematically identify challenges and improvements related to information flow. Practitioners responded favorably to the use of VSM and FLOW, acknowledged the realistic nature and impact of the improvements on software quality, and found the overview of the entire process using the FLOW notation very useful. The combination of FLOW and VSM presented in this study was successful in systematically uncovering issues and characterizing their solutions, indicating their practical usefulness for waste removal with a focus on information flow related issues.

  • 11.
    Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evaluating strategies for study selection in systematic literature studies. 2014. In: ESEM '14 Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ACM, 2014, article 45. Conference paper (Refereed).
    Abstract [en]

    Context: The study selection process is critical to improve the reliability of secondary studies. Goal: To evaluate the selection strategies commonly employed in secondary studies in software engineering. Method: Building on these strategies, a study selection process was formulated and evaluated in a systematic review. Results: The selection process used a more inclusive strategy than the one typically used in secondary studies, which led to additional relevant articles. Conclusions: The results indicate that a good-enough sample could be obtained by following a less inclusive but more efficient strategy, if the articles identified as relevant for the study are a representative sample of the population, and there is a homogeneity of results and quality of the articles.

  • 12.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Chalmers, SWE.
    On the long-term use of visual GUI testing in industrial practice: a case study. 2017. In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 22, no. 6, pp. 2937-2971. Article in journal (Refereed).
    Abstract [en]

    Visual GUI Testing (VGT) is a tool-driven technique for automated GUI-based testing that uses image recognition to interact with and assert the correctness of the behavior of a system through its GUI as it is shown to the user. The technique’s applicability, e.g. defect-finding ability, and feasibility, e.g. time to positive return on investment, have been shown through empirical studies in industrial practice. However, there is a lack of studies that evaluate the usefulness and challenges associated with VGT when used long-term (years) in industrial practice. This paper evaluates how VGT was adopted, applied and why it was abandoned at the music streaming application development company, Spotify, after several years of use. A qualitative study with two workshops and five well chosen employees is performed at the company, supported by a survey, which is analyzed with a grounded theory approach to answer the study’s three research questions. The interviews provide insights into the challenges, problems and limitations, but also benefits, that Spotify experienced during the adoption and use of VGT. However, due to the technique’s drawbacks, VGT has been abandoned for a new technique/framework, simply called the Test interface. The Test interface is considered more robust and flexible for Spotify’s needs but has several drawbacks, including that it does not test the actual GUI as shown to the user like VGT does. From the study’s results it is concluded that VGT can be used long-term in industrial practice but it requires organizational change as well as engineering best practices to be beneficial. Through synthesis of the study’s results, and results from previous work, a set of guidelines are presented that aim to aid practitioners to adopt and use VGT in industrial practice. However, due to the abandonment of the technique, future research is required to analyze in what types of projects the technique is, and is not, long-term viable. To this end, we also present Spotify’s Test interface solution for automated GUI-based testing and conclude that it has its own benefits and drawbacks.

  • 13.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards a mapping of software technical debt onto testware. 2017. In: Proceedings - 43rd Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2017, Institute of Electrical and Electronics Engineers Inc., 2017, pp. 404-411, article 8051379. Conference paper (Refereed).
    Abstract [en]

    Technical Debt (TD) is a metaphor used to explain the negative impacts that sub-optimal design decisions have in the long-term perspective of a software project. Although TD is acknowledged by both researchers and practitioners to have a strong negative impact on software development, its study on Testware has so far been very limited. This gap in knowledge is important to address due to the growing popularity of Testware (scripted automated testing) in software development practice. In this paper we present a mapping analysis that connects 21 well-known, object-oriented software TD items to Testware, establishing them as Testware Technical Debt (TTD) items. The analysis indicates that most software TD items are applicable or observable as TTD items, often in similar form and with roughly the same impact as for software artifacts (e.g. reducing the quality of the produced artifacts, lowering the effectiveness and efficiency of the development process whilst increasing costs). In the analysis, we also identify three types of connections between software TD and TTD items with varying levels of impact and criticality. Additionally, the study finds support for previous research results in which specific TTD items unique to Testware were identified. Finally, the paper outlines several areas of future research into TTD. © 2017 IEEE.

  • 14.
    Alégroth, Emil
    et al.
    Chalmers, SWE.
    Gustafsson, Johan
    SAAB AB, SWE.
    Ivarsson, Henrik
    SAAB AB, SWE.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Replicating Rare Software Failures with Exploratory Visual GUI Testing. 2017. In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 34, no. 5, pp. 53-59, article 8048660. Article in journal (Refereed).
    Abstract [en]

    Saab AB developed software that had a defect that manifested itself only after months of continuous system use. After years of customer failure reports, the defect still persisted, until Saab developed failure replication based on visual GUI testing. © 1984-2012 IEEE.

  • 15.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Matsuki, Shinsuke
    Veriserve Corporation, JPN.
    Vos, Tanja
    Open University of the Netherlands, NLD.
    Akemine, Kinji
    Nippon Telegraph and Telephone Corporation, JPN.
    Overview of the ICST International Software Testing Contest. 2017. In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017, IEEE Computer Society, 2017, pp. 550-551. Conference paper (Refereed).
    Abstract [en]

    In the software testing contest, practitioners and researchers are invited to test their approaches against similar approaches to evaluate their pros and cons and determine which is perceived to be the best. The 2017 iteration of the contest focused on Graphical User Interface-driven testing, which was evaluated on the testing tool TESTONA. The winner of the competition was announced at the closing ceremony of the International Conference on Software Testing, Verification and Validation (ICST), 2017. © 2017 IEEE.

  • 16.
    Amaradri, Anand Srivatsav
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nutalapati, Swetha Bindu
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Continuous Integration, Deployment and Testing in DevOps Environment. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. Owing to a multitude of factors like rapid changes in technology, market needs, and business competitiveness, software companies these days are facing pressure to deliver software rapidly and on a frequent basis. For frequent and faster delivery, companies should be lean and agile in all phases of the software development life cycle. An approach called DevOps, which is based on agile principles has come into play. DevOps bridges the gap between development and operations teams and facilitates faster product delivery. The DevOps phenomenon has gained a wide popularity in the past few years, and several companies are adopting DevOps to leverage its perceived benefits. However, the organizations may face several challenges while adopting DevOps. There is a need to obtain a clear understanding of how DevOps functions in an organization.

    Objectives. The main aim of this study is to provide researchers and software practitioners with a clear understanding of how DevOps works in an organization. The objectives of the study are to identify the benefits of implementing DevOps in organizations where agile development is in practice, the challenges faced by organizations during DevOps adoption, the solutions/mitigation strategies for overcoming those challenges, the DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Methods. A mixed methods approach comprising both qualitative and quantitative research methods is used to accomplish the research objectives. A systematic literature review is conducted to identify the benefits and challenges of DevOps adoption, and the DevOps practices. Interviews are conducted to further validate the SLR findings and to identify the solutions to overcome DevOps adoption challenges, as well as the DevOps practices. The SLR and interview results are mapped, and a survey questionnaire is designed. The survey is conducted to validate the qualitative data and to identify other benefits and challenges of DevOps adoption, solutions to overcome the challenges, DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Results. 31 primary studies relevant to the research are identified for conducting the SLR. After analysing the primary studies, an initial list of the benefits and challenges of DevOps adoption, and the DevOps practices is obtained. Based on the SLR findings, a semi-structured interview questionnaire is designed, and interviews are conducted. The interview data is thematically coded, and a list of the benefits, challenges of DevOps adoption and solutions to overcome them, DevOps practices, and problems faced by DevOps teams is obtained. The survey responses are statistically analysed, and a final list of the benefits of adopting DevOps, the adoption challenges and solutions to overcome them, DevOps practices and problems faced by DevOps teams is obtained.

    Conclusions. Using the mixed methods approach, a final list of the benefits of adopting DevOps, DevOps adoption challenges, solutions to overcome the challenges, practices of DevOps, and the problems faced by DevOps teams during continuous integration, deployment and testing is obtained. The list is clearly elucidated in the document. The final list can aid researchers and software practitioners in obtaining a better understanding regarding the functioning and adoption of DevOps. Also, it has been observed that there is a need for more empirical research in this domain.

  • 17. Ambreen, T.
    et al.
    Ikram, N.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Niazi, M.
    Empirical research in requirements engineering: trends and opportunities. 2016. In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, pp. 1-33. Article in journal (Refereed).
    Abstract [en]

    Requirements engineering (RE), being a foundation of software development, has gained great recognition in the software industry. A number of journals and conferences have published a great amount of RE research in terms of various tools, techniques, methods, and frameworks, with a variety of processes applicable in different software development domains. The plethora of empirical RE research needs to be synthesized to identify trends and future research directions. To represent the state of the art of requirements engineering, along with various trends and opportunities of empirical RE research, we conducted a systematic mapping study to synthesize the empirical work done in RE. We used four major databases, IEEE, ScienceDirect, SpringerLink and ACM, and identified 270 primary studies up to the year 2012. An analysis of the data extracted from the primary studies shows that empirical research work in RE has been increasing since the year 2000. Requirements elicitation with 22 % of the total studies, requirements analysis with 19 % and RE process with 17 % are the major focus areas of empirical RE research. Non-functional requirements were found to be the most researched emerging area. The empirical work in the sub-area of requirements validation and verification is limited and has a decreasing trend. The majority of the studies (50 %) used a case study research method, followed by experiments (28 %), whereas experience reports are few (6 %). A common trend in almost all RE sub-areas is to propose new interventions. The leading intervention types are guidelines, techniques and processes. The interest in empirical RE research is on the rise as a whole. However, the requirements validation and verification area, despite its recognized importance, lacks empirical research at present. Furthermore, requirements evolution and privacy requirements also have little empirical research. These RE sub-areas need the attention of researchers for more empirical research. At present, the focus of empirical RE research is more on proposing new interventions. In the future, there is a need to replicate existing studies as well as to evaluate RE interventions in more real contexts and scenarios. The practitioners’ involvement in RE empirical research needs to be increased so that they share their experiences of using different RE interventions and also inform us about the current requirements-related challenges and issues that they face in their work. © 2016 Springer-Verlag London

  • 18.
    Andersson, Linda
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evaluation of HMI Development for Embedded System Control. 2014. Independent thesis, Basic level (degree of Bachelor). Student thesis.
    Abstract [en]

    Context: Interface development is increasing in complexity, and applications with a lot of functionality that are reliable, understandable and easy to use have to be developed. To be able to compete, the time-to-market has to be short and development has to be cost-effective. The development process is important and there are many aspects that can be improved. The needs of the development and the knowledge among the developers are key factors. Here, code reuse, standardization and the usability of the development tool play an important role and could have a large positive impact on the development process and the quality of the final product. Objectives: A framework for describing important properties of HMI development tools is presented. A representative selection of two development tools is made; the tools are described and, based on the experiences from the case study, their applicability is mapped to the evaluation framework. Methods: Interviews were conducted with HMI developers to get information from the field. Following that, a case study of two different development tools was made to highlight the pros and cons of each tool. Results: The properties presented in the evaluation framework are that the toolkit should be open for multiple platforms, accessible for the developer, support custom templates, require non-extensive coding knowledge and be reusable. The evaluated frameworks show that it is hard to meet all the demands. Conclusions: Finding a well-suited development toolkit is not an easy task. The choice should be made depending on the needs of the HMI applications and the available development resources.

  • 19.
    ANWAR, WALEED
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Quality Characteristics Tested For Mobile Application Development: Literature Review and Empirical Survey. 2015. Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Smartphone use is increasing day by day, and there is a large number of app users. Due to the increased use of apps, the testing of mobile applications should be done correctly and flawlessly to ensure the effectiveness of mobile applications.

  • 20.
    Arnesson, Andreas
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Codename One and PhoneGap, a performance comparison. 2015. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Creating smartphone applications for more than one operating system requires knowledge of several programming languages, more code maintenance, higher development costs and longer development time. To make this easier, cross-platform tools (CPTs) exist. But using a CPT can decrease the performance of the application. Applications with low performance are more likely to get uninstalled, and this makes developers lose income. There are four main CPT approaches: hybrid, interpreter, web and cross-compiler. Each has different advantages and disadvantages. This study will examine the performance difference between two CPTs, Codename One and PhoneGap. Performance measurements of CPU load, memory usage, energy consumption, execution time and application size will be made to compare the CPTs. Whether cross-compilers have better performance than other CPT approaches will also be investigated. An experiment will be conducted in which three applications are created with native Android, Codename One and PhoneGap, and performance measurements will be made. A literature study with research from IEEE and Engineering Village will be conducted on different CPT approaches. PhoneGap performed best with the shortest execution time, least energy consumption and least CPU usage, while Codename One had the smallest application size and least memory usage. The research available on the performance of CPTs is scarce and not thorough. The difference between PhoneGap and Codename One is not big except for writing to SQLite. No basis was found for the statement that cross-compilers have better performance than other CPT approaches.

  • 21.
    Asklund, Ulf
    et al.
    Lund University, SWE.
    Höst, Martin
    Lund University, SWE.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Experiences from Monitoring Effect of Architectural Changes. 2016. In: Software Quality: The Future of Systems- and Software Development / [ed] Winkler, Dietmar; Biffl, Stefan; Bergsmann, Johannes, 2016, pp. 97-108. Conference paper (Refereed).
    Abstract [en]

    A common situation is that an initial architecture has been sufficient in the initial phases of a project, but when the size and complexity of the product increases the architecture must be changed. In this paper experiences are presented from changing an architecture into independent units, providing basic reuse of main functionality although giving higher priority to independence than reuse. An objective was also to introduce metrics in order to monitor the architectural changes. The change was studied in a case-study through weekly meetings with the team, collected metrics, and questionnaires. The new architecture was well received by the development team, who found it to be less fragile. Concerning the metrics for monitoring it was concluded that a high abstraction level was useful for the purpose.

  • 22. Avritzer, Alberto
    et al.
    Beecham, Sarah
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kroll, Josiane
    Menaché, Daniel
    Noll, John
    Paasivaara, Maria
    Extending Survivability Models for Global Software Development with Media Synchronicity Theory. 2015. In: Proceedings of the IEEE 10th International Conference on Global Software Engineering, IEEE Communications Society, 2015, pp. 23-32. Conference paper (Refereed).
    Abstract [en]

    In this paper we propose a new framework to assess survivability of software projects accounting for media capability details as introduced in Media Synchronicity Theory (MST). Specifically, we add to our global engineering framework the assessment of the impact of inadequate conveyance and convergence available in the communication infrastructure selected to be used by the project, on the system ability to recover from project disasters. We propose an analytical model to assess how the project recovers from project disasters related to process and communication failures. Our model is based on media synchronicity theory to account for how information exchange impacts recovery. Then, using the proposed model we evaluate how different interventions impact communication effectiveness. Finally, we parameterize and instantiate the proposed survivability model based on a data gathering campaign comprising thirty surveys collected from senior global software development experts at ICGSE'2014 and GSD'2015.

  • 23.
    Badampudi, Deepika
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards decision-making to choose among different component origins. 2016. Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    Context: The amount of software in solutions provided in various domains is continuously growing. These solutions are a mix of hardware and software solutions, often referred to as software-intensive systems. Companies seek to improve the software development process to avoid delays or cost overruns related to the software development.  

    Objective: The overall goal of this thesis is to improve the software development/building process to provide timely, high quality and cost efficient solutions. The objective is to select the origin of the components (in-house, outsourcing, commercial off-the-shelf (COTS) or open source software (OSS)) that facilitates the improvement. The system can be built of components from one origin or a combination of two or more (or even all) origins. Selecting a proper origin for a component is important to get the most out of a component and to optimize the development.

    Method: It is necessary to investigate the component origins to make decisions to select among different origins. We conducted a case study to explore the existing challenges in software development.  The next step was to identify factors that influence the choice to select among different component origins through a systematic literature review using a snowballing (SB) strategy and a database (DB) search. Furthermore, a Bayesian synthesis process is proposed to integrate the evidence from literature into practice.  

    Results: The results of this thesis indicate that the context of software-intensive systems, such as domain regulations, hinders software development improvement. In addition to in-house development, alternative component origins (outsourcing, COTS, and OSS) are being used for software development. Several factors such as time, cost and license implications influence the selection of component origins. Solutions have been proposed to support the decision-making. However, these solutions consider only a subset of the factors identified in the literature.

    Conclusions: Each component origin has some advantages and disadvantages. Depending on the scenario, one component origin is more suitable than the others. It is important to investigate the different scenarios and suitability of the component origins, which is recognized as future work of this thesis. In addition, the future work is aimed at providing models to support the decision-making process.

  • 24.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Component Decision-making: In-house, OSS, COTS or Outsourcing: A Systematic Literature Review. 2016. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 121, pp. 105-124. Article in journal (Refereed).
    Abstract [en]

    Component-based software systems require decisions on component origins for acquiring components. A component origin is an alternative for where to get a component from. Objective: To identify factors that could influence the decision to choose among different component origins and solutions for decision-making (for example, optimization) in the literature. Method: A systematic review study of the peer-reviewed literature has been conducted. Results: In total we included 24 primary studies. The component origins compared were mainly focused on in-house vs. COTS and COTS vs. OSS. We identified 11 factors affecting or influencing the decision to select a component origin. When component origins were compared, there was little evidence on the relative (either positive or negative) effect of a component origin on the factor. Most of the solutions were proposed for in-house vs. COTS selection, and time, cost and reliability were the most considered factors in the solutions. Optimization models were the most commonly proposed technique used in the solutions. Conclusion: The topic of choosing component origins is a green field for research, and in great need of empirical comparisons between the component origins, as well as of how to decide between different combinations of them.
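
    One simple way to picture the kind of decision support discussed above is a weighted-scoring comparison of origins against a few factors, as sketched below. The factors, weights and ratings are invented for illustration and are not taken from the included studies, which mostly propose richer optimization models.

      # Hedged sketch: toy weighted scoring of component origins against decision factors.
      factors = {"time": 0.4, "cost": 0.35, "reliability": 0.25}  # hypothetical weights

      ratings = {  # hypothetical 1-5 ratings per origin and factor
          "in-house":  {"time": 2, "cost": 2, "reliability": 5},
          "COTS":      {"time": 4, "cost": 3, "reliability": 4},
          "OSS":       {"time": 4, "cost": 5, "reliability": 3},
          "outsource": {"time": 3, "cost": 3, "reliability": 3},
      }
      scores = {
          origin: sum(weight * rating[factor] for factor, weight in factors.items())
          for origin, rating in ratings.items()
      }
      print(scores, "-> best:", max(scores, key=scores.get))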

  • 25.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Franke, Ulrik
    Swedish Institute of Computer Science, SWE.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Cicchetti, Antonio
    Mälardalens högskola, SWE.
    A decision-making process-line for selection of software asset origins and components. 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 135, pp. 88-104. Article in journal (Refereed).
    Abstract [en]

    Selecting sourcing options for software assets and components is an important process that helps companies to gain and keep their competitive advantage. The sourcing options include: in-house, COTS, open source and outsourcing. The objective of this paper is to further refine, extend and validate a solution presented in our previous work. The refinement includes a set of decision-making activities, which are described in the form of a process-line that can be used by decision-makers to build their specific decision-making process. We conducted five case studies in three companies to validate the coverage of the set of decision-making activities. The solution in our previous work was validated in two cases in the first two companies. In the validation, it was observed that no activity in the proposed set was perceived to be missing, although not all activities were conducted and the activities that were conducted were not executed in a specific order. Therefore, the refinement of the solution into a process-line approach increases the flexibility and hence it is better in capturing the differences in the decision-making processes observed in the case studies. The applicability of the process-line was then validated in three case studies in a third company. © 2017 Elsevier Inc.

  • 26.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Bayesian Synthesis for Knowledge Translation in Software Engineering: Method and Illustration. 2016. In: 2016 42nd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), IEEE, 2016. Conference paper (Refereed).
    Abstract [en]

    Systematic literature reviews in software engineering are necessary to synthesize evidence from multiple studies to provide knowledge and decision support. However, synthesis methods are underutilized in software engineering research. Moreover, translation of synthesized data (outcomes of a systematic review) to provide recommendations for practitioners is seldom practiced. The objective of this paper is to introduce the use of Bayesian synthesis in software engineering research, in particular to translate research evidence into practice by providing the possibility to combine contextualized expert opinions with research evidence. We adopted the Bayesian synthesis method from health research and customized it to be used in software engineering research. The proposed method is described and illustrated using an example from the literature. Bayesian synthesis provides a systematic approach to incorporating subjective opinions in the synthesis process, thereby making the synthesis results more suitable to the context in which they will be applied. This facilitates the interpretation and translation of knowledge into action/application. None of the synthesis methods used in software engineering allows for the integration of subjective opinions; hence, using Bayesian synthesis can add a new dimension to the synthesis process in software engineering research.
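
    The core idea of combining contextual expert opinion (a prior) with research evidence (data) can be illustrated with a standard conjugate update, as in the sketch below. This is a generic Beta-Binomial example with invented numbers, not the specific synthesis procedure adapted in the paper.

      # Hedged sketch: conjugate Bayesian update combining an expert prior with evidence.
      alpha_prior, beta_prior = 6, 4      # expert belief: success rate around 0.6 (hypothetical)
      successes, failures = 14, 6         # evidence aggregated from primary studies (hypothetical)

      alpha_post = alpha_prior + successes
      beta_post = beta_prior + failures
      posterior_mean = alpha_post / (alpha_post + beta_post)
      print(f"Posterior: Beta({alpha_post}, {beta_post}), mean = {posterior_mean:.2f}")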

  • 27.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Experiences from Using Snowballing and Database Searches in Systematic Literature Studies. 2015. Conference paper (Refereed).
    Abstract [en]

    Background: Systematic literature studies are commonly used in software engineering. There are two main ways of conducting the searches for these types of studies: snowballing and database searches. In snowballing, the reference lists (backward snowballing - BSB) and citations (forward snowballing - FSB) of relevant papers are reviewed to identify new papers, whereas in a database search, different databases are searched using predefined search strings to identify new papers. Objective: Snowballing has not been used as extensively as database search. Hence it is important to evaluate its efficiency and reliability when used as a search strategy in literature studies. Moreover, it is important to compare it to database searches. Method: In this paper, we applied snowballing in a literature study and reflected on the outcome. We also compared database search with backward and forward snowballing. Database search and snowballing were conducted independently by different researchers. The searches of our literature study were compared with respect to the efficiency and reliability of the findings. Results: Out of the total number of papers found, snowballing identified 83% of the papers in comparison to 46% of the papers for the database search. Snowballing failed to identify a few relevant papers, which potentially could have been addressed by identifying a more comprehensive start set. Conclusion: The efficiency of snowballing is comparable to database search. It can potentially be more reliable than a database search; however, the reliability is highly dependent on the creation of a suitable start set.
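
    The reported coverage figures are simply each strategy's share of the union of relevant papers found by both searches, as the sketch below computes on placeholder paper identifiers (not the study's actual set).

      # Hedged sketch: computing per-strategy coverage of the union of relevant papers.
      snowballing = {"p01", "p02", "p03", "p04", "p05", "p07", "p08", "p09", "p10", "p12"}
      database = {"p02", "p04", "p06", "p09", "p11", "p12"}

      all_relevant = snowballing | database
      print(f"Snowballing: {len(snowballing) / len(all_relevant):.0%}, "
            f"database search: {len(database) / len(all_relevant):.0%}")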

  • 28.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ghazi, Ahmad Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    On Improving Research Methodology Course at Blekinge Institute of Technology2016Conference paper (Refereed)
    Abstract [en]

    The Research Methodology in Software Engineering and Computer Science (RM) course is a compulsory course that must be studied by graduate students at Blekinge Institute of Technology (BTH) prior to undertaking their thesis work. The course is focused on teaching research methods and techniques for data collection and analysis in the fields of Computer Science and Software Engineering. It is intended that the course should help students practically apply appropriate research methods in different courses (in addition to the RM course), including their Master's theses. However, it is believed that there are deficiencies in the course that negatively affect both the course implementation (learning and assessment activities) and the performance of different participants (students, teachers, and evaluators). In this article our aim is to investigate potential deficiencies in the RM course at BTH in order to provide concrete evidence on the deficiencies faced by students, evaluators, and teachers in the course. Additionally, we suggest recommendations for resolving the identified deficiencies. Our findings, gathered through semi-structured interviews with students, teachers, and evaluators in the course, are presented in this article. By identifying a total of twenty-one deficiencies from different perspectives, we found that there are critical deficiencies at different levels within the course. Furthermore, in order to overcome the identified deficiencies, we suggest seven recommendations that may be implemented at different levels within the course and the study program. Our suggested recommendations, if implemented, will help resolve deficiencies in the course, which may lead to improved teaching and learning in the RM course at BTH.

  • 29.
    Barysau, Mikalai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Developers' performance analysis based on code review data: How to perform comparisons of different groups of developers2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Nowadays, more and more IT companies are switching to a distributed development model. This trend has a number of advantages and disadvantages, which researchers study through different aspects of modern code development. One such aspect is code review, which is used by many companies and produces a large amount of data. A number of studies describe different data mining and data analysis approaches that are based on a link between code review data and performance. According to these studies, analysis of code review data can give good insight into development performance and help software companies detect a number of performance issues and improve the quality of their code.

    The main goal of this thesis was to collect reported knowledge about code review data analysis and to implement a solution that helps perform such analysis in a real industrial setting.

    The author used multiple research techniques: a snowballing literature review, a case study, and semi-structured interviews.

    The results of the research comprise a list of code review data metrics extracted from the literature and a software tool for collecting and visualizing the data.

    The literature review showed that, among the literature sources related to code review, relatively few address the topic of this thesis, which exposes a field for future research. Applying the identified metrics showed that most of them can be used in the context of the studied environment. Presentation of the results and interviews with the company's representatives showed that the graphic plots are useful for observing trends and correlations in the development of the company's development sites and help the company improve its performance and decision-making process.

  • 30.
    Berntsson Svensson, Richard
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Regnell, Björn
    Lunds universitet, SWE.
    Is role playing in Requirements Engineering Education increasing learning outcome?2017In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 22, no 4, 475-489 p.Article in journal (Refereed)
    Abstract [en]

    Requirements Engineering has attracted a great deal of attention from researchers and practitioners in recent years. This increasing interest requires academia to provide students with a solid foundation in the subject matter. In Requirements Engineering Education (REE), it is important to cover three fundamental topics: traditional analysis and modeling skills, interviewing skills for requirements elicitation, and writing skills for specifying requirements. REE papers report on using role playing as a pedagogical tool; however, there is a surprising lack of empirical evidence on its utility. In this paper we investigate whether a higher grade in a role playing project has an effect on students' scores in an individual written exam in a Requirements Engineering course. Data are collected from 412 students between the years of 2007 and 2014 at Lund University and Chalmers | University of Gothenburg. The results show that students who received a higher grade in the role playing project scored statistically significantly higher in the written exam compared to the students with a lower role playing project grade. © 2016 Springer-Verlag London

  • 31.
    Berntsson Svensson, Richard
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Taghavianfar, Maryam
    Selecting creativity techniques for creative requirements: An evaluation of four techniques using creativity workshops2015In: 2015 IEEE 23RD INTERNATIONAL REQUIREMENTS ENGINEERING CONFERENCE (RE), IEEE, 2015, 66-75 p.Conference paper (Refereed)
    Abstract [en]

    Requirements engineering is recognized as a creative process where stakeholders jointly discover new creative ideas for innovative and novel products that eventually are expressed as requirements. This paper evaluates four different creativity techniques, namely Hall of Fame, Constraint Removal, Brainstorming, and Idea Box, using creativity workshops with students and industry practitioners. In total, 34 creativity workshops were conducted with 90 students from two universities and 86 industrial practitioners from six companies. The results from this study indicate that Brainstorming generates by far the most ideas, while Hall of Fame generates the most creative ideas. Idea Box generates the fewest ideas, and the fewest creative ideas. Finally, Hall of Fame was the technique that led to the largest number of requirements that were included in future releases of the products.

  • 32.
    Betz, Stefanie
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Andreas, Oberweis
    Rolf, Stephan
    Knowledge transfer in offshore outsourcing software development projects: an analysis of the challenges and solutions from German clients2014In: Expert systems (Print), ISSN 0266-4720, E-ISSN 1468-0394, Vol. 31, no 3Article in journal (Refereed)
    Abstract [en]

    Knowledge transfer is a critical factor in ensuring the success of offshore outsourcing software development projects and is, in many cases, neglected. Compared to in-house or co-located projects, however, such globally distributed projects feature far greater complexity. In addition to language barriers, factors such as cultural differences, time zone variance, distinct methods and practices, as well as unique equipment and infrastructure can all lead to problems that negatively impact knowledge transfer, and as a result, a project's overall success. In order to help minimise such risks to knowledge transfer, we conducted a research study based on expert interviews in six projects. Our study used German clients and focused on offshore outsourcing software development projects. We first identified known problems in knowledge transfer that can occur with offshore outsourcing projects. Then we collected best-practice solutions proven to overcome the types of problems described. Afterward, we conducted a follow-up study to evaluate our findings. In this subsequent stage, we presented our findings to a different group of experts in five projects and asked them to evaluate these solutions and recommendations in terms of our original goal, namely to find ways to minimise knowledge-transfer problems in offshore outsourcing software development projects. Thus, the result of our study is a catalog of evaluated solutions and associated recommendations mapped to the identified problem areas.

  • 33.
    bin Ali, Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Operationalization of lean thinking through value stream mapping with simulation and FLOW2015Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: The continued success of Lean thinking beyond manufacturing has led to an increasing interest to utilize it in software engineering (SE). Value Stream Mapping (VSM) had a pivotal role in the operationalization of Lean thinking. However, this has not been recognized in SE adaptations of Lean. Furthermore, there are two main shortcomings in existing adaptations of VSM for an SE context. First, the assessments for the potential of the proposed improvements are based on idealistic assertions. Second, the current VSM notation and methodology are unable to capture the myriad of significant information flows, which in software development go beyond just the schedule information about the flow of a software artifact through a process. Objective: This thesis seeks to assess Software Process Simulation Modeling (SPSM) as a solution to the first shortcoming of VSM. In this regard, guidelines to perform simulation-based studies in industry are consolidated, and the usefulness of VSM supported with SPSM is evaluated. To overcome the second shortcoming of VSM, a suitable approach for capturing rich information flows in software development is identified and its usefulness to support VSM is evaluated. Overall, an attempt is made to supplement existing guidelines for conducting VSM to overcome its known shortcomings and support adoption of Lean thinking in SE. The usefulness and scalability of these proposals are evaluated in an industrial setting. Method: Three literature reviews, one systematic literature review, four industrial case studies, and a case study in an academic context were conducted as part of this research. Results: Little evidence to substantiate the claims of the usefulness of SPSM was found. Hence, prior to combining it with VSM, we consolidated the guidelines to conduct an SPSM-based study and evaluated the use of SPSM in academic and industrial contexts. In education, it was found to be a useful complement to other teaching methods, and in industry, it triggered useful discussions and was used to challenge practitioners' perceptions about the impact of existing challenges and proposed improvements. The combination of VSM with FLOW (a method and notation to capture information flows, since existing VSM adaptations for SE are insufficient for this purpose) was successful in identifying challenges and improvements related to information needs in the process. Both proposals to support VSM with simulation and FLOW led to identification of waste and improvements (which would not have been possible with conventional VSM), generated more insightful discussions and resulted in more realistic improvements. Conclusion: This thesis characterizes the context and shows how SPSM was beneficial in both the industrial and academic contexts. FLOW was found to be a scalable, lightweight supplement to strengthen the information flow analysis in VSM. Through successful industrial application and uptake, this thesis provides evidence of the usefulness of the proposed improvements to the VSM activities.

  • 34.
    Bin Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Inst Technol, Karlskrona, Sweden..
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Inst Technol, Karlskrona, Sweden..
    Nicolau de Franca, Breno Bernard
    Univ Fed Rio de Janeiro, ESE Grp, PESC COPPE, BR-68511 Rio De Janeiro, Brazil..
    Evaluation of simulation-assisted value stream mapping for software product development: Two industrial cases2015In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 68, 45-61 p.Article in journal (Refereed)
    Abstract [en]

    Context: Value stream mapping (VSM) as a tool for lean development has led to significant improvements in different industries. In a few studies, it has been successfully applied in a software engineering context. However, some shortcomings have been observed, in particular a failure to capture the dynamic nature of the software process when evaluating improvements, i.e. such improvements and target values are based on idealistic situations. Objective: To overcome the shortcomings of VSM by combining it with software process simulation modeling, and to provide reflections on the process of conducting VSM with simulation. Method: Using case study research, VSM was used for two products at Ericsson AB, Sweden. Ten workshops were conducted in this regard. Simulation in this study was used as a tool to support discussions instead of as a prediction tool. The results have been evaluated from the perspective of the participating practitioners, an external observer, and reflections of the researchers conducting the simulation that were elicited by the external observer. Results: Significant constraints hindering the product development from reaching the stated improvement goals for shorter lead time were identified. The use of simulation was particularly helpful for having more insightful discussions and for challenging assumptions about the likely impact of improvements. However, simulation results alone were found insufficient to emphasize the importance of reducing waiting times and variations in the process. Conclusion: The framework to assist VSM with simulation presented in this study was successfully applied in two cases. The involvement of various stakeholders, consensus building steps, emphasis on flow (through waiting time and variance analysis) and the use of simulation proposed in the framework led to realistic improvements with a high likelihood of implementation. (C) 2015 Elsevier B.V. All rights reserved.
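
    To give a flavour of how simulation can make waiting time and its variation tangible in a VSM discussion, the sketch below simulates a single, deliberately simplified work station with exponential arrival and service times; it is only an illustration under these assumptions and is not the model used in the study.

        # Simplified single-station queue; the arrival and service rates are hypothetical.
        import random, statistics

        random.seed(1)

        def simulate(arrival_rate, service_rate, n_items=10000):
            clock = 0.0
            station_free_at = 0.0
            waits = []
            for _ in range(n_items):
                clock += random.expovariate(arrival_rate)    # next work item arrives
                start = max(clock, station_free_at)          # item waits if the station is busy
                waits.append(start - clock)
                station_free_at = start + random.expovariate(service_rate)
            return statistics.mean(waits), statistics.stdev(waits)

        # Mean waiting time and its spread for a station loaded at 80% of capacity.
        print(simulate(arrival_rate=0.8, service_rate=1.0))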

  • 35.
    bin Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A systematic literature review on the industrial use of software process simulation2014In: Journal of Systems and Software, ISSN 0164-1212, Vol. 97Article in journal (Refereed)
    Abstract [en]

    Context: Software process simulation modelling (SPSM) captures the dynamic behaviour and uncertainty in the software process. Existing literature has conflicting claims about its practical usefulness: SPSM is useful and has an industrial impact; SPSM is useful and has no industrial impact yet; SPSM is not useful and has little potential for industry. Objective: To assess the conflicting standpoints on the usefulness of SPSM. Method: A systematic literature review was performed to identify, assess and aggregate empirical evidence on the usefulness of SPSM. Results: In the primary studies, to date, the persistent trend is that of proof-of-concept applications of software process simulation for various purposes (e.g. estimation, training, process improvement, etc.). They score poorly on the stated quality criteria. Also, only a few studies report some initial evaluation of the simulation models for the intended purposes. Conclusion: There is a lack of conclusive evidence to substantiate the claimed usefulness of SPSM for any of the intended purposes. A few studies that report the cost of applying simulation do not support the claim that it is an inexpensive method. Furthermore, there is a paramount need for improvement in conducting and reporting simulation studies, with an emphasis on evaluation against the intended purpose.

  • 36.
    bin Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Use and evaluation of simulation for software process education: a case study2014Conference paper (Refereed)
    Abstract [en]

    Software Engineering is an applied discipline, and its concepts are difficult to grasp at a theoretical level alone. In the context of a project management course, we introduced and evaluated the use of software process simulation (SPS) based games for improving students' understanding of software development processes. The effects of the intervention were measured by evaluating the students' arguments for choosing a particular development process. The arguments were assessed with the Evidence-Based Reasoning framework, which was extended to assess the strength of an argument. The results indicate that students generally have difficulty providing strong arguments for their choice of process models. Nevertheless, the assessment indicates that the intervention of the SPS game had a positive impact on the students' arguments. Even though the illustrated argument assessment approach can be used to provide formative feedback to students, its use is rather costly and cannot be considered a replacement for traditional assessments.

  • 37.
    Bjarnason, Elizabeth
    et al.
    Lund Univ, SWE.
    Morandini, Mirko
    Fdn Bruno Kessler, ITA.
    Borg, Markus
    Lund Univ, SWE.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Felderer, Michael
    Univ Innsbruck, AUT.
    Staats, Matthew
    Google Inc, CHE.
    2nd International Workshop on Requirements Engineering and Testing (RET 2015)2015In: 2015 IEEE/ACM 37TH IEEE INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING, VOL 2, IEEE , 2015, 997-998 p.Conference paper (Refereed)
    Abstract [en]

    The RET (Requirements Engineering and Testing) workshop provides a meeting point for researchers and practitioners from the two separate fields of Requirements Engineering (RE) and Testing. The goal is to improve the connection and alignment of these two areas through an exchange of ideas, challenges, practices, experiences and results. The long-term aim is to build a community and a body of knowledge within the intersection of RE and Testing. One of the main outputs of the 1st workshop was a collaboratively constructed map of the area of RET showing the topics relevant to RET. The 2nd workshop will continue in the same interactive vein and include a keynote, paper presentations with ample time for discussions, and a group exercise. For true impact and relevance, this cross-cutting area requires contributions from both RE and Testing, and from both researchers and practitioners. For that reason we welcome a range of paper contributions, from short experience papers to full research papers, that clearly cover connections between the two fields.

  • 38. Bjarnason, Elizabeth
    et al.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Borg, Markus
    Engström, Emelie
    A Multi-Case Study of Agile Requirements Engineering and the Use of Test Cases as Requirements2016In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 77, 61-79 p.Article in journal (Refereed)
    Abstract [en]

    [Context] It is an enigma that agile projects can succeed 'without requirements' when weak requirements engineering is a known cause for project failures. While agile development projects often manage well without extensive requirements, test cases are commonly viewed as requirements and detailed requirements are documented as test cases. [Objective] We have investigated this agile practice of using test cases as requirements to understand how test cases can support the main requirements activities, and how this practice varies. [Method] We performed an iterative case study at three companies and collected data through 14 interviews and 2 focus groups. [Results] The use of test cases as requirements poses both benefits and challenges when eliciting, validating, verifying, and managing requirements, and when used as a documented agreement. We have identified five variants of the test-cases-as-requirements practice, namely de facto, behaviour-driven, story-test driven, stand-alone strict and stand-alone manual, for which the application of the practice varies concerning the time frame of requirements documentation, the requirements format, the extent to which the test cases are a machine-executable specification, and the use of tools which provide specific support for the practice of using test cases as requirements. [Conclusions] The findings provide empirical insight into how agile development projects manage and communicate requirements. The identified variants of the practice of using test cases as requirements can be used to perform in-depth investigations into agile requirements engineering. Practitioners can use the provided recommendations as a guide in designing and improving their agile requirements practices based on project characteristics such as number of stakeholders and rate of change.
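
    The practice can be made concrete with a small, hypothetical example (not taken from the studied companies) of a test case doubling as a documented requirement: the test name and assertions state the expected behaviour, and the test remains a machine-executable specification as the product evolves.

        # Hypothetical test-case-as-requirement; the function and business rule are invented.
        import unittest

        def apply_discount(order_total, customer_is_member):
            # Hypothetical production rule: members get 10% off orders above 100.
            if customer_is_member and order_total > 100:
                return round(order_total * 0.9, 2)
            return order_total

        class MemberDiscountRequirement(unittest.TestCase):
            def test_members_get_ten_percent_off_orders_above_100(self):
                self.assertEqual(apply_discount(200, customer_is_member=True), 180.0)

            def test_non_members_pay_full_price(self):
                self.assertEqual(apply_discount(200, customer_is_member=False), 200)

        if __name__ == "__main__":
            unittest.main()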

  • 39.
    Bjarnason, Elizabeth
    et al.
    Lund University.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Engström, Emelie
    Lund University.
    Borg, Markus
    Lund University.
    An Industrial Case Study on the Use of Test Cases as Requirements2015In: Lecture Notes in Business Information, Springer, 2015, 27-39 p.Conference paper (Refereed)
    Abstract [en]

    It is a conundrum that agile projects can succeed 'without requirements' when weak requirements engineering is a known cause for project failures. While agile development projects often manage well without extensive requirements documentation, test cases are commonly used as requirements. We have investigated this agile practice at three companies in order to understand how test cases can fill the role of requirements. We performed a case study based on twelve interviews performed in a previous study. The findings include a range of benefits and challenges in using test cases for eliciting, validating, verifying, tracing and managing requirements. In addition, we identified three scenarios for applying the practice, namely as a mature practice, as a de facto practice and as part of an agile transition. The findings provide insights into how the role of requirements may be met in agile development, including challenges to consider.

  • 40.
    Bjäreholt, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    RISC-V Compiler Performance:A Comparison between GCC and LLVM/clang2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    RISC-V is a new open-source instruction set architecture (ISA) whose first mass-produced processors were manufactured in December 2016. It focuses on both efficiency and performance and differs from other open-source architectures by not having a copyleft license, permitting vendors to freely design, manufacture and sell RISC-V chips without any fees and without having to share their modifications to the reference implementations of the architecture. The goal of this thesis is to evaluate the performance of the GCC and LLVM/clang compilers' support for the RISC-V target and their ability to optimize for the architecture. The performance is evaluated by executing the CoreMark and Dhrystone benchmarks, both popular industry-standard programs for evaluating performance on embedded processors. They are run with both the GCC and LLVM/clang compilers at different optimization levels and compared in performance per clock to the ARM architecture, which is mature yet rather similar to RISC-V. The compiler support for the RISC-V target is still in development, and the focus of this thesis is the current performance differences between the GCC and LLVM compilers on this architecture. The platforms on which the benchmarks are executed are the Freedom E310 processor on the SiFive HiFive1 board for RISC-V and an ARM Cortex-M4 processor by Freescale on the Teensy 3.6 board. The Freedom E310 is almost identical to the reference Berkeley Rocket RISC-V design, and the ARM Cortex-M4 processor has a similar clock speed and is aimed at a similar target audience. The results show that the -O2 and -O3 optimization levels on GCC for RISC-V performed very well in comparison to our ARM reference. At the lower -O1 level, at -O0 (no optimizations) and at -Os (optimizing for a smaller executable code size), GCC performed much worse than ARM: 46% of the ARM performance at -O1, 8.2% at -Os and 9.3% at -O0 on the CoreMark benchmark, with similar results in Dhrystone except at -O1, where it performed as well as ARM. When turning off optimizations (-O0), GCC for RISC-V reached 9.2% of the ARM performance in CoreMark and 11% in Dhrystone, which was unexpected and needs further investigation. LLVM/clang, on the other hand, crashed when trying to compile our CoreMark benchmark, and on Dhrystone the optimization options had a very minor impact on performance, leaving it at 6.0% of the performance of GCC at -O3 and 5.6% of the performance of ARM at -O3, so even with optimizations it was still slower than GCC without optimizations. In conclusion, the performance of RISC-V with the GCC compiler at the higher optimization levels is very good considering how young the RISC-V architecture is. There seems to be room for improvement at the lower optimization levels, which in turn could also increase the performance at the higher optimization levels. With the LLVM/clang compiler, on the other hand, a lot of work remains to make it competitive in both performance and stability with the GCC compiler and with other architectures. Why -O0 is so considerably slower on RISC-V than on ARM was also very unexpected and needs further investigation.
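
    The thesis compares benchmark results "per clock" across boards with different clock speeds; a small sketch of that normalisation is shown below (the scores and clock frequencies are hypothetical, not the thesis' measurements).

        # Normalising benchmark scores per MHz before comparing architectures.
        def per_clock(score, clock_mhz):
            """Benchmark iterations per second, normalised per MHz of core clock."""
            return score / clock_mhz

        riscv = per_clock(score=500.0, clock_mhz=256)   # hypothetical HiFive1-class result
        arm   = per_clock(score=600.0, clock_mhz=180)   # hypothetical Cortex-M4-class result
        print(f"RISC-V relative to ARM, per clock: {riscv / arm:.1%}")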

  • 41.
    Björneskog, Amanda
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Goniband Shoshtari, Nima
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparison of Security and Risk awareness between different age groups2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The Internet has become a 'necessity' in the everyday life of just below 50% of the world population. While the growth of the Internet has created a great platform for helping people and making life easier, it has also brought a lot of malicious situations. Nowadays people hack or use social engineering on others for a living; scamming and fraud are part of their daily lives. Therefore security awareness is truly important and sometimes vital. We wanted to look at the difference in security awareness depending on which year you were born, in relation to the IT boom and the growth of the Internet. Does it matter whether you lived through the earlier stages of the Internet or not? We found that security awareness did increase with age, but whether this was due to the participants growing up before or after the IT boom, or to the fact that younger people tend to be more inattentive, is hard to tell. Our result is that the 16-19 age group was more prone to security risks, due to an indifferent mindset regarding their data and information.

  • 42.
    Bodireddigari, Sai Srinivas
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Framework To Measure the Trustworthiness of the User Feedback in Mobile Application Stores2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Mobile application stores like Google Play, the Apple App Store and the Windows Store have over 3 million apps. Users download applications from their respective stores and generally prefer the apps with the highest ratings. In response to this situation, application stores provide categories like editor's choice or top charts, giving better visibility to certain applications. Customer reviews play a critical role in the development of the application and the organization; however, there may be flawed reviews or biased opinions about the application due to many factors. Such biased opinions and flawed reviews are likely to make user reviews untrustworthy. The reviews and ratings in mobile application stores are used by organizations to make their applications more efficient and more adaptable to the user. This context highlights the importance of user review trustworthiness and of managing trustworthiness in user feedback by knowing the causes of mistrust. Hence, there is a need for a framework to understand the trustworthiness of user-given feedback.

    Objectives: In this study the author aims to accomplish the following objectives: first, exploring the causes of untrustworthiness in user feedback for an application in mobile application stores such as the Google Play store; second, exploring the effects of trustworthiness on users and developers; and finally, proposing a framework for managing trustworthiness in the feedback.

    Methods: To accomplish the objectives, the author used a qualitative research method. The data collection method was an interview-based survey conducted with 13 participants to find out the causes of untrustworthiness in user feedback from the user's and the developer's perspectives. The author followed thematic coding for the qualitative data analysis.

    Results: The author identifies 11 codes from the descriptions in the transcripts and explores the relationship between trustworthiness and the causes. The 11 codes were grouped into 4 themes, and a thematic network was created between the themes. The relations were then analyzed with cost-effect analysis.

    Conclusions: From the analysis, we conclude that 11 causes affect trustworthiness from the user's perspective and 9 causes affect trustworthiness from the developer's perspective. Segregating trustworthy feedback from untrustworthy feedback is important for developers, as the next releases should be planned based on it. Finally, inclusion and exclusion criteria to help developers manage trustworthy user feedback are defined.

  • 43.
    Borg, Markus
    et al.
    RISE SICS AB, SWE.
    Alegroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Runeson, Per
    Lunds Universitet, SWE.
    Software Engineers' Information Seeking Behavior in Change Impact Analysis: An Interview Study2017In: IEEE International Conference on Program Comprehension, IEEE Computer Society , 2017, 12-22 p.Conference paper (Refereed)
    Abstract [en]

    Software engineers working in large projects must navigate complex information landscapes. Change Impact Analysis (CIA) is a task that relies on engineers' successful information seeking in databases storing, e.g., source code, requirements, design descriptions, and test case specifications. Several previous approaches to support information seeking are task-specific, thus understanding engineers' seeking behavior in specific tasks is fundamental. We present an industrial case study on how engineers seek information in CIA, with a particular focus on traceability and development artifacts that are not source code. We show that engineers have different information seeking behavior, and that some do not consider traceability particularly useful when conducting CIA. Furthermore, we observe a tendency for engineers to prefer less rigid types of support rather than formal approaches, i.e., engineers value support that allows flexibility in how to practically conduct CIA. Finally, due to diverse information seeking behavior, we argue that future CIA support should embrace individual preferences to identify change impact by empowering several seeking alternatives, including searching, browsing, and tracing. © 2017 IEEE.

  • 44.
    Borg, Markus
    et al.
    SICS Swedish ICT AB, SWE.
    Luis de la Vara, Jose
    Univ Carlos III Madrid, ESP.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Practitioners' Perspectives on Change Impact Analysis for Safety-Critical Software: A Preliminary Analysis2016In: COMPUTER SAFETY, RELIABILITY, AND SECURITY, SAFECOMP 2016, 2016, 346-358 p.Conference paper (Refereed)
    Abstract [en]

    Safety standards prescribe change impact analysis (CIA) during evolution of safety-critical software systems. Although CIA is a fundamental activity, there is a lack of empirical studies about how it is performed in practice. We present a case study on CIA in the context of an evolving automation system, based on 14 interviews in Sweden and India. Our analysis suggests that engineers on average spend 50-100 h on CIA per year, but the effort varies considerably with the phases of projects. Also, the respondents presented different connotations to CIA and perceived the importance of CIA differently. We report the most pressing CIA challenges, and several ideas on how to support future CIA. However, we show that measuring the effect of such improvement solutions is non-trivial, as CIA is intertwined with other development activities. While this paper only reports preliminary results, our work contributes empirical insights into practical CIA.

  • 45.
    Borg, Markus
    et al.
    SICS Swedish ICT AB, SWE.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Regnell, Björn
    Lund University, SWE.
    Runeson, Per
    Lund University, SWE.
    Supporting Change Impact Analysis Using a Recommendation System: An Industrial Case Study in a Safety-Critical Context2017In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 43, no 7, 675-700 p.Article in journal (Refereed)
    Abstract [en]

    Change Impact Analysis (CIA) during software evolution of safety-critical systems is a labor-intensive task. Several authors have proposed tool support for CIA, but very few tools were evaluated in industry. We present a case study on ImpRec, a Recommendation System for Software Engineering (RSSE) tailored for CIA at a process automation company. ImpRec builds on assisted tracing, using information retrieval solutions and mining software repositories to recommend development artifacts potentially impacted when resolving incoming issue reports. In contrast to the majority of tools for automated CIA, ImpRec explicitly targets development artifacts that are not source code. We evaluate ImpRec in a two-phase study. First, we measure the correctness of ImpRec's recommendations by a simulation based on 12 years' worth of issue reports in the company. Second, we assess the utility of working with ImpRec by deploying the RSSE in two development teams on different continents. The results suggest that ImpRec presents about 40 percent of the true impact among the top-10 recommendations. Furthermore, user log analysis indicates that ImpRec can support CIA in industry, and developers acknowledge the value of ImpRec in interviews. In conclusion, our findings show the potential of reusing traceability associated with developers' past activities in an RSSE.
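
    The information-retrieval side of such a recommender can be sketched generically: rank historical issue reports by textual similarity to an incoming report and surface the artifacts linked to the most similar ones. The sketch below is not ImpRec's implementation; it assumes scikit-learn is available, and all issue texts and artifact links are invented.

        # Generic TF-IDF similarity over past issue reports (not ImpRec itself).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        past_issues = [
            "alarm handler crashes when the I/O board restarts",          # hypothetical
            "operator panel freezes during configuration download",
            "logging service drops events under high load",
        ]
        linked_artifacts = [["REQ-12", "TC-88"], ["REQ-40"], ["REQ-7", "TC-13"]]  # hypothetical links

        vectorizer = TfidfVectorizer()
        index = vectorizer.fit_transform(past_issues)

        new_issue = ["crash in alarm handling after board restart"]
        scores = cosine_similarity(vectorizer.transform(new_issue), index)[0]

        # Recommend reviewing the artifacts linked to the most similar past issues.
        ranked = sorted(zip(scores, past_issues, linked_artifacts), key=lambda t: t[0], reverse=True)
        for score, issue, artifacts in ranked[:2]:
            print(f"{score:.2f}  {issue}  ->  {artifacts}")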

  • 46.
    Bramah-Lawani, Alex
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Requirements Elicitation and Specification for Haptic Interfaces for Visually Impaired Users2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 47.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Knowledge Classification for Supporting Effort Estimation in Global Software Engineering Projects2015Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: Global Software Engineering (GSE) has become a widely applied operational model for the development of software systems; it can increase profits and decrease time-to-market. However, there are many challenges associated with development of software in a globally distributed fashion. There is evidence that these challenges affect many process related to software development, such as effort estimation. To the best of our knowledge, there are no empirical studies to gather evidence on effort estimation in the GSE context. In addition, there is no common terminology for classifying GSE scenarios focusing on effort estimation.

    Objective: The main objective of this thesis is to support effort estimation in the GSE context by providing a taxonomy to classify the existing knowledge in this field.

    Method: Systematic literature review (to identify and analyze the state of the art), survey (to identify and analyze the state of the practice), systematic mapping (to identify practices to design software engineering taxonomies), and literature survey (to complement the states of the art and practice) were the methods employed in this thesis.

    Results: The results on the states of the art and practice show that the effort estimation techniques employed in the GSE context are the same techniques used in the collocated context. It was also identified that global aspects, e.g. time, geographical and social-cultural distances, are accounted for as cost drivers, although it is not clear how they are measured. As a result of the conducted mapping study, we reported a method that can be used to design new SE taxonomies. The aforementioned results were combined to extend and specialize an existing GSE taxonomy, for suitability for effort estimation. The usage of the specialized GSE effort estimation taxonomy was illustrated by classifying 8 finished GSE projects. The results show that the specialized taxonomy proposed in this thesis is comprehensive enough to classify GSE projects focusing on effort estimation.

    Conclusions: The taxonomy presented in this thesis will help researchers and practitioners to report new research on effort estimation in the GSE context; researchers and practitioners will be able to gather evidence, compare new studies and find new gaps in an easier way. The findings from this thesis show that more research must be conducted on effort estimation in the GSE context. For example, the way the cost drivers are measured should be further investigated. It is also necessary to conduct further research to clarify the role and impact of sourcing strategies on the effort estimates' accuracies. Finally, we believe that it is possible to design an instrument based on the specialized GSE effort estimation taxonomy that helps practitioners to perform the effort estimation process in a way tailored for the specific needs of the GSE context.
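
    The results above note that global aspects (temporal, geographical and socio-cultural distance) are accounted for as cost drivers without a clear measurement. As a generic illustration only (not a technique from the reviewed studies), cost drivers are often applied as multipliers on a nominal estimate; all driver values below are hypothetical.

        # Generic multiplicative cost-driver adjustment of a nominal effort estimate.
        def adjusted_effort(nominal_effort_hours, cost_drivers):
            effort = nominal_effort_hours
            for multiplier in cost_drivers.values():
                effort *= multiplier
            return effort

        drivers = {
            "temporal_distance": 1.10,        # limited overlap in working hours
            "geographical_distance": 1.05,    # travel and on-site coordination overhead
            "sociocultural_distance": 1.15,   # communication and norm differences
        }
        print(adjusted_effort(1000, drivers))  # nominal 1000 person-hours, adjusted upward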

  • 48.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Strategizing and Evaluating the Onboarding of Software Developers in Large-Scale Globally Distributed Legacy Projects2017Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: Recruitment and onboarding of software developers are essential steps in software development undertakings. The need for adding new people is often associated with large-scale, long-living projects and with globally distributed projects. The former are challenging because they may contain large amounts of legacy (and often complex) code (legacy projects). The latter are challenging because the inability to find sufficient resources in-house may lead to onboarding people at a distance, often in many distinct sites. While onboarding is of great importance for companies, there is little research about the challenges and implications associated with onboarding software developers and teams in large-scale globally distributed projects with large amounts of legacy code. Furthermore, no study has proposed any systematic approach to support the design of onboarding strategies and the evaluation of onboarding results in the aforementioned context.

    Objective: The aim of this thesis is two-fold: i) identify the challenges and implications associated with onboarding software developers and teams in large-scale globally distributed legacy projects; and ii) propose solutions to support the design of onboarding strategies and evaluation of onboarding results in large-scale globally distributed legacy projects.

    Method: In this thesis, we employed literature review, case study, and business process modeling. The main case investigated in this thesis is the development of a legacy telecommunication software product in Ericsson.

    Results: The results show that the performance (productivity, autonomy, and lead time) of new developers/teams onboarded in remote locations in large-scale distributed legacy projects is much lower than the performance of mature teams. This suggests that new teams have a considerable performance gap to overcome. Furthermore, we learned that onboarding problems can be amplified by the following challenges: the complexity of the product and technology stack, distance to the main source of product knowledge, lack of team stability, training expectation misalignment, and lack of formalism and control over onboarding strategies employed in different sites of globally distributed projects. To help companies addressing the challenges we identified in this thesis, we propose a process to support the design of onboarding strategies and the evaluation of onboarding results.

    Conclusions: The results show that scale, distribution and complex legacy code may make onboarding more difficult and demand longer periods of time for new developers and teams to achieve high performance. This means that onboarding in large-scale globally distributed legacy projects must be planned well ahead, and companies must be prepared to provide extended periods of mentoring by expensive and scarce resources, such as software architects. Failure to foresee and plan for such resources may result in inaccurate effort estimates on the one hand, and unavailability of mentors on the other. The process put forward herein can help companies to deal with the aforementioned problems through more systematic, effective and repeatable onboarding strategies.

  • 49.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Cruzes, Daniella
    SINTEF Digital, NOR.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Šāblis, Aivars
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Onboarding Software Developers and Teams in Three Globally Distributed Legacy Projects: A Multi-Case Study2017In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481Article in journal (Refereed)
    Abstract [en]

    Onboarding is the process of supporting new employees regarding their social and performance adjustment to their new job. Software companies have faced challenges with recruitment and onboarding of new team members and there is no study that investigates it in a holistic way. In this paper, we conducted a multi-case study to investigate the onboarding of software developers/teams, associated challenges, and areas for further improvement in three globally distributed legacy projects. We employed Bauer's model for onboarding to identify the current state of the onboarding strategies employed in each case. We learned that the employed strategies are semi-formalized. Besides, in projects with multiple sites, some functions are executed locally and the onboarding outcomes may be hard to control. We also learned that onboarding in legacy projects is especially challenging and that decisions to distribute such projects across multiple locations shall be approached carefully. In our cases, the challenges to learn legacy code were further amplified by the project scale and the distance to the original sources of knowledge. Finally, we identified practices that can be used by companies to increase the chances of being successful when onboarding software developers and teams in globally distributed legacy projects.

  • 50.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Freitas, Vitor
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Effort Estimation in Global Software Development: A systematic Literature Review2014In: Proceedings of the 2014 9th IEEE International Conference on Global Software Engineering, 2014, 135-144 p.Conference paper (Refereed)
    Abstract [en]

    Nowadays, software systems are a key factor in the success of many organizations, as in most cases they play a central role in helping them attain a competitive advantage. However, despite their importance, software systems may be quite costly to develop, substantially decreasing companies' profits. In order to tackle this challenge, many organizations look for ways to decrease costs and increase profits by applying new software development approaches, like Global Software Development (GSD). Some aspects of a software project, like communication, cooperation and coordination, are more challenging in globally distributed than in co-located projects, since language, cultural and time zone differences are factors which can increase the effort required to perform a software project globally. Communication, coordination and cooperation aspects directly affect the effort estimation of a project, which is one of the critical tasks in the management of a software development project. There are many studies related to effort estimation methods/techniques for co-located projects. However, there is evidence that the co-located approaches do not fit GSD. Therefore, this paper presents the results of a systematic literature review of effort estimation in the context of GSD, which aimed to help both researchers and practitioners gain a holistic view of the current state of the art regarding effort estimation in the context of GSD. The results suggest that there is room to improve the current state of the art on effort estimation in GSD.
