1 - 50 of 1165
  • 1.
    Abdeen, Waleed
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Reducing the Distance Between Requirements Engineering and Verification, 2022. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: Requirements engineering and verification (REV) processes play essential roles in software product development. There are physical and non-physical distances between entities (actors, artifacts, and activities) in these processes. Current practices that reduce the distances, such as automated testing and alignment of document structure and tracing, only partially close the above-mentioned gap. Objective: The aim of this thesis is to investigate solutions with respect to their ability to reduce the distances between requirements engineering and verification. Two techniques that are explored in this thesis are automated testing (model-based testing, MBT) and alignment of document structure and tracing (traceability). Method: The research methods used in this thesis are systematic mapping, software requirements mining, case study, literature survey, validation study, and design science. Results: MBT and traceability are effective in reducing the distance between requirements and verification. However, both activities have some shortcomings that need to be addressed when used for that purpose. Current MBT techniques in the context of software performance do not attain all the goals of MBT: 1) requirements validation, 2) checking the testability of requirements, and 3) the generation of an efficient test suite. These goals are essential to reduce the distance. We developed and assessed a performance requirements verification and test environment generation approach to tackle these shortcomings. Also, traceability between requirements and verification suffers from the low granularity of trace links and does not support the verification of all requirements. We propose the use of taxonomic trace links to trace and align the structure of requirements specifications and verification artifacts. The results from the validation study show that the solution is feasible in practice. However, this comes with challenges that need to be addressed. Conclusion: MBT and improved traceability reduce multiple distances between actors, artifacts, and activities in the requirements engineering and verification process. MBT is most effective in reducing the distances when the model used is built from the requirements. Traceability is essential in easing access to relevant information when needed and should not be seen as an overhead. When creating trace links, we need to consider the differences in abstraction, structure, and time between the linked artifacts.

    Download full text (pdf)
  • 2.
    Abdeen, Waleed
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Taxonomic Trace Links Recommender: Context Aware Hierarchical Classification, 2023. In: CEUR Workshop Proceedings / [ed] Ferrari A., Penzenstadler B., Hadar I., Oyedeji S., Abualhaija S., Vogelsang A., Deshpande G., Rachmann A., Gulden J., Wohlgemuth A., Hess A., Fricker S., Guizzardi R., Horkoff J., Perini A., Susi A., Karras O., Dalpiaz F., Moreira A., Amyot D., Spoletini P., CEUR-WS, 2023, Vol. 3378. Conference paper (Refereed)
    Abstract [en]

    In the taxonomic trace links concept, the source and target artifacts are connected through a knowledge organization structure (e.g., a taxonomy). In this paper, we introduce a recommender system that recommends labels from a domain-specific taxonomy for requirements artifacts in order to establish taxonomic trace links. The tool exploits the hierarchical nature of taxonomies and uses requirements text and context information as input to the recommender. © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

    Download full text (pdf)
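    The paper's tool is not reproduced here; as a rough illustration of the idea of context-aware hierarchical classification over a taxonomy, the following minimal sketch (hypothetical taxonomy, token-overlap scoring, and names invented for demonstration) recommends labels top-down, only descending into branches that match the requirement text and its context:

```python
# Illustrative sketch only; the taxonomy, scoring and names are hypothetical,
# not the recommender described in the paper.
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    label: str
    keywords: set[str]
    children: list["TaxonomyNode"] = field(default_factory=list)

def score(node: TaxonomyNode, tokens: set[str]) -> float:
    # Jaccard overlap between requirement tokens and the node's keywords.
    union = tokens | node.keywords
    return len(tokens & node.keywords) / len(union) if union else 0.0

def recommend(root: TaxonomyNode, requirement: str, context: str = "",
              threshold: float = 0.1) -> list[str]:
    # Context information (e.g., a section heading) is appended to the text.
    tokens = set((requirement + " " + context).lower().split())
    labels, frontier = [], [root]
    while frontier:
        node = frontier.pop()
        # Exploit the hierarchy: only descend into branches that score well.
        matching = [c for c in node.children if score(c, tokens) >= threshold]
        labels.extend(c.label for c in matching)
        frontier.extend(matching)
    return labels

taxonomy = TaxonomyNode("root", set(), [
    TaxonomyNode("bridge", {"bridge", "span", "deck"}),
    TaxonomyNode("road", {"road", "lane", "asphalt"},
                 [TaxonomyNode("drainage", {"drainage", "culvert", "water"})]),
])
print(recommend(taxonomy, "The road drainage shall handle water flow",
                context="road design"))  # -> ['road', 'drainage']
```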
  • 3.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chen, Xingru
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Model-Based Testing for Performance Requirements: A Systematic Mapping Study and A Sample Study, 2019. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Model-Based Testing (MBT) is a method that supports automated test design by using a model. Although it is adopted in industry, it is still an open area within performance requirements. We aim to look into MBT for performance requirements and find a framework that can model such requirements. We conducted a systematic mapping study, followed by a sample study on software requirements specifications; we then introduced the Performance Requirements Verification and Validation (PRVV) model and, finally, completed another sample study to see how the model works in practice. We found that many models can be used for performance requirements, but their maturity is not yet sufficient. MBT can be implemented in the context of performance, and it has been gaining momentum in recent years. The PRVV model we developed can verify performance requirements and help generate test cases.

    Download full text (pdf)
  • 4.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chen, Xingru
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An approach for performance requirements verification and test environments generation, 2023. In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 28, no 1, p. 117-144. Article in journal (Refereed)
    Abstract [en]

    Model-based testing (MBT) is a method that supports the design and execution of test cases by models that specify the intended behaviors of a system under test. While systematic literature reviews on MBT in general exist, the state of the art on modeling and testing performance requirements has seen much less attention. Therefore, we conducted a systematic mapping study on model-based performance testing. Then, we studied natural language software requirements specifications in order to understand which and how performance requirements are typically specified. Since none of the identified MBT techniques supported a major benefit of modeling, namely identifying faults in requirements specifications, we developed the Performance Requirements verificatiOn and Test EnvironmentS generaTion approach (PRO-TEST). Finally, we evaluated PRO-TEST on 149 requirements specifications. We found and analyzed 57 primary studies from the systematic mapping study and extracted 50 performance requirements models. However, those models do not achieve the goals of MBT, which are validating requirements, ensuring their testability, and generating the minimum required test cases. We analyzed 77 Software Requirements Specification (SRS) documents, extracted 149 performance requirements from those SRS, and illustrate that with PRO-TEST we can model performance requirements, find issues in those requirements and detect missing ones. We detected three not-quantifiable requirements, 43 not-quantified requirements, and 180 underspecified parameters in the 149 modeled performance requirements. Furthermore, we generated 96 test environments from those models. By modeling performance requirements with PRO-TEST, we can identify issues in the requirements related to their ambiguity, measurability, and completeness. Additionally, it allows us to generate parameters for test environments.

    Download full text (pdf)
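    PRO-TEST itself is not reproduced here, but the kinds of findings it reports (not-quantifiable requirements, not-quantified requirements, and underspecified parameters) can be illustrated with a minimal sketch; the regular expressions, unit list and labels below are assumptions for demonstration only:

```python
# Hedged illustration of quantification checks on performance requirements;
# this is not the PRO-TEST implementation.
import re

UNITS = r"(?:milliseconds?|ms|seconds?|s|minutes?|requests?/s|users?|tps|%)"

def check(requirement: str) -> str:
    has_number = re.search(r"\d+(?:\.\d+)?", requirement)
    has_unit = re.search(rf"\d+(?:\.\d+)?\s*{UNITS}", requirement, re.IGNORECASE)
    if not has_number:
        return "not quantified"            # e.g. "the system shall respond fast"
    if not has_unit:
        return "underspecified parameter"  # a number without a measurable unit
    return "quantified"

for req in [
    "The system shall respond quickly under load.",
    "Search results shall be returned within 2.",
    "Search results shall be returned within 200 ms for 100 users.",
]:
    print(check(req), "-", req)
```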
  • 5.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chirtoglou, Alexandros
    HOCHTIEF ViCon GmbH, DEU.
    Paul Schimanski, Christoph
    HOCHTIEF ViCon GmbH, DEU.
    Goli, Heja
    HOCHTIEF ViCon GmbH, DEU.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Taxonomic Trace Links - Rethinking Traceability and its Benefits. Manuscript (preprint) (Other academic)
    Abstract [en]

    Background: Traceability is an important quality of artifacts that are used in knowledge-intensive tasks. When project budgets and time pressure are a reality, this often leads to a down-prioritization of creating trace links. Objective: We propose a new idea that uses knowledge organization structures, such as taxonomies, ontologies and thesauri, as an auxiliary artifact to establish trace links. In order to investigate the novelty and feasibility of this idea, we study traceability in the area of requirements engineering. Method: First, we conduct a literature survey to investigate to what extent and how auxiliary artifacts have been used in the past for requirements traceability. Then, we conduct a validation study in industry, testing the idea of taxonomic trace links with realistic artifacts. Results: We have reviewed 126 studies that investigate requirements traceability; ninety-one of them use auxiliary artifacts in the traceability process. In the validation study, while we encountered six challenges when classifying requirements with a domain-specific taxonomy, we found that designers and engineers are able to classify design objects comprehensively and reliably. Conclusions: The idea of taxonomic trace links is novel and feasible in practice. However, the identified challenges need to be addressed to allow for adoption in practice and enable a transfer to software-intensive contexts.

  • 6.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chirtoglou, Alexandros
    HOCHTIEF ViCon GmbH, Essen, DEU.
    Challenges of Requirements Communication and Digital Assets Verification in Infrastructure Projects. Manuscript (preprint) (Other academic)
    Abstract [en]

    Context: In infrastructure projects with design-build contracts, the supplier delivers digital assets (e.g., 2D or 3D models) as a part of the design deliverable. These digital assets should align with the customer requirements. Poor requirements communication between the customer and the supplier is one of the reasons for project overrun. To the best of our knowledge, no study has yet investigated challenges in requirements communication in the customer-supplier interface. Objective: In this article, we investigated the processes of requirements validation, requirements communication, and digital assets verification, and explored the challenges associated with these processes. Methods: We conducted two exploratory case studies. We interviewed ten experts working with digital assets from three companies working on two infrastructure projects (road and railway). Results: We illustrate the activities, stakeholders, and artifacts involved in requirements communication, requirements validation, and digital asset verification. Furthermore, we identified 14 challenges (in four clusters: requirements quality, trace links, common requirements engineering (RE), and project management) and their causes and consequences in those processes. Conclusion: Communication between the client and supplier in sub-contracted work in infrastructure projects is often indirect. This puts pressure on the quality of the tender documents (mainly requirements documents) that provide the means for communication and control the design verification processes. Hence, it is crucial to ensure the quality of the requirements documents by implementing quality assurance techniques.

  • 7.
    Abheeshta, Putta
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparative Analysis of Software Development Practices across Software Organisations: India and Sweden, 2016. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. System Development Methodologies (SDMs) have been an area of intensive research in the field of software engineering. Different software organisations adopt different development methodologies and use different development practices. The frequency of usage of development practices and the acceptance factors for adoption of a development methodology are crucial for software organisations. The acceptance factors and development practices differ across geographical locations. Many challenges have been presented in the literature with respect to the mismatch of development practices across organisations collaborating in distributed development. Little research has been done on the differences across development practices and the acceptance factors for adoption of a particular development methodology. Objectives. The primary objectives of the research are to find out a) differences in (i) practice usage and (ii) acceptance factors such as organisational, social and cultural factors, and b) to explore the reasons for the differences and investigate the consequences of such differences while collaborating, across organisations located in India and Sweden. Methods. A literature review was conducted by searching scientific databases to identify common agile and plan-driven development practices and acceptance theories for development methodologies. A survey was conducted across organisations located in India and Sweden to find out the usage frequency of development practices and acceptance factors. Ten interviews were conducted with software practitioners from organisations located in India and Sweden to investigate the reasons for and consequences of the differences. Literature evidence was used to support the results collected from the interviews. Results. From the survey, organisations in India adopted plan-driven practices at a higher frequency than those in Sweden, while agile practices were adopted at a higher frequency in Sweden than in India. The number of organisations adopting "pure agile" methodologies was significantly higher in Sweden. Significant differences were found across acceptance factors such as cultural, organisational, image and career factors between India and Sweden. Factors such as cultural, social, human, business and organisational factors are responsible for these differences across development practices and acceptance factors. Challenges related to communication, coordination and control were found due to the differences when collaborating between Indian and Swedish sites. Conclusions. The study signifies the importance of identifying the frequency of development practices and the acceptance factors responsible for the adoption of development methodologies in software organisations. A mismatch between these practices will lead to various challenges. The study draws insights into various non-technical factors such as cultural, human, organisational, business and social factors in collaboration between organisations. Variations across these factors will lead to many coordination, communication and control issues. Keywords: Development Practices, Agile Development, Plan Driven Development, Acceptance Factors, Global Software Development.

    Download full text (pdf)
  • 8.
    Abrahão, Silvia
    et al.
    Universitat Politècnica de València, ESP.
    Mendez, Daniel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Message from the Artifact Evaluation Chairs of ICSE 2021, 2021. In: Proceedings - International Conference on Software Engineering, IEEE Computer Society, 2021. Conference paper (Other academic)
  • 9.
    Abualhaija, Sallam
    et al.
    University of Luxembourg, LUX.
    Fucci, Davide
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Dalpiaz, Fabiano
    Utrecht University, NLD.
    Franch, Xavier
    Universitat Politècnica de Catalunya, ESP.
    3rd workshop on natural language processing for requirements engineering (NLP4RE'20), 2020. In: CEUR Workshop Proceedings / [ed] Sabetzadeh M., Vogelsang A., Abualhaija S., Borg M., Dalpiaz F., Daneva M., Fernandez N.C., Franch X., Fucci D., Gervasi V., Groen E., Guizzardi R., Herrmann A., Horkoff J., Mich L., Perini A., Susi A., CEUR-WS, 2020, Vol. 2584. Conference paper (Refereed)
    Download full text (pdf)
  • 10.
    Adigun, Jubril Gbolahan
    et al.
    University of Innsbruck, DEU.
    Camilli, Matteo
    Free University of Bozen–Bolzano, ITA.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Giusti, Andrea
    Fraunhofer Italia Research, ITA.
    Matt, Dominik T.
    Free University of Bozen–Bolzano, ITA.
    Perini, Anna
    University of Trento, ITA.
    Russo, Barbara
    Free University of Bozen–Bolzano, ITA.
    Susi, Angelo
    Fondazione Bruno Kessler, ITA.
    Collaborative Artificial Intelligence Needs Stronger Assurances Driven by Risks, 2022. In: Computer, ISSN 0018-9162, E-ISSN 1558-0814, Vol. 55, no 3, p. 52-63. Article in journal (Refereed)
    Abstract [en]

    Collaborative artificial intelligence systems (CAISs) aim to work with humans in a shared space to achieve a common goal, but this can pose hazards that could harm human beings. We identify emerging problems in this context and report our vision of and progress toward a risk-driven assurance process for CAISs.

  • 11. Afzal, Wasif
    et al.
    Ghazi, Ahmad Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Itkonen, Juha
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Andrews, Anneliese
    Bhatti, Khurram
    An experiment on the effectiveness and efficiency of exploratory testing, 2015. In: Empirical Software Engineering, ISSN 1382-3256, Vol. 20, no 3, p. 844-878. Article in journal (Refereed)
    Abstract [en]

    The exploratory testing (ET) approach is commonly applied in industry, but lacks scientific research. The scientific community needs quantitative results on the performance of ET taken from realistic experimental settings. The objective of this paper is to quantify the effectiveness and efficiency of ET vs. testing with documented test cases (test case based testing, TCT). We performed four controlled experiments where a total of 24 practitioners and 46 students performed manual functional testing using ET and TCT. We measured the number of identified defects in the 90-minute testing sessions, the detection difficulty, severity and types of the detected defects, and the number of false defect reports. The results show that ET found a significantly greater number of defects. ET also found significantly more defects of varying levels of difficulty, types and severity levels. However, the two testing approaches did not differ significantly in terms of the number of false defect reports submitted. We conclude that ET was more efficient than TCT in our experiment. ET was also more effective than TCT when detection difficulty, type of defects and severity levels are considered. The two approaches are comparable when it comes to the number of false defect reports submitted.

    Download full text (pdf)
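    The abstract does not state which statistical tests the authors used, so the following is only a generic sketch of how defect counts from two testing approaches could be compared for significance, using a Mann-Whitney U test on invented numbers:

```python
# Generic sketch, not the paper's analysis; the defect counts are invented.
from scipy.stats import mannwhitneyu

et_defects = [12, 9, 15, 11, 14, 10, 13]   # per-session defects, ET (invented)
tct_defects = [7, 8, 6, 9, 5, 10, 7]       # per-session defects, TCT (invented)

stat, p = mannwhitneyu(et_defects, tct_defects, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # small p suggests a significant difference
```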
  • 12. Afzal, Wasif
    et al.
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards benchmarking feature subset selection methods for software fault prediction, 2016. In: Studies in Computational Intelligence, Springer, 2016, Vol. 617, p. 33-58. Chapter in book (Refereed)
    Abstract [en]

    Despite the general acceptance that software engineering datasets often contain noisy, irrelevant or redundant variables, very few benchmark studies of feature subset selection (FSS) methods on real-life data from software projects have been conducted. This paper provides an empirical comparison of state-of-the-art FSS methods: information gain attribute ranking (IG); Relief (RLF); principal component analysis (PCA); correlation-based feature selection (CFS); consistency-based subset evaluation (CNS); wrapper subset evaluation (WRP); and an evolutionary computation method, genetic programming (GP), on five fault prediction datasets from the PROMISE data repository. For all the datasets, the area under the receiver operating characteristic curve (the AUC value averaged over 10-fold cross-validation runs) was calculated for each FSS method-dataset combination before and after FSS. Two diverse learning algorithms, C4.5 and naïve Bayes (NB), are used to test the attribute sets given by each FSS method. The results show that although there are no statistically significant differences between the AUC values for the different FSS methods for both C4.5 and NB, a smaller set of FSS methods (IG, RLF, GP) consistently select fewer attributes without degrading classification accuracy. We conclude that in general, FSS is beneficial as it helps improve the classification accuracy of NB and C4.5. There is no single best FSS method for all datasets, but IG, RLF and GP consistently select fewer attributes without degrading classification accuracy within statistically significant boundaries. © Springer International Publishing Switzerland 2016.
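    The evaluation protocol described above, AUC averaged over 10-fold cross-validation before and after feature subset selection, can be sketched as follows; the synthetic data and the use of scikit-learn's mutual information as a stand-in for information gain are assumptions, not the chapter's setup:

```python
# Sketch of the before/after-FSS comparison on synthetic data; not the
# chapter's datasets, tools or exact methods.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=300, n_features=40, n_informative=5,
                           random_state=0)

baseline = cross_val_score(GaussianNB(), X, y, cv=10, scoring="roc_auc").mean()

# Information-gain-style attribute ranking, approximated with mutual information.
fss = make_pipeline(SelectKBest(mutual_info_classif, k=5), GaussianNB())
selected = cross_val_score(fss, X, y, cv=10, scoring="roc_auc").mean()

print(f"AUC all features: {baseline:.3f}, AUC after FSS: {selected:.3f}")
```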

  • 13.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology.
    Torkar, Richard
    Blekinge Institute of Technology.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Genetic programming for cross-release fault count predictions in large and complex software projects, 2010. In: Evolutionary Computation and Optimization Algorithms in Software Engineering: Applications and Techniques / [ed] Chis, Monica, IGI Global, Hershey, USA, 2010. Chapter in book (Refereed)
    Abstract [en]

    Software fault prediction can play an important role in ensuring software quality through efficient resource allocation. This could, in turn, reduce the potentially high consequential costs due to faults. Predicting faults might be even more important with the emergence of short-timed and multiple software releases aimed at the quick delivery of functionality. Previous research in software fault prediction has indicated that there is a need i) to improve the validity of results by having comparisons among a number of data sets from a variety of software, ii) to use appropriate model evaluation measures and iii) to use statistical testing procedures. Moreover, cross-release prediction of faults has not yet achieved sufficient attention in the literature. In an attempt to address these concerns, this paper compares the quantitative and qualitative attributes of 7 traditional and machine-learning techniques for modeling the cross-release prediction of fault count data. The comparison is done using extensive data sets gathered from a total of 7 multi-release open-source and industrial software projects. These software projects together have several years of development and are from diverse application areas, ranging from a web browser to robotic controller software. Our quantitative analysis suggests that genetic programming (GP) tends to have better consistency in terms of goodness of fit and accuracy across the majority of data sets. It also has comparatively less model bias. Qualitatively, ease of configuration and complexity are less strong points for GP, even though it shows generality and gives transparent models. Artificial neural networks did not perform as well as expected, while linear regression gave average predictions in terms of goodness of fit and accuracy. Support vector machine regression and traditional software reliability growth models performed below average on most of the quantitative evaluation criteria while remaining average on most of the qualitative measures.

  • 14.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology.
    Torkar, Richard
    Blekinge Institute of Technology.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wikstrand, Greger
    KnowIT YAHM Sweden AB, SWE.
    Search-based prediction of fault-slip-through in large software projects, 2010. In: Proceedings - 2nd International Symposium on Search Based Software Engineering, SSBSE 2010, IEEE, 2010, p. 79-88. Conference paper (Refereed)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in the software testing process. Therefore, determining which software testing phases to focus improvement work on has considerable industrial interest. This paper evaluates the use of five different techniques, namely particle swarm optimization based artificial neural networks (PSO-ANN), artificial immune recognition systems (AIRS), gene expression programming (GEP), genetic programming (GP) and multiple regression (MR), for predicting the number of faults slipping through the unit, function, integration and system testing phases. The objective is to quantify the improvement potential in different testing phases by striving towards finding the right faults in the right phase. We have conducted an empirical study of two large projects from a telecommunication company developing mobile platforms and wireless semiconductors. The results are compared using simple residuals, goodness of fit and absolute relative error measures. They indicate that the four search-based techniques (PSO-ANN, AIRS, GEP, GP) perform better than multiple regression for predicting the fault-slip-through for each of the four testing phases. At the unit and function testing phases, AIRS and PSO-ANN performed better, while GP performed better at the integration and system testing phases. The study concludes that a variety of search-based techniques are applicable for predicting the improvement potential in different testing phases, with GP showing more consistent performance across two of the four test phases.
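    As a worked illustration of the comparison measures named above (residuals and absolute relative error), here is a minimal sketch with invented prediction vectors; it does not reproduce the study's data or models:

```python
# Invented numbers; only the evaluation measures are illustrated.
import numpy as np

actual = np.array([30, 45, 25, 60, 50], dtype=float)   # faults slipping through
pred_gp = np.array([28, 47, 22, 58, 55], dtype=float)  # e.g. a GP model
pred_mr = np.array([35, 38, 30, 50, 42], dtype=float)  # e.g. multiple regression

def mean_are(y, yhat):
    # Mean absolute relative error; lower is better.
    return float(np.mean(np.abs(y - yhat) / y))

for name, pred in [("GP", pred_gp), ("MR", pred_mr)]:
    residuals = actual - pred
    print(f"{name}: mean ARE = {mean_are(actual, pred):.3f}, "
          f"residual sum = {residuals.sum():+.1f}")
```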

  • 15.
    Ahmed, Usman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Cybercrime: A case study of the Menace and Consequences Internet Manipulators, 2020. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
  • 16.
    Ahnell, Fredrik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Noring, Sebastan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Learning Tools for JavaScript - A Comparison Regarding Self-Directed Learning of Basic Skills, 2021. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
  • 17.
    Ahrneteg, Jakob
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kulenovic, Dean
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Semantic Segmentation of Historical Document Images Using Recurrent Neural Networks, 2019. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background. This thesis focuses on the task of historical document semantic segmentation with recurrent neural networks. Document semantic segmentation involves the segmentation of a page into different meaningful regions and is an important prerequisite step of automated document analysis and digitisation with optical character recognition. At the time of writing, convolutional neural network based solutions are the state-of-the-art for analyzing document images while the use of recurrent neural networks in document semantic segmentation has not yet been studied. Considering the nature of a recurrent neural network and the recent success of recurrent neural networks in document image binarization, it should be possible to employ a recurrent neural network for document semantic segmentation and further achieve high performance results.

    Objectives. The main objective of this thesis is to investigate if recurrent neural networks are a viable alternative to convolutional neural networks in document semantic segmentation. By using a combination of a convolutional neural network and a recurrent neural network, another objective is also to determine if the performance of the combination can improve upon the existing case of only using the recurrent neural network.

    Methods. To investigate the impact of recurrent neural networks in document semantic segmentation, three different recurrent neural network architectures are implemented and trained, and their performance is evaluated with Intersection over Union. Afterwards, their segmentation results are compared to a convolutional neural network. By performing pre-processing on training images and multi-class labeling, prediction images are ultimately produced by the employed models.

    Results. The results from the gathered performance data show a 2.7% performance difference between the best recurrent neural network model and the convolutional neural network. Notably, this recurrent neural network model has a more consistent performance than the convolutional neural network, but comparable performance results overall. For the other recurrent neural network architectures, lower performance results are observed, which is connected to the complexity of these models. Furthermore, analyzing the performance results of a model using a combination of a convolutional neural network and a recurrent neural network shows that the combination performs significantly better, with a 4.9% performance increase compared to using only the recurrent neural network.

    Conclusions. This thesis concludes that recurrent neural networks are likely a viable alternative to convolutional neural networks in document semantic segmentation but that further investigation is required. Furthermore, by combining a convolutional neural network with a recurrent neural network it is concluded that the performance of a recurrent neural network model is significantly increased.

    Download full text (pdf)
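    The thesis evaluates models with Intersection over Union; a minimal sketch of that metric for multi-class segmentation masks (standard definition, toy data, not the thesis code) looks like this:

```python
# Standard per-class IoU averaged over classes; toy 4x4 masks.
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, n_classes: int) -> float:
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# 0 = background, 1 = text region, 2 = decoration (toy labels).
target = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 1, 1], [2, 2, 0, 0]])
pred   = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [2, 2, 1, 1], [2, 0, 0, 0]])
print(f"mean IoU = {mean_iou(pred, target, 3):.3f}")
```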
  • 18.
    Aivars, Sablis
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Benefits of transactive memory systems in large-scale development, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 80 credits / 120 HE credits. Student thesis
    Abstract [en]

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise as one of the critical resources for high-quality work.

    Objectives. We aim at understanding whether software teams in different contexts develop transactive memory systems (TMS) and whether a well-developed TMS leads to performance benefits, as suggested by research conducted in other knowledge-intensive disciplines. Because multiple factors may influence the development of TMS, based on related TMS literature we also suggest focusing on task allocation strategies, task characteristics and management decisions regarding the project structure, team structure and team composition.

    Methods. We use data from two large-scale distributed development companies and 9 teams, including quantitative data collected through a survey and qualitative data from interviews, to measure transactive memory systems and their role in determining team performance. We measure teams' TMS with a latent variable model. Finally, we use focus group interviews to analyze different organizational practices with respect to team management, as a set of decisions based on two aspects: team structure and composition, and task allocation.

    Results. Data from two companies and 9 teams are analyzed, and a positive influence of well-developed TMS on team performance is found. We found that in large-scale software development, teams need not only a well-developed internal TMS, but also a well-developed and effective external TMS. Furthermore, we identified practices that help or hinder the development of TMS in large-scale projects.

    Conclusions. Our findings suggest that teams working in large-scale software development can achieve performance benefits if transactive memory practices within the team are supported with networking practices in the organization. 

    Download full text (pdf)
  • 19.
    Akkineni, Srinivasu
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    The impact of RE process factors and organizational factors during alignment between RE and V&V: Systematic Literature Review and Survey, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: Requirements engineering (RE) and verification and validation (V&V) are areas that need to be integrated to assure the successful development of a software project. Activating both competences in the early stages of the project will help products meet customer expectations regarding quality and functionality. This quality can, however, only be achieved by aligning RE and V&V. There are different practices, such as requirements, verification, validation, control and tool practices, that organizations follow to achieve alignment and to address the different challenges faced during the alignment between RE and V&V. However, studies are needed to understand the alignment practices, challenges and factors that can enable successful alignment between RE and V&V.

    Objectives: In this study, an exploratory investigation is carried out to understand the impact of factors, i.e., RE process and organizational factors, during the alignment between RE and V&V. The main objectives of this study are:

    1. To find the list of RE practices that facilitate alignment between RE and V&V.
    2. To categorize RE practices with respect to their requirement phases.
    3. To find the list of RE process and organizational factors that influence alignment between RE and V&V besides their impact.
    4. To identify the challenges that are faced during the alignment between RE and V&V.
    5. To obtain list of challenges that are addressed by RE practices during the alignment between RE and V&V.

    Methods: In this study, a Systematic Literature Review (SLR) was conducted using a snowballing procedure to identify relevant information about RE practices, challenges, RE process factors and organizational factors. The studies were retrieved from the Engineering Village database. Rigor and relevance analysis was performed to assess the quality of the studies obtained through the SLR. Further, a questionnaire intended for an industrial survey was prepared from the gathered literature and distributed to practitioners from the software industry in order to collect empirical information for this study. Thereafter, the data obtained from the industrial survey were analyzed using statistical analysis and the chi-square significance test.

    Results: 20 studies relevant to this study were identified through the SLR. After analyzing the obtained studies, lists of RE process factors, organizational factors, challenges and RE practices during alignment between RE and V&V were compiled. Thereupon, an industrial survey was conducted based on the obtained literature, which received 48 responses. The survey respondents confirmed that RE process factors and organizational factors have an impact on the alignment between RE and V&V. Moreover, this study identifies additional RE process factors and organizational factors in the alignment between RE and V&V, besides their impact. Another contribution is addressing challenges left unaddressed by the RE practices obtained through the literature. Additionally, a validation of the categorized RE practices with respect to their requirement phases is carried out.

    Conclusions: To conclude, the results of this study will help practitioners capture more insight into the alignment between RE and V&V. This study identified the impact of RE process factors and organizational factors during the alignment between RE and V&V, along with the importance of the challenges faced during that alignment. It also addressed the challenges left unaddressed by RE practices obtained through the literature. Respondents of the survey believe that many RE process and organizational factors have a negative impact on the alignment between RE and V&V, depending on the size of the organization. In addition, the results for applying RE practices at different requirement phases were validated through the survey. Practitioners can identify the benefits from this research, and researchers can extend this study to the remaining alignment practices.

    Download full text (pdf)
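    The Methods above mention a chi-square significance test on survey responses. A minimal sketch of such a test (the contingency counts are invented for demonstration and are not the survey's data) could be:

```python
# Invented contingency table; only the test procedure is illustrated.
from scipy.stats import chi2_contingency

# Rows: organization size (small, large);
# columns: reported impact on alignment (negative, not negative).
table = [[18, 6],
         [9, 15]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value would indicate the reported impact depends on organization size.
```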
  • 20.
    Al burhan, Mohammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Differences between Dockerized Containers and Virtual Machines: A performance analysis for hosting web-applications in a virtualized environment, 2020. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    This is a bachelor thesis regarding the performance differences of hosting a web application in a virtualized environment. We compare virtual machines against containers and observe their resource usage in categories such as CPU, RAM and disk storage in idle state, and perform a range of computation experiments in which response times are measured over a series of request intervals. Response times are measured with the help of a web application created in Python. The experiments are performed under both normal and stressed conditions to give a better indication of which virtualized environment outperforms the other in different scenarios.

    The results show that virtual machines and containers remained close to each other in response times during the first request interval, but the containers outperformed virtual machines in terms of resource usage while in idle state, placing less of a burden on the host computer. They were also significantly faster in terms of response times. This is most noticeable under stressed conditions, in which the virtual machine's response times almost doubled.

    Download full text (pdf)
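    A minimal sketch of the measurement idea described above, timing HTTP responses over batches of requests; the URL, batch sizes and interpretation of "request intervals" are placeholders, not the thesis setup:

```python
# Placeholder endpoint; only the timing approach is illustrated.
import time
import urllib.request

URL = "http://localhost:8000/"  # hypothetical Python web application

def measure(n_requests: int) -> list[float]:
    times = []
    for _ in range(n_requests):
        start = time.perf_counter()
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        times.append((time.perf_counter() - start) * 1000)  # milliseconds
    return times

for batch in (10, 50, 100):
    samples = measure(batch)
    print(f"{batch} requests: avg {sum(samples)/len(samples):.1f} ms, "
          f"max {max(samples):.1f} ms")
```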
  • 21.
    AL Halbouni, Hadi
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Hansen, Frank
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Scenario-Based evaluation of Game Architecture, 2020. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    When developers or organizations need to develop a game, simulation or a similar project, they face the question of whether or not to use a game engine, as well as the question of which one to use. Are all game engines the same, or does the architecture change, and how does game design differ between various game engines? The objective of this thesis is to research these questions, give a concrete understanding of the impact of picking one engine over another and of how each engine influences the way games are developed, and answer some more specific questions regarding architecture and usability.

    A project was designed with the goal of developing a game. This game was developed by two separate teams over a period of 6 weeks, using two different game engines. The development was split into separate iterations done simultaneously by the teams, and questionnaires were filled in to gather data. The game engines used for the projects had similarities but also differences. Each engine offered ways to speed up development by allowing the developer to reuse and distribute changes among objects to reduce work. The differences caused one engine's code architecture to be more complex than the other's while allowing a better code structure, and added more time to learn how the engine handles certain things such as collisions.

    In conclusion, it is important to properly evaluate different game engines depending on the project a developer or organization is creating; not evaluating this properly will impact development speed and project complexity. Even though the engines have their differences, there is no single superior game engine, as it all depends on the project being developed. The game developed for this project only touched on certain areas related to 2D games.

    Download full text (pdf)
  • 22.
    Alahyari, Hiva
    et al.
    Chalmers; Göteborgs Universitet, SWE.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A study of value in agile software development organizations, 2017. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 125, p. 271-288. Article in journal (Refereed)
    Abstract [en]

    The Agile manifesto focuses on the delivery of valuable software. In Lean, the principles emphasise value, where every activity that does not add value is seen as waste. Despite the strong focus on value, and that the primary critical success factor for software intensive product development lies in the value domain, no empirical study has investigated specifically what value is. This paper presents an empirical study that investigates how value is interpreted and prioritised, and how value is assured and measured. Data was collected through semi-structured interviews with 23 participants from 14 agile software development organisations. The contribution of this study is fourfold. First, it examines how value is perceived amongst agile software development organisations. Second, it compares the perceptions and priorities of the perceived values by domains and roles. Third, it includes an examination of what practices are used to achieve value in industry, and what hinders the achievement of value. Fourth, it characterises what measurements are used to assure, and evaluate value-creation activities.

  • 23.
    Alahyari, Hiva
    et al.
    Chalmers, SWE.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An exploratory study of waste in software development organizations using agile or lean approaches: A multiple case study at 14 organizations, 2019. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 107, p. 78-94. Article in journal (Refereed)
    Abstract [en]

    Context: The principal focus of lean is the identification and elimination of waste from the process with respect to maximizing customer value. Similarly, the purpose of agile is to maximize customer value and minimize unnecessary work and time delays. In both cases the concept of waste is important. Through an empirical study, we explore how waste is approached in agile software development organizations. Objective: This paper explores the concept of waste in agile/lean software development organizations and how it is defined, used, prioritized, reduced, or eliminated in practice. Method: The data were collected using semi-structured open interviews. 23 practitioners from 14 embedded software development organizations were interviewed, representing two core roles in each organization. Results: Various wastes, categorized into 10 different categories, were identified by the respondents. Not all of the mentioned wastes were necessarily waste per se; some could be symptoms caused by wastes. Of the seven wastes of lean, Task-switching was ranked as the most important, and Extra-features as the least important, waste according to the respondents' opinions. However, most companies do not have their own, or use an established, definition of waste; more importantly, very few actively identify or try to eliminate waste in their organizations beyond local initiatives at the project level. Conclusion: In order to identify, recognize and eliminate waste, a common understanding and a joint and holistic view of the concept are needed. It is also important to optimize the whole organization and the whole product, as waste on one level can be important on another; thus sub-optimization should be avoided. Furthermore, to achieve sustainable and effective waste handling, both the short-term and the long-term perspectives need to be considered. © 2018 Elsevier B.V.

  • 24. Alegroth, Emil
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kolstrom, Pirjo
    Maintenance of automated test suites in industry: An empirical study on Visual GUI Testing, 2016. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 73, p. 66-80. Article in journal (Refereed)
    Abstract [en]

    Context: Verification and validation (V&V) activities make up 20-50% of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs, but available research only provides limited empirical data from industrial practice about the maintenance costs of automated tests and what factors affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported where interviews about, and empirical work with, Visual GUI Testing are performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors are observed that affect maintenance, e.g. tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance, but also that frequent maintenance is less costly than infrequent, big-bang maintenance. In addition, a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower the overall software development costs of a project while also having positive effects on software quality. However, maintenance costs can still be considerable, and the less time a company currently spends on manual testing, the more time is required before positive, economic ROI is reached after automation. (C) 2016 Elsevier B.V. All rights reserved.
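    The paper presents a cost model for estimating the time to positive ROI; that model is not reproduced here, but a simplified break-even calculation in the same spirit (all numbers hypothetical) is:

```python
# Simplified break-even sketch, not the paper's cost model.
def cycles_to_positive_roi(dev_cost: float, manual_cost: float,
                           maintenance_cost: float) -> float:
    """Test cycles until the automation development cost is repaid.

    dev_cost:         one-off cost of developing the automated suite (hours)
    manual_cost:      cost of one manual test cycle (hours)
    maintenance_cost: cost of maintaining the automated suite per cycle (hours)
    """
    savings_per_cycle = manual_cost - maintenance_cost
    if savings_per_cycle <= 0:
        return float("inf")  # automation never pays off
    return dev_cost / savings_per_cycle

# Hypothetical numbers: 400 h to automate, 40 h per manual cycle, 10 h upkeep.
print(f"Positive ROI after {cycles_to_positive_roi(400, 40, 10):.1f} test cycles")
```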

  • 25. Alegroth, Emil
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ryrholm, Lisa
    Visual GUI testing in practice: challenges, problems and limitations, 2015. In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 20, no 3, p. 694-744. Article in journal (Refereed)
    Abstract [en]

    In today's software development industry, high-level tests such as Graphical User Interface (GUI) based system and acceptance tests are mostly performed with manual practices that are often costly, tedious and error prone. Test automation has been proposed to solve these problems but most automation techniques approach testing from a lower level of system abstraction. Their suitability for high-level tests has therefore been questioned. High-level test automation techniques such as Record and Replay exist, but studies suggest that these techniques suffer from limitations, e.g. sensitivity to GUI layout or code changes, system implementation dependencies, etc. Visual GUI Testing (VGT) is an emerging technique in industrial practice with perceived higher flexibility and robustness to certain GUI changes than previous high-level (GUI) test automation techniques. The core of VGT is image recognition which is applied to analyze and interact with the bitmap layer of a system's front end. By coupling image recognition with test scripts, VGT tools can emulate end user behavior on almost any GUI-based system, regardless of implementation language, operating system or platform. However, VGT is not without its own challenges, problems and limitations (CPLs) but, like for many other automated test techniques, there is a lack of empirically-based knowledge of these CPLs and how they impact industrial applicability. Crucially, there is also a lack of information on the cost of applying this type of test automation in industry. This manuscript reports an empirical, multi-unit case study performed at two Swedish companies that develop safety-critical software. It studies their transition from manual system test cases into tests automated with VGT. In total, four different test suites that together include more than 300 high-level system test cases were automated for two multi-million lines of code systems. The results show that the transitioned test cases could find defects in the tested systems and that all applicable test cases could be automated. However, during these transition projects a number of hurdles had to be addressed; a total of 58 different CPLs were identified and then categorized into 26 types. We present these CPL types and an analysis of the implications for the transition to and use of VGT in industrial software development practice. In addition, four high-level solutions are presented that were identified during the study, which would address about half of the identified CPLs. Furthermore, collected metrics on cost and return on investment of the VGT transition are reported together with information about the VGT suites' defect finding ability. Nine of the identified defects are reported, 5 of which were unknown to testers with extensive experience from using the manual test suites. The main conclusion from this study is that even though there are many challenges related to the transition and usage of VGT, the technique is still valuable, flexible and considered cost-effective by the industrial practitioners. The presented CPLs also provide decision support in the use and advancement of VGT and potentially other automated testing techniques similar to VGT, e.g. Record and Replay.

    Download full text (pdf)
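    The core of VGT, image recognition applied to the bitmap layer, can be sketched with standard template matching; this is a generic illustration (file names and threshold are assumptions) rather than any specific VGT tool's API:

```python
# Generic OpenCV template matching; not a specific VGT tool.
import cv2

screenshot = cv2.imread("screenshot.png")  # hypothetical input files
button = cv2.imread("ok_button.png")

result = cv2.matchTemplate(screenshot, button, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.9:  # similarity threshold, tolerant to minor rendering changes
    h, w = button.shape[:2]
    center = (max_loc[0] + w // 2, max_loc[1] + h // 2)
    print(f"Widget found at {center}; a VGT script would now click this point")
else:
    print("Widget not found; the test script would report a failure")
```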
  • 26.
    Alexander, Granhof
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Jakob, Eriksson
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Improving the User Experience in Data Visualization Web Applications, 2021. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    This paper is a literature study with an additional empirical component that researches how to improve the user experience in data visualization web applications. The research was conducted in collaboration with Caretia AB to improve their current data visualization tool. It reviews previous research on the topics of UI design, user experience, visual complexity and user interaction in an attempt to discover which areas of design and intuitivity improve the user experience in these kinds of tools. The findings were then tested together with Caretia through a proof-of-concept prototype application implemented with those findings. The conclusion is that mapping ontology groups and prior experience, as well as reducing visual overload, are effective ways of improving intuitivity and user experience.

    Download full text (pdf)
  • 27.
    Alexandre, Rui Carlos Josino
    et al.
    UNIFESP, Brazil.
    Martins, Luiz Eduardo Galvao
    UNIFESP, Brazil.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Cybersecurity Risk Assessment for Medium-Risk Drones: A Systematic Literature Review, 2023. In: IEEE Aerospace and Electronic Systems Magazine, ISSN 0885-8985, E-ISSN 1557-959X, Vol. 38, no 6, p. 28-43. Article, review/survey (Refereed)
    Abstract [en]

    The increased demand for Remotely Piloted Aircraft Systems (RPAS) in Beyond Visual Line-Of-Sight (BVLOS) operations gives rise to a set of concerns regarding cybersecurity that, if not addressed, can lead to the unsafe operation of RPAS. To assist the airworthiness evaluation that is performed by Civil Aviation Authorities (CAAs), we identified several processes that are used to evaluate the cybersecurity of RPAS. We conducted a Systematic Literature Review (SLR), selecting 30 papers (out of 211 screened) that were published during the past five years. The results of our SLR indicate the importance of cybersecurity to the safe operation of RPAS. It is evident that there is a lack of a systematic process to enable a cybersecurity review of RPAS. We observe that common cyber threats to RPAS are related to jamming, spoofing, and DoS/DDoS (denial of service/distributed denial of service) attacks. Processes relevant to the assessment of RPAS cybersecurity exist; however, from our perspective they differ in their safety concerns. In addition, with only one exception, the methods have not been used, or their use has not been reported, in industrial applications. The most frequently cited vulnerabilities are those related to GPS and datalinks.

  • 28.
    Ali, Nauman Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Is effectiveness sufficient to choose an intervention?: Considering resource use in empirical software engineering2016In: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM 2016, Ciudad Real, Spain, September 8-9, 2016, 2016, article id 54Conference paper (Refereed)
  • 29.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Edison, Henry
    Lero - The Irish Software Engineering Research Centre, IRL.
    Torkar, Richard
    Chalmers and University of Gothenburg, SWE.
    The impact of a proposal for innovation measurement in the software industry2020In: International Symposium on Empirical Software Engineering and Measurement, IEEE Computer Society, 2020, article id 3422163Conference paper (Refereed)
    Abstract [en]

    Background: Measuring an organization's capability to innovate and assessing its innovation output and performance is a challenging task. Previously, a comprehensive model and a suite of measurements to support this task were proposed. Aims: In the current paper, seven years since the publication of the paper titled "Towards innovation measurement in the software industry", we have reflected on the impact of the work. Method: We have mainly relied on quantitative and qualitative analysis of the citations of the paper using an established classification schema. Results: We found that the article has had a significant scientific impact (indicated by the number of citations), i.e., (1) cited in literature from both software engineering and other fields, (2) cited in grey literature and peer-reviewed literature, and (3) substantial citations in literature not published in the English language. However, we consider a majority of the citations in the peer-reviewed literature (75 out of 116) as neutral, i.e., they have not used the innovation measurement paper in any substantial way. All in all, 38 out of 116 have used, modified or based their work on the definitions, measurements or the model proposed in the article. This analysis revealed a significant weakness of the citing work, i.e., among the citing papers, we found only two explicit comparisons to the innovation measurement proposal, and we found no papers that identify weaknesses of said proposal. Conclusions: This work highlights the need for being cautious of relying solely on the number of citations for understanding impact, and the need for further improving and supporting the peer-review process to identify unwarranted citations in papers. © 2020 IEEE Computer Society. All rights reserved.

    Download full text (pdf)
    fulltext
  • 30.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Engström, Emelie
    Lund University, SWE.
    Taromirad, Masoumeh
    Halmstad University, SWE.
    Mousavi, Muhammad Raza
    Halmstad University, SWE.
    Minhas, Nasir Mehmood
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Helgesson, Daniel
    Lund University, SWE.
    Kunze, Sebastian
    Halmstad University, SWE.
    Varshosaz, Mahsa
    Halmstad University, SWE.
    On the search for industry-relevant regression testing research2019In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 24, no 4, p. 2020-2055Article in journal (Refereed)
    Abstract [en]

    Regression testing is a means to assure that a change in the software, or its execution environment, does not introduce new defects. It involves the expensive undertaking of rerunning test cases. Several techniques have been proposed to reduce the number of test cases to execute in regression testing, however, there is no research on how to assess industrial relevance and applicability of such techniques. We conducted a systematic literature review with the following two goals: firstly, to enable researchers to design and present regression testing research with a focus on industrial relevance and applicability and secondly, to facilitate the industrial adoption of such research by addressing the attributes of concern from the practitioners' perspective. Using a reference-based search approach, we identified 1068 papers on regression testing. We then reduced the scope to only include papers with explicit discussions about relevance and applicability (i.e. mainly studies involving industrial stakeholders). Uniquely in this literature review, practitioners were consulted at several steps to increase the likelihood of achieving our aim of identifying factors important for relevance and applicability. We have summarised the results of these consultations and an analysis of the literature in three taxonomies, which capture aspects of industrial relevance regarding the regression testing techniques. Based on these taxonomies, we mapped 38 papers reporting the evaluation of 26 regression testing techniques in industrial settings.

    Download full text (pdf)
    fulltext
  • 31.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    FLOW-assisted value stream mapping in the early phases of large-scale software development2016In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 111, p. 213-227Article in journal (Refereed)
    Abstract [en]

    Value stream mapping (VSM) has been successfully applied in the context of software process improvement. However, its current adaptations from Lean manufacturing focus mostly on the flow of artifacts and have taken no account of the essential information flows in software development. A solution specifically targeted toward information flow elicitation and modeling is FLOW. This paper aims to propose and evaluate the combination of VSM and FLOW to identify and alleviate information and communication related challenges in large-scale software development. Using case study research, FLOW-assisted VSM was used for a large product at Ericsson AB, Sweden. Both the process and the outcome of FLOW-assisted VSM have been evaluated from the practitioners’ perspective. It was noted that FLOW helped to systematically identify challenges and improvements related to information flow. Practitioners responded favorably to the use of VSM and FLOW, acknowledged the realistic nature and impact on the improvement on software quality, and found the overview of the entire process using the FLOW notation very useful. The combination of FLOW and VSM presented in this study was successful in systematically uncovering issues and characterizing their solutions, indicating their practical usefulness for waste removal with a focus on information flow related issues.

    Download full text (pdf)
    fulltext
  • 32.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Tanveer, Binish
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Comparison of Citation Sources for Reference and Citation-Based Search in Systematic Literature Reviews2022In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 16, no 1, article id 220106Article, review/survey (Refereed)
    Abstract [en]

    Context: In software engineering, snowball sampling has been used as a supplementary and primary search strategy. The current guidelines recommend using Google Scholar (GS) for snowball sampling. However, the use of GS presents several challenges when using it as a source for citations and references. Objective: To compare the effectiveness and usefulness of two leading citation databases (GS and Scopus) for use in snowball sampling search. Method: We relied on a published study that has used snowball sampling as a search strategy and GS as the citation source. We used its primary studies to compute precision and recall for Scopus. Results: In this particular case, Scopus was highly effective with 95% recall and had better precision of 5.1% compared to GS's 2.8%. Moreover, Scopus found nine additional relevant papers. On average, one would read approximately 15 extra papers in GS than in Scopus to identify one additional relevant paper. Furthermore, Scopus supports batch downloading of both citations and papers' references, has better quality metadata, and provides better source filtering. Conclusion: This study suggests that Scopus seems to be more effective and useful for snowball sampling than GS for systematic secondary studies attempting to identify peer-reviewed literature. © 2022 The Authors.
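
    The reading-effort figure above follows directly from the definition of precision; the short worked sketch below (in Python, using only the percentages reported in the abstract, so the absolute set sizes are not needed) makes the arithmetic explicit.

    # Precision = relevant retrieved / total retrieved, so the expected
    # number of papers read per relevant paper found is 1 / precision.
    precision_gs = 0.028      # Google Scholar, as reported above
    precision_scopus = 0.051  # Scopus, as reported above

    papers_per_relevant_gs = 1 / precision_gs          # ~35.7 papers
    papers_per_relevant_scopus = 1 / precision_scopus  # ~19.6 papers

    # Extra papers read in GS per additional relevant paper found:
    extra = papers_per_relevant_gs - papers_per_relevant_scopus
    print(f"{extra:.1f}")  # ~16; matches the ~15 above, given rounding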

    Download full text (pdf)
    fulltext
  • 33.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A critical appraisal tool for systematic literature reviews in software engineering2019In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 112, p. 48-50Article, review/survey (Refereed)
    Abstract [en]

    Context: Methodological research on systematic literature reviews (SLRs) in Software Engineering (SE) has so far focused on developing and evaluating guidelines for conducting systematic reviews. However, the support for quality assessment of completed SLRs has not received the same level of attention. Objective: To raise awareness of the need for a critical appraisal tool (CAT) for assessing the quality of SLRs in SE. To initiate a community-based effort towards the development of such a tool. Method: We reviewed the literature on the quality assessment of SLRs to identify the frequently used CATs in SE and other fields. Results: We identified that the CATs currently used in SE were borrowed from medicine, but have not kept pace with substantial advancements in the field of medicine. Conclusion: In this paper, we have argued the need for a CAT for quality appraisal of SLRs in SE. We have also identified a tool that has the potential for application in SE. Furthermore, we have presented our approach for adapting this state-of-the-art CAT for assessing SLRs in SE. © 2019 The Authors

    Download full text (pdf)
    fulltext
  • 34.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Reliability of search in systematic reviews: Towards a quality assessment framework for the automated-search strategy2018In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 99, p. 133-147Article in journal (Refereed)
    Abstract [en]

    Context: The trust in systematic literature reviews (SLRs) to provide credible recommendations is critical for establishing evidence-based software engineering (EBSE) practice. The reliability of SLR as a method is not a given and largely depends on the rigor of the attempt to identify, appraise and aggregate evidence. Previous research, by comparing SLRs on the same topic, has identified search as one of the reasons for discrepancies in the included primary studies. This affects the reliability of an SLR, as the papers identified and included in it are likely to influence its conclusions. Objective: We aim to propose a comprehensive evaluation checklist to assess the reliability of an automated-search strategy used in an SLR. Method: Using a literature review, we identified guidelines for designing and reporting automated-search as a primary search strategy. Using the aggregated design, reporting and evaluation guidelines, we formulated a comprehensive evaluation checklist. The value of this checklist was demonstrated by assessing the reliability of search in 27 recent SLRs. Results: Using the proposed evaluation checklist, several additional issues (not captured by the current evaluation checklist) related to the reliability of search in recent SLRs were identified. These issues severely limit the coverage of literature by the search and also the possibility to replicate it. Conclusion: Instead of solely relying on expensive replications to assess the reliability of SLRs, this work provides means to objectively assess the likely reliability of a search-strategy used in an SLR. It highlights the often-assumed aspect of repeatability of search when using automated-search. Furthermore, by explicitly considering repeatability and consistency as sub-characteristics of a reliable search, it provides a more comprehensive evaluation checklist than the ones currently used in EBSE. © 2018 Elsevier B.V.

  • 35.
    Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evaluating strategies for study selection in systematic literature studies2014In: ESEM '14 Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ACM , 2014, Vol. article 45Conference paper (Refereed)
    Abstract [en]

    Context: The study selection process is critical to improve the reliability of secondary studies. Goal: To evaluate the selection strategies commonly employed in secondary studies in software engineering. Method: Building on these strategies, a study selection process was formulated and evaluated in a systematic review. Results: The selection process used a more inclusive strategy than the one typically used in secondary studies, which led to additional relevant articles. Conclusions: The results indicate that a good-enough sample could be obtained by following a less inclusive but more efficient strategy, if the articles identified as relevant for the study are a representative sample of the population, and there is a homogeneity of results and quality of the articles.

    Download full text (pdf)
    fulltext
  • 36.
    Allberg, Petrus
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Applied machine learning in the logistics sector: A comparative analysis of supervised learning algorithms2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background: Machine learning is an area that is being explored with great haste these days, which inspired this study to investigate how seven different supervised learning algorithms perform compared to each other. These algorithms were used to perform classification tasks on logistics consignments; the classification is binary, and a consignment can be classified as either missed or not.

    Objectives: The goal was to find which of these algorithms perform well when used for this classification task and to see how the results varied with differently sized datasets. The importance of the features included in the datasets was analyzed with the intention of finding whether there is any connection between human errors and these missed consignments.

    Methods: The process from raw data to a predicted classification has many steps, including data gathering, data preparation, feature investigation and more. Through cross-validation, the algorithms were all trained and tested on the same datasets and then evaluated based on the metrics recall and accuracy.

    Results: The scores on both metrics increase with the size of the datasets, and when comparing the seven algorithms, two do not perform on par with the other five, which all perform roughly the same.

    Conclusions: Any of the five algorithms mentioned above can be chosen for this type of classification, or for further study based on other measurements, and there is an indication that human errors could play a part in whether a consignment gets classified as missed or not.
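
    The comparison procedure the thesis describes (training and testing all algorithms on the same datasets via cross-validation, then scoring recall and accuracy) can be sketched as below. This is an illustrative reconstruction with scikit-learn and synthetic data, not the thesis's actual code; the particular algorithms shown are assumptions.

    # Sketch: cross-validated comparison of supervised classifiers on a
    # binary task (consignment missed / not missed). Illustrative only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_validate
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for the consignment dataset.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "decision tree": DecisionTreeClassifier(random_state=0),
        "random forest": RandomForestClassifier(random_state=0),
    }

    # Every model is trained and tested on the same folds, then compared
    # on the two metrics used in the thesis: accuracy and recall.
    for name, model in models.items():
        scores = cross_validate(model, X, y, cv=5,
                                scoring=("accuracy", "recall"))
        print(f"{name}: accuracy={scores['test_accuracy'].mean():.3f}, "
              f"recall={scores['test_recall'].mean():.3f}")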

    Download full text (pdf)
    fulltext
  • 37.
    Almroth, Tobias
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Data visualization for the modern web: A look into tools and techniques for visualizing data in Angular 5 applications2018Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This paper looks into how data is best visualized and how visualizations should be designed to be most easily perceived. Furthermore, the study looks into what tools are available on the market today for visualizing data in Angular 5 applications. With regard to a client, a developer team from the Swedish Police IT department, the tools are evaluated and the one most suitable for the client is identified. The paper also looks into how a dynamic data solution can be developed in Angular 5: a solution where data can be selected in one component and displayed in another.

    To answer these questions, a study of previous research into data visualization was conducted, together with a review of how Angular 5 applications can be developed. Interviews with the clients were held, where their specific requirements on visualization tools were identified. After searching for and listing available visualization tools on the market, the tools were evaluated against the client's requirements and a prototype application was developed, showcasing both the most suitable tool and its integration, as well as a dynamic data solution in Angular 5.

    In conclusion, data visualizations should be made as simple as possible, with the main focus on the data. When it comes to tools, the one most suitable for the client was Chart.js, which integrated easily into an Angular 5 application; an application that, thanks to Angular's features, is well equipped for handling and developing dynamic data solutions.

    Download full text (pdf)
    BTH2018Almroth
  • 38.
    Alégroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Extending the Boundaries of Higher Education through Digitalization: On the best practices of Onlineand Blended Learning2020Report (Other (popular science, discussion, etc.))
    Abstract [en]

    Accessibility of higher education has never been more important, and using online teaching, or e-learning, is a suitable way of achieving this access. However, online education presents new challenges for teachers, which require best practices to overcome.

    Download full text (pdf)
    Emil Alegroth_Extending the boundaries of Higher Education
    Download (pdf)
    Presentation_Extending the Boundaries of Higher Education through Digitalization
  • 39.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ardito, Luca
    Politecnico di Torino, Corso Duca degli Abruzzi, ITA.
    Coppola, Riccardo
    Politecnico di Torino, Corso Duca degli Abruzzi, ITA.
    Feldt, Robert
    Chalmers University of Technology, SWE.
    Special issue on new generations of UI testing2021In: Software testing, verification & reliability, ISSN 0960-0833, E-ISSN 1099-1689, Vol. 31, no 3, article id e1770Article in journal (Other academic)
    Download full text (pdf)
    fulltext
  • 40.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Chalmers, SWE.
    On the long-term use of visual gui testing in industrial practice: a case study2017In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 22, no 6, p. 2937-2971Article in journal (Refereed)
    Abstract [en]

    Visual GUI Testing (VGT) is a tool-driven technique for automated GUI-based testing that uses image recognition to interact with and assert the correctness of the behavior of a system through its GUI as it is shown to the user. The technique’s applicability, e.g. defect-finding ability, and feasibility, e.g. time to positive return on investment, have been shown through empirical studies in industrial practice. However, there is a lack of studies that evaluate the usefulness and challenges associated with VGT when used long-term (years) in industrial practice. This paper evaluates how VGT was adopted, applied and why it was abandoned at the music streaming application development company, Spotify, after several years of use. A qualitative study with two workshops and five well-chosen employees is performed at the company, supported by a survey, which is analyzed with a grounded theory approach to answer the study’s three research questions. The interviews provide insights into the challenges, problems and limitations, but also benefits, that Spotify experienced during the adoption and use of VGT. However, due to the technique’s drawbacks, VGT has been abandoned for a new technique/framework, simply called the Test interface. The Test interface is considered more robust and flexible for Spotify’s needs but has several drawbacks, including that it does not test the actual GUI as shown to the user like VGT does. From the study’s results it is concluded that VGT can be used long-term in industrial practice but it requires organizational change as well as engineering best practices to be beneficial. Through synthesis of the study’s results, and results from previous work, a set of guidelines is presented that aims to aid practitioners in adopting and using VGT in industrial practice. However, due to the abandonment of the technique, future research is required to analyze in what types of projects the technique is, and is not, long-term viable. To this end, we also present Spotify’s Test interface solution for automated GUI-based testing and conclude that it has its own benefits and drawbacks.

  • 41.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards a mapping of software technical debt onto testware2017In: Proceedings - 43rd Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2017, Institute of Electrical and Electronics Engineers Inc. , 2017, p. 404-411, article id 8051379Conference paper (Refereed)
    Abstract [en]

    Technical Debt (TD) is a metaphor used to explain the negative impacts that sub-optimal design decisions have in the long-term perspective of a software project. Although TD is acknowledged by both researchers and practitioners to have a strong negative impact on software development, its study on Testware has so far been very limited, a gap in knowledge that is important to address due to the growing popularity of Testware (scripted automated testing) in software development practice. In this paper we present a mapping analysis that connects 21 well-known, object-oriented Software TD items to Testware, establishing them as Testware Technical Debt (TTD) items. The analysis indicates that most Software TD items are applicable or observable as TTD items, often in similar form and with roughly the same impact as for Software artifacts (e.g. reducing the quality of the produced artifacts, lowering the effectiveness and efficiency of the development process whilst increasing costs). In the analysis, we also identify three types of connections between Software TD and TTD items with varying levels of impact and criticality. Additionally, the study finds support for previous research results in which specific TTD items unique to Testware were identified. Finally, the paper outlines several areas of future research into TTD. © 2017 IEEE.

  • 42.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mattsson, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Characteristics that affect Preference of Decision Models for Asset Selection: An Industrial Questionnaire Survey2020In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 28, no 4, p. 1675-1707Article in journal (Refereed)
    Abstract [en]

    Modern software development relies on a combination of development and re-use of technical assets, e.g. software components, libraries and APIs. In the past, re-use was mostly conducted with internal assets, but today external assets, such as open source, commercial off-the-shelf (COTS) and assets developed through outsourcing, are also common. This access to more asset alternatives presents new challenges regarding which assets to optimally choose and how to make this decision. To support decision-makers, decision theory has been used to develop decision models for asset selection. However, very little industrial data has been presented in the literature about the usefulness, or even perceived usefulness, of these models. Additionally, only limited information has been presented about which model characteristics determine practitioner preference towards one model over another.

    Objective: The objective of this work is to evaluate which characteristics of decision models for asset selection determine industrial practitioners' preference for a model, when given the choice between a decision model with high precision and one with high speed.

    Method: An industrial questionnaire survey is performed where a total of 33 practitioners, of varying roles, from 18 companies are tasked to compare two decision models for asset selection. Textual analysis and formal and descriptive statistics are then applied to the survey responses to answer the study's research questions.

    Results: The study shows that the practitioners had a clear preference for the decision model that emphasised speed over the one that emphasised decision precision. This preference was traced to the preferred model being perceived as faster, having lower complexity, being more flexible in use for different decisions, being more agile in how it could be used in operation, its emphasis on people, its emphasis on "good enough" precision, and its ability to fail fast if a decision turned out to be a failure; hence, seven characteristics that the practitioners considered important for their acceptance of the model.

    Conclusion: Industrial practitioner preference, which relates to acceptance, of decision models for asset selection depends on multiple characteristics that must be considered when developing a model for different types of decisions, such as operational day-to-day decisions as well as more critical tactical or strategic decisions. The main contribution of this work is the seven identified characteristics, which can serve as industrial requirements for future research on decision models for asset selection.

    Download full text (pdf)
    fulltext
  • 43.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mattsson, Michael
    Characteristics that affect Preference of Decision Models for Asset Selection: An Industrial Questionnaire Survey - Appendix A: Questionnaire Introduction. Decision-making in Practice / Appendix B: Survey results2019Data set
    Download full text (pdf)
    Appendix A: Questionnaire Introduction Decision-making in Practice
    Download full text (csv)
    Appendix B: Survey results
  • 44.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gustafsson, Johan
    SAAB AB, SWE.
    Ivarsson, Henrik
    SAAB AB, SWE.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Replicating Rare Software Failures with Exploratory Visual GUI Testing2017In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 34, no 5, p. 53-59, article id 8048660Article in journal (Refereed)
    Abstract [en]

    Saab AB developed software that had a defect that manifested itself only after months of continuous system use. After years of customer failure reports, the defect still persisted, until Saab developed failure replication based on visual GUI testing. © 1984-2012 IEEE.

  • 45.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Karl, Kristian
    Spotify, SWE.
    Rosshagen, Helena
    AddQ, SWE.
    Helmfridsson, Tomas
    AddQ, SWE.
    Olsson, Nils
    ArcticBlue, SWE.
    Practitioners' best practices to Adopt, Use or Abandon Model-based Testing with Graphical models for Software-intensive Systems2022In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 27, no 5, article id 103Article in journal (Refereed)
    Abstract [en]

    Model-based testing (MBT) has been extensively researched for software-intensive systems but, despite the academic interest, adoption of the technique in industry has been sparse. This phenomenon has been observed by our industrial partners for MBT with graphical models. They perceive one cause to be a lack of evidence-based MBT guidelines that, in addition to technical guidelines, also take non-technical aspects into account. This hypothesis is supported by a lack of such guidelines in the literature. Objective: The objective of this study is to elicit, and synthesize, MBT experts' best practices for MBT with graphical models. The results aim to give guidance to practitioners and aspire to give researchers new insights to inspire future research. Method: An interview survey is conducted using deep, semi-structured, interviews with an international sample of 17 MBT experts, in different roles, from software industry. Interview results are synthesised through semantic equivalence analysis and verified by MBT experts from industrial practice. Results: 13 synthesised conclusions are drawn from which 23 best-practice guidelines are derived for the adoption, use and abandonment of the technique. In addition, observations and expert insights are discussed that help explain the lack of wide-spread adoption of MBT with graphical models in industrial practice. Conclusions: Several technical aspects of MBT are covered by the results as well as conclusions that cover process- and organizational factors. These factors relate to the mindset, knowledge, organization, mandate and resources that enable the technique to be used effectively within an organization. The guidelines presented in this work complement existing knowledge and, as a primary objective, provide guidance for industrial practitioners to better succeed with MBT with graphical models.
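
    As an illustration of what "MBT with graphical models" means in practice, the sketch below encodes a system's behavior as a directed graph and generates an abstract test sequence by walking it. Tools such as GraphWalker implement this idea; the code is a hypothetical Python sketch under that assumption, not any particular tool's API, and the toy login model is invented for illustration.

    import random

    # A toy graphical model of a login flow: each state maps to outgoing
    # (action, next_state) edges. The model describes expected behavior.
    MODEL = {
        "LoggedOut": [("enter_credentials", "CredentialsEntered")],
        "CredentialsEntered": [("submit_valid", "LoggedIn"),
                               ("submit_invalid", "LoggedOut")],
        "LoggedIn": [("log_out", "LoggedOut")],
    }

    def random_walk(model, start, steps, seed=None):
        """Generate one abstract test case: a list of (action, state)."""
        rng = random.Random(seed)
        state, path = start, []
        for _ in range(steps):
            action, state = rng.choice(model[state])
            path.append((action, state))
        return path

    # In a real harness, each action would drive the SUT and each state
    # would be asserted against the SUT's observed state.
    for action, expected in random_walk(MODEL, "LoggedOut", 6, seed=1):
        print(f"do {action} -> expect {expected}")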

    Download full text (pdf)
    fulltext
  • 46.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Karlsson, Arvid
    Cilbuper IT, Gothenburg, SWE.
    Radway, Alexander
    Techship Krokslatts Fabriker, SWE.
    Continuous Integration and Visual GUI Testing: Benefits and Drawbacks in Industrial Practice2018In: Proceedings - 2018 IEEE 11th International Conference on Software Testing, Verification and Validation, ICST 2018, Institute of Electrical and Electronics Engineers Inc. , 2018, p. 172-181Conference paper (Refereed)
    Abstract [en]

    Continuous integration (CI) is growing in industrial popularity, spurred on by market trends towards faster delivery and higher quality software. A key facilitator of CI is automated testing that should be executed, automatically, on several levels of system abstraction. However, many systems lack the interfaces required for automated testing. Others lack test automation coverage of the system under test's (SUT) graphical user interface (GUI) as it is shown to the user. One technique that shows promise to solve these challenges is Visual GUI Testing (VGT), which uses image recognition to stimulate and assert the SUT's behavior. Research has presented the technique's applicability and feasibility in industry but only limited support, from an academic setting, that the technique is applicable in a CI environment. In this paper we present an industrial design research study with the objective of helping to bridge the gap in knowledge regarding VGT's applicability in a CI environment in industry. Results, acquired from interviews, observations and quantitative analysis of 17,567 test executions, collected over 16 weeks, show that VGT provides similar benefits to other automated test techniques for CI. However, several significant drawbacks, such as high costs, are also identified. The study concludes that, although VGT is applicable in an industrial CI environment, its severe challenges require more research and development before the technique becomes efficient in practice. © 2018 IEEE.

  • 47.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Matsuki, Shinsuke
    Veriserve Corporation, JPN.
    Vos, Tanja
    Open University of the Netherlands, NLD.
    Akemine, Kinji
    Nippon Telegraph and Telephone Corporation, JPN.
    Overview of the ICST International Software Testing Contest2017In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017, IEEE Computer Society, 2017, p. 550-551Conference paper (Refereed)
    Abstract [en]

    In the software testing contest, practitioners and researchers are invited to pit their test approaches against similar approaches to evaluate pros and cons and determine which is perceived to be the best. The 2017 iteration of the contest focused on Graphical User Interface-driven testing, which was evaluated on the testing tool TESTONA. The winner of the competition was announced at the closing ceremony of the International Conference on Software Testing, Verification and Validation (ICST), 2017. © 2017 IEEE.

  • 48.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersén, Elin
    Linköping University, SWE.
    Tinnerholm, John
    Linköping University, SWE.
    A Failed attempt at creating Guidelines for Visual GUI Testing: An industrial case study2021In: Proceedings - 2021 IEEE 14th International Conference on Software Testing, Verification and Validation, ICST 2021, Institute of Electrical and Electronics Engineers Inc. , 2021, p. 340-350, article id 9438551Conference paper (Refereed)
    Abstract [en]

    Software development is governed by guidelines that aim to improve the code's qualities, such as maintainability. However, whilst coding guidelines are commonplace for software, guidelines for testware are much less common. In particular, for GUI-based tests driven with image recognition, also referred to as Visual GUI Testing (VGT), explicit coding guidelines are missing. In this industrial case study, performed at the Swedish defence contractor Saab AB, we propose a set of coding guidelines for VGT and evaluate their impact on test scripts for an industrial, safety-critical system. To study the guidelines' effect on maintenance costs, five representative manual test cases are each translated with and without the proposed guidelines in the two VGT tools SikuliX and EyeAutomate. As such, 20 test scripts were developed, with a combined development cost of more than 100 man-hours. Three of the tests are then maintained by one researcher and two practitioners for another version of the system and costs are measured to evaluate return on investment. This analysis is complemented with observations and interviews to elicit practitioners' perceptions of and experiences with VGT. Results show that scripts developed with the guidelines had higher maintenance costs than scripts developed without guidelines. This is supported by qualitative results that many of the guidelines are considered inappropriate, superfluous or unnecessary due to the inherent properties of the scripts, e.g. their naturally small size, linear flows, natural separation of concerns, and more. We conclude that there are differences between VGT scripts and software that prohibit direct translation of guidelines between the two. As such, we consider our study a failure but argue that several lessons can be drawn from our results to guide future research into guidelines for VGT and GUI-based test automation. © 2021 IEEE.

    Download full text (pdf)
    fulltext
  • 49.
    Amaradri, Anand Srivatsav
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nutalapati, Swetha Bindu
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Continuous Integration, Deployment and Testing in DevOps Environment2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Owing to a multitude of factors like rapid changes in technology, market needs, and business competitiveness, software companies these days are facing pressure to deliver software rapidly and on a frequent basis. For frequent and faster delivery, companies should be lean and agile in all phases of the software development life cycle. An approach called DevOps, which is based on agile principles, has come into play. DevOps bridges the gap between development and operations teams and facilitates faster product delivery. The DevOps phenomenon has gained wide popularity in the past few years, and several companies are adopting DevOps to leverage its perceived benefits. However, organizations may face several challenges while adopting DevOps. There is a need to obtain a clear understanding of how DevOps functions in an organization.

    Objectives. The main aim of this study is to provide researchers and software practitioners with a clear understanding of how DevOps works in an organization. The objectives of the study are to identify the benefits of implementing DevOps in organizations where agile development is in practice, the challenges faced by organizations during DevOps adoption, the solutions/mitigation strategies to overcome these challenges, the DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Methods. A mixed methods approach having both qualitative and quantitative research methods is used to accomplish the research objectives. A Systematic Literature Review is conducted to identify the benefits and challenges of DevOps adoption, and the DevOps practices. Interviews are conducted to further validate the SLR findings, and to identify the solutions to overcome DevOps adoption challenges, and the DevOps practices. The SLR and interview results are mapped, and a survey questionnaire is designed. The survey is conducted to validate the qualitative data, and to identify other benefits and challenges of DevOps adoption, solutions to overcome the challenges, DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Results. 31 primary studies relevant to the research are identified for conducting the SLR. After analysing the primary studies, an initial list of the benefits and challenges of DevOps adoption, and the DevOps practices is obtained. Based on the SLR findings, a semi-structured interview questionnaire is designed, and interviews are conducted. The interview data is thematically coded, and a list of the benefits, challenges of DevOps adoption and solutions to overcome them, DevOps practices, and problems faced by DevOps teams is obtained. The survey responses are statistically analysed, and a final list of the benefits of adopting DevOps, the adoption challenges and solutions to overcome them, DevOps practices and problems faced by DevOps teams is obtained.

    Conclusions. Using the mixed methods approach, a final list of the benefits of adopting DevOps, DevOps adoption challenges, solutions to overcome the challenges, practices of DevOps, and the problems faced by DevOps teams during continuous integration, deployment and testing is obtained. The list is clearly elucidated in the document. The final list can aid researchers and software practitioners in obtaining a better understanding regarding the functioning and adoption of DevOps. Also, it has been observed that there is a need for more empirical research in this domain.

    Download full text (pdf)
    fulltext
  • 50.
    Ambala, Anvesh
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Exploring the Dynamics of Software Bill of Materials (SBOMs) and Security Integration in Open Source Projects2024Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. The rapid expansion of open-source software has introduced significant security challenges, particularly concerning supply chain attacks. Software supply chain attacks, such as the NotPetya attack, have underscored the critical need for robust security measures. Managing dependencies and protecting against such attacks have become important, leading to the emergence of Software Bill of Materials (SBOMs) as a crucial tool. SBOMs offer a comprehensive inventory of software components, aiding in identifying vulnerabilities and ensuring software integrity. Objectives. Investigate the information contained within SBOMs in Python and Go repositories on GitHub. Analyze the evolution of SBOM fields over time to understand how software dependencies change. Examine the impact of the US Executive Order of May 2021 on the quality of SBOMs across software projects. Conduct dynamic vulnerability scans in repositories with SBOMs, focusing on identifying types and trends of vulnerabilities. Methods. The study employs archival research and quasi-experimentation, leveraging data from GitHub repositories. This approach facilitates a comprehensive analysis of SBOM contents, their evolution, and the impact of policy changes and security measures on software vulnerability trends. Results. The study reveals that SBOMs are becoming more complex as projects grow, with Python projects generally having more components than Go projects. Both ecosystems saw reductions in vulnerabilities in later versions. The US Executive Order of 2021 positively impacted SBOM quality, with measures like structural elements and NTIA guidelines showing significant improvements post-intervention. Integrating security scans with SBOMs helped identify a wide range of vulnerabilities. Projects varied in critical vulnerabilities, highlighting the need for tailored security strategies. CVSS scores and CWE IDs provided insights into vulnerability severity and types. Conclusions. The thesis highlights the crucial role of SBOMs in improving software security practices in open-source projects. It shows that policy interventions like the US Executive Order and security scans can significantly enhance SBOM quality, leading to better vulnerability management and detection strategies. The findings contribute to the development of robust dependency management and vulnerability detection methodologies in open-source software projects.
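
    To make the role of an SBOM concrete, the sketch below reads a CycloneDX-style SBOM (a common JSON format; the field names follow that specification, while the file name and the lookup step are illustrative assumptions) and lists the components that a vulnerability scan would match against advisory data.

    import json

    # Load a CycloneDX-style SBOM; "sbom.cdx.json" is a placeholder name.
    with open("sbom.cdx.json") as fh:
        sbom = json.load(fh)

    # Each component carries the coordinates needed for vulnerability
    # matching; "purl" is the package URL, e.g. pkg:pypi/flask@2.0.1.
    for component in sbom.get("components", []):
        name = component.get("name")
        version = component.get("version")
        purl = component.get("purl", "")
        # A scanner would look these coordinates up in a vulnerability
        # database and attach CVSS scores / CWE IDs to each match.
        print(f"{name} {version} {purl}")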

    Download full text (pdf)
    fulltext