201 - 250 of 543
  • 201.
    Ilyas, Bilal
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Elkhalifa, Islam
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Static Code Analysis: A Systematic Literature Review and an Industrial Survey (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context: Static code analysis is a software verification technique that examines code without executing it in order to capture defects early, avoiding costly fixes later. The lack of realistic empirical evaluations in software engineering has been identified as a major issue limiting the ability of research to impact industry and, in turn, preventing feedback from industry that can improve, guide and orient research. Studies emphasize rigor and relevance as important criteria for assessing the quality and realism of research. Rigor defines how adequately a study has been carried out and reported, while relevance defines the potential impact of the study on industry. Despite the importance of static code analysis techniques and their existence for more than three decades, empirical evaluations in this field are few in number and do not take rigor and relevance into consideration.

    Objectives: The aim of this study is to contribute toward bridging the gap between static code analysis research and industry by improving the ability of research to impact industry and vice versa. The study has two main objectives. The first is to develop guidelines for researchers by exploring the existing research in static code analysis to identify its current status, shortcomings, rigor and industrial relevance, and the reported benefits/limitations of different static code analysis techniques, and finally to give recommendations that help make future research more industry oriented. The second is to develop guidelines for practitioners by investigating the adoption of different static code analysis techniques in industry and identifying the benefits/limitations of these techniques as perceived by industrial professionals, then cross-analyzing the findings of the SLR and the survey to draw final conclusions and give recommendations that help professionals decide which techniques to adopt.

    Methods: A sequential exploratory strategy, characterized by the collection and analysis of qualitative data (systematic literature review) followed by the collection and analysis of quantitative data (survey), has been used to conduct this research. In order to achieve the first objective, a thorough systematic literature review was conducted following Kitchenham's guidelines. To achieve the second objective, a questionnaire-based online survey was conducted, targeting professionals from the software industry in order to collect their responses regarding the usage of different static code analysis techniques, as well as their benefits and limitations. The quantitative data obtained was subjected to statistical analysis for further interpretation and to draw results.

    Results: In static code analysis research, inspections and static analysis tools have received significantly more attention than the other techniques. The benefits and limitations of static code analysis techniques were extracted, and seven recurrent variables were used to report them. The existing research in the static code analysis field significantly lacks rigor and relevance, and the reasons behind this have been identified. Some recommendations are developed outlining how to improve static code analysis research and make it more industry oriented. From the industrial point of view, static analysis tools are widely used, followed by informal reviews, while inspections and walkthroughs are rarely used. The benefits and limitations of different static code analysis techniques, as perceived by industrial professionals, have been identified along with the influential factors.

    Conclusions: The SLR concluded that techniques with a formal, well-defined process and process elements have received more attention in research; however, this does not necessarily mean that such a technique is better than the others. Experiments have been widely used as a research method in static code analysis research, but the outcome variables in the majority of the experiments are inconsistent. The use of experiments in an academic context contributed nothing to improving relevance, while the inadequate reporting of validity threats and their mitigation strategies contributed significantly to the poor rigor of the research. The benefits and limitations of different static code analysis techniques identified by the SLR could not complement the survey findings, because the rigor and relevance of most of the studies reporting them were weak. The survey concluded that the adoption of static code analysis techniques in industry is influenced more by the software life-cycle models in use in organizations, while software product type and company size do not have much influence. The amount of attention a static code analysis technique has received in research does not necessarily influence its adoption in industry, which indicates a wide gap between research and industry. However, company size, product type, and software life-cycle model do influence professionals' perceptions of the benefits and limitations of different static code analysis techniques.

  • 202.
    Irshad, Mohsin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Assessing Reusability in Automated Acceptance Tests (2018). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    Context: Automated acceptance tests have become a core practice of agile software development (e.g., Extreme Programming). These tests are closely tied to requirements specifications and provide a mechanism for continuous validation of software requirements. Software reuse has evolved with the introduction of each new reusable artefact (e.g., reuse of code, frameworks, tools, etc.). In this study, we have investigated the reusability of automated acceptance tests, keeping in view their close association with textual requirements.

    Objective: As automated acceptance tests are closely related to software requirements, we have used existing research in software engineering to identify reusability-related characteristics of software requirements and applied these characteristics to automated acceptance tests. This study attempts to address the following aspects: (i) which reuse characteristics should be considered when measuring the reusability of automated acceptance tests? (ii) how can reusability be measured in automated acceptance tests? and (iii) how can the cost avoided through reuse of automated acceptance tests be calculated?

    Method: We have used a combination of research methods to answer the different aspects of our study. We started by identifying reusability-related characteristics of software requirements with the help of a systematic literature review. Later, we identified the reusability-related characteristics of defect reports, a process documented in an experience report. After identifying the characteristics from these two studies, we applied them in two case studies conducted on behaviour-driven development (BDD) test cases (i.e., acceptance tests of a textual nature). We proposed two approaches that can identify the reuse potential of automated acceptance tests and evaluated these approaches in industry. Later, to calculate the cost avoided through reuse, we proposed and evaluated a method that is applicable to any reusable artifact.

    Results: The results from the systematic literature review show that text-based requirements reuse approaches are most commonly used in industry. Structuring text-based requirements and identifying reusable requirements by matching are the two methods commonly used to enable requirements reuse. The results from the second study, an industrial experience report, indicate that defect reports can be formulated using a template and that defect triage meetings can be used to identify important test cases related to defect reports. The results from these two studies, text-based requirements reuse approaches and template-based defect reports, informed the approaches for measuring the reuse potential of BDD test cases. The two proposed approaches for measuring reuse potential, Normalised Compression Distance (NCD) and Similarity Ratio, were evaluated in industry. The evaluation indicated that the Similarity Ratio approach performed better than the NCD approach; however, the results from both approaches were comparable with the results gathered with the help of expert analysis. The cost-related aspects of reusable acceptance tests were addressed and evaluated using a method that calculates the cost avoidance through reuse. The industrial evaluation of the method and guidelines shows that it is an artifact-independent method.
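
    One of the two approaches named above, Normalised Compression Distance, has a compact standard definition: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(s) is the compressed length of s. The sketch below is a minimal illustration, assuming zlib as the compressor and using invented BDD-style test cases; the thesis's actual tooling and data are not reproduced here.

```python
import zlib

def ncd(x: str, y: str) -> float:
    """Normalised Compression Distance: lower means more similar."""
    cx = len(zlib.compress(x.encode()))
    cy = len(zlib.compress(y.encode()))
    cxy = len(zlib.compress((x + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Invented BDD-style acceptance tests; a and b share most of their wording.
a = "Given a registered user, When the user logs in, Then the dashboard is shown"
b = "Given a registered user, When the user logs out, Then the login page is shown"
c = "Given an empty cart, When checkout starts, Then an error is displayed"

# A low NCD between two test cases suggests high reuse potential.
print(f"ncd(a,b)={ncd(a, b):.2f}  ncd(a,c)={ncd(a, c):.2f}")
```

    On texts this short the compressor's header overhead dominates the absolute values, so in practice NCD is used to rank pairs rather than as an absolute similarity score.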

    Conclusions: The evidence from this study shows that automated acceptance tests are reusable, similar to text-based software requirements, and that their reuse potential can be calculated as well. The industrial evaluation of the three studies (i.e., approaches to measure reuse potential, calculation of cost avoidance, and defect reports in triage meetings) shows that the overall results are applicable to industry. However, further work is required to evaluate the reuse potential of automated acceptance tests in different contexts.

  • 203.
    Irshad, Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Poulding, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A systematic literature review of software requirements reuse approaches (2018). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 93, no Jan, p. 223-245. Article in journal (Refereed).
    Abstract [en]

    Context: Early software reuse is considered the most beneficial form of software reuse. Hence, previous research has focused on supporting the reuse of software requirements. Objective: This study aims to identify and investigate the current state of the art with respect to (a) what requirement reuse approaches have been proposed, (b) the methods used to evaluate the approaches, (c) the characteristics of the approaches, and (d) the quality of empirical studies on requirements reuse with respect to rigor and relevance. Method: We conducted a systematic review; a combination of snowball sampling and database search was used to identify the studies. The rigor and relevance scoring rubric was used to assess the quality of the empirical studies. Multiple researchers were involved in each step to increase the reliability of the study. Results: Sixty-nine studies were identified that describe requirements reuse approaches. The majority of the approaches used structuring and matching of requirements as a method to support requirements reuse, and text-based artefacts were commonly used as an input to these approaches. Further evaluation of the studies revealed that the majority of the approaches have not been validated in industry. The subset of empirical studies (22 in total) was analyzed for rigor and relevance, and two studies achieved the maximum score for rigor and relevance based on the rubric. It was found that mostly text-based requirements reuse approaches were validated in industry. Conclusion: From the review, it was found that a number of approaches already exist in the literature, but many have not been validated in industry. The evaluation of the rigor and relevance of the empirical studies shows that they do not contain details of context, validity threats, and industrial settings, thus highlighting the need for industrial evaluation of the approaches. © 2017 Elsevier B.V.

  • 204.
    Irshad, Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Torkar, Richard
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Afzal, Wasif
    Capturing cost avoidance through reuse: Systematic literature review and industrial evaluation (2016). In: ACM International Conference Proceeding Series, ACM Press, 2016, Vol. 01-03-June-2016. Conference paper (Refereed).
    Abstract [en]

    Background: Cost avoidance through reuse shows the benefits gained by software organisations when reusing an artefact. Cost avoidance captures benefits that are not captured by cost savings, e.g., spending that would have increased in the absence of the cost avoidance activity. This type of benefit can be combined with quality aspects of the product, e.g., costs avoided because of defect prevention. Cost avoidance is a key driver for software reuse. Objectives: The main objectives of this study are: (1) to assess the status of capturing cost avoidance through reuse in academia; (2) based on the first objective, to propose improvements in the capturing of reuse cost avoidance, integrate these into an instrument, and evaluate the instrument in the software industry. Method: The study starts with a systematic literature review (SLR) on capturing cost avoidance through reuse. Later, a solution is proposed and evaluated in industry to address the shortcomings identified during the systematic literature review. Results: The systematic literature review describes three previous studies on reuse cost avoidance and shows that no solution for capturing reuse cost avoidance had been validated in industry. Afterwards, an instrument and a data collection form are proposed that can be used to capture the cost avoided by reusing any type of reuse artefact. The instrument and data collection form (describing guidelines) were demonstrated to a focus group as part of a static evaluation. Based on the feedback, the instrument was updated and evaluated in industry at 6 development sites, in 3 different countries, covering 24 projects in total. Conclusion: The proposed solution performed well in the industrial evaluation. With this solution, practitioners were able to calculate reuse cost avoidance and use the results as decision support for identifying potential artefacts to reuse.
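
    The instrument itself is not reproduced in the abstract. As a purely illustrative sketch, cost avoidance through reuse is commonly modelled as the estimated cost of developing the artefact from scratch minus the costs incurred by reusing it (finding, adapting, and integrating the existing artefact). The function and figures below are assumptions for illustration, not the paper's actual instrument.

```python
def cost_avoided(dev_from_scratch: float, search_cost: float,
                 adaptation_cost: float, integration_cost: float) -> float:
    """Cost avoided by reuse (all values in person-hours).

    Hypothetical model: development cost avoided minus the effort
    actually spent to locate, adapt, and integrate the reused artefact.
    """
    return dev_from_scratch - (search_cost + adaptation_cost + integration_cost)

# Example: reusing a test harness estimated at 40 person-hours to rebuild,
# at a reuse effort of 2 + 6 + 4 person-hours.
print(cost_avoided(40.0, 2.0, 6.0, 4.0))  # 28.0 person-hours avoided
```

    A negative result would indicate that reuse was more expensive than redevelopment, which is one reason such instruments also record per-project context.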

  • 205.
    Jabangwe, Ronald
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Quality Evaluation for Evolving Systems in Distributed Development Environments (2015). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Context: There is an overwhelming prevalence of companies developing software in global software development (GSD) contexts. The existing body of knowledge, however, falls short of providing comprehensive empirical evidence on the implications of GSD contexts on software quality for evolving software systems. Therefore, there is limited evidence to support practitioners who need to make informed decisions about ongoing or future GSD projects. Objective: This thesis work seeks to explore changes in quality, as well as to gather confounding factors that influence quality, for software systems that evolve in GSD contexts. Method: The research work in this thesis includes empirical work that was performed through exploratory case studies. This involved analysis of quantitative data consisting of defects as an indicator for quality and measures that capture software evolution, and qualitative data from company documentation, interviews, focus group meetings, and questionnaires. An extensive literature review was also performed to gather information that was used to support the empirical investigations. Results: Offshoring software development work to a location that has employees with limited or no prior experience with the software product, as observed in software transfers, can have a negative impact on quality. Engaging in long periods of distributed development with an offshore site and eventually handing over all responsibilities to the offshore site can be an alternative to software transfers. This approach can alleviate a negative effect on quality. Finally, the studies highlight the importance of taking into account the GSD context when investigating quality for software that is developed in globally distributed environments. This helps with making valid inferences about the development settings in GSD projects in relation to quality.
Conclusion: The empirical work presented in this thesis can be useful input for practitioners that are planning to develop software in globally distributed environments. For example, the insights on confounding factors or mitigation practices that are linked to quality in the empirical studies can be used as input to support decision-making processes when planning similar GSD projects. Consequently, lessons learned from the empirical investigations were used to formulate a method, GSD-QuID, for investigating quality using defects for evolving systems. The method is expected to help researchers avoid making incorrect inferences about the implications of GSD contexts on quality for evolving software systems, when using defects as a quality indicator. This in turn will benefit practitioners that need the information to make informed decisions for software that is developed in similar circumstances.

  • 206.
    Jabangwe, Ronald
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Handover of managerial responsibilities in global software development: a case study of source code evolution and quality (2015). In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 23, no 4, p. 539-566. Article in journal (Refereed).
    Abstract [en]

    Studies report on the negative effect on quality in global software development (GSD) due to communication- and coordination-related challenges. However, empirical studies reporting on the magnitude of the effect are scarce. This paper presents findings from an embedded explanatory case study on the change in quality over time, across multiple releases, for products that were developed in a GSD setting. The GSD setting involved periods of distributed development between geographically dispersed sites as well as a handover of project management responsibilities between the involved sites. Investigations were performed on two medium-sized products from a company that is part of a large multinational corporation. Quality is investigated quantitatively using defect data and measures that quantify two source code properties, size and complexity. Observations were triangulated with subjective views from company representatives. There were no observable indications that the distribution of work or the handover of project management responsibilities had an impact on quality for either product. Among the product-, process- and people-related success factors, we identified well-designed product architectures, early handover planning, support from the sending site to the receiving site after the handover, and skilled employees at the involved sites. Overall, these results can be useful input for decision-makers who are considering distributing development work between globally dispersed sites or handing over project management responsibilities from one site to another. Moreover, our study shows that analyzing the evolution of size and complexity properties of a product’s source code can provide valuable information to support decision-making during similar projects. Finally, the strategy used by the company to relocate responsibilities can also be considered as an alternative to software transfers, which have been linked with a decline in efficiency, productivity and quality.

  • 207.
    Jabangwe, Ronald
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A method for investigating the quality of evolving object-oriented software using defects in global software development projects (2016). In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 28, no 8, p. 622-641. Article in journal (Refereed).
    Abstract [en]

    Context: Global software development (GSD) projects can have distributed teams that work independently in different locations or team members that are dispersed. The various development settings in GSD can influence quality during product evolution. When evaluating quality using defects as a proxy, the development settings have to be taken into consideration. Objective: The aim is to provide a systematic method for supporting investigations of the implication of GSD contexts on defect data as a proxy for quality. Method: A method engineering approach was used to incrementally develop the proposed method. This was done through applying the method in multiple industrial contexts and then using lessons learned to refine and improve the method after application. Results: A measurement instrument and visualization was proposed incorporating an understanding of the release history and understanding of GSD contexts. Conclusion: The method can help with making accurate inferences about development settings because it includes details on collecting and aggregating data at a level that matches the development setting in a GSD context and involves practitioners at various phases of the investigation. Finally, the information that is produced from following the method can help practitioners make informed decisions when planning to develop software in comparable circumstances. Copyright © 2016 John Wiley & Sons, Ltd.

  • 208.
    Jabangwe, Ronald
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Lero / Regulated Software Research Centre.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Hessbo, Emil
    Distributed Software Development in an Offshore Outsourcing Project: A Case Study of Source Code Evolution and Quality (2016). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 72, p. 125-136. Article in journal (Refereed).
    Abstract [en]

    Context: Offshore outsourcing collaborations can result in distributed development, which has been linked to quality-related concerns. However, there are few studies that focus on the implication of distributed development on quality, and they report inconsistent findings using different proxies for quality. Thus, there is a need for more studies, as well as to identify useful proxies for certain distributed contexts. The presented empirical study was performed in a context that involved offshore outsourcing vendors in a multisite distributed development setting.

    Objective: The aim of the study is to investigate how quality changes during evolution in a distributed development environment that incurs organizational changes in terms of number of companies involved.

    Method: A case study approach is followed in the investigation. Only post-release defects are used as a proxy for external quality due to unreliable defect data found pre-release such as those reported during integration. Focus group meetings were also held with practitioners.

    Results: The results suggest that practices that can be grouped into product, people, and process categories can help ensure post-release quality. However, post-release defects are insufficient for showing a conclusive impact of the development setting on quality. This is because the development teams worked independently as isolated distributed teams, and integration defects would better reflect the impact of the development setting on quality.

    Conclusions: The mitigation practices identified can be useful information to practitioners that are planning to engage in similar globally distributed development projects. Finally, it is important to take into consideration the arrangement of distributed development teams in global projects, and to use the context to identify appropriate proxies for quality in order to draw correct conclusions about the implications of the context. This would help with providing practitioners with well-founded findings about the impact on quality of globally distributed development settings.

  • 209.
    Jabbari, Ramtin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Tanveer, Binish
    Fraunhofer Institute for Experimental Software Engineering IESE, DEU.
    Towards a benefits dependency network for DevOps based on a systematic literature review (2018). In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 30, no 11, article id e1957. Article in journal (Refereed).
    Abstract [en]

    DevOps as a new way of thinking for software development and operations has received much attention in industry, while it has not yet been thoroughly investigated in academia. The objective of this study is to characterize DevOps by exploring its central components in terms of principles, practices and their relations to the principles, challenges of DevOps adoption, and benefits reported in the peer-reviewed literature. As a key objective, we also aim to identify the relations between DevOps practices and benefits in a systematic manner. A systematic literature review was conducted. Also, we used the concept of a benefits dependency network to synthesize the findings, in particular to specify dependencies between DevOps practices and link the practices to benefits. We found that in many cases DevOps characteristics, i.e., principles, practices, benefits, and challenges, were not sufficiently defined in detail in the peer-reviewed literature. In addition, only a few empirical studies are available, which can be attributed to the nascency of DevOps research. Also, an initial version of the DevOps benefits dependency network has been derived. The definition of DevOps principles and practices should be emphasized given the novelty of the concept. Further empirical studies are needed to improve the benefits dependency network presented in this study. © 2018 John Wiley & Sons, Ltd.
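
    A benefits dependency network is, at its core, a directed graph linking enabling practices to the benefits they support, directly or transitively. The toy sketch below illustrates the data structure with invented nodes; it is not the network derived in the paper.

```python
from collections import deque

# Illustrative practice -> benefit edges (hypothetical, for structure only).
edges = {
    "continuous integration": ["faster release cycle"],
    "infrastructure as code": ["reproducible environments"],
    "reproducible environments": ["faster release cycle"],
    "faster release cycle": ["shorter time to market"],
}

def benefits_reachable(practice: str) -> set:
    """All benefits transitively enabled by a practice (breadth-first walk)."""
    seen, queue = set(), deque([practice])
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(benefits_reachable("infrastructure as code")))
```

    Traversals like this are what make the network form useful: they expose which end benefits depend on a given practice, and which practices have no path to any benefit.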

  • 210.
    Jabbari, Ramtin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Tanveer, Binish
    What is DevOps?: A Systematic Mapping Study on Definitions and Practices (2016). Conference paper (Refereed).
  • 211.
    Jain, Aman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Aduri, Raghu ram
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Quality metrics in continuous delivery: A mixed approach (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. Continuous delivery deals with the concept of deploying user stories as soon as they are finished rather than waiting for the sprint to end. This concept increases the chances of early improvement to the software and provides the customer with a clear view of the final product expected from the software organization, but little research has been done on the quality of the product developed and the ways to measure it. This research is conducted in the context of presenting a checklist of quality metrics that can be used by practitioners to ensure good quality product delivery.

    Objectives. In this study, the authors strive toward the accomplishment of the following objectives: the first objective is to identify the quality metrics being used in agile approaches and continuous delivery by organizations. The second objective is to evaluate the usefulness of the identified metrics, the limitations of the metrics, and to identify new metrics. The final objective is to present and evaluate a solution, i.e., a checklist of metrics that can be used by practitioners to ensure the quality of a product developed using continuous delivery.

    Methods. To accomplish the objectives, the authors used a mixture of approaches. First, a literature review was performed to identify the quality metrics being used in continuous delivery. Based on the data obtained from the literature review, the authors performed an online survey using a questionnaire posted on an online questionnaire-hosting website. The online questionnaire was intended to find the usefulness of the identified metrics and the limitations of using metrics, and to identify new metrics based on the responses obtained. The authors then conducted interviews comprising a few close-ended questions and a few open-ended questions, which helped the authors validate the usage of the metrics checklist.

    Results. Based on the literature review performed at the start of the study, the authors obtained data regarding the background of continuous delivery and research performed on continuous delivery by various practitioners, as well as a list of quality metrics used in continuous delivery. Later, the authors conducted an online survey using a questionnaire, which resulted in a ranking of the usefulness of the quality metrics and the identification of new metrics used in continuous delivery. Based on the data obtained from the online questionnaire, a checklist of quality metrics involved in continuous delivery was generated.

    Conclusions. Based on the interviews conducted to validate the checklist of metrics (generated as a result of the online questionnaire), the authors conclude that the checklist is fit for use in industry, provided some necessary changes are made to it based on the project requirements. The checklist will act as a reminder to practitioners of the quality aspects that need to be measured during product development, and perhaps as a starting point when planning the metrics to be measured during the project.

  • 212.
    Jalali, Samireh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Angelis, Lefteris
    Investigating the Applicability of Agility Assessment Surveys: A Case Study2014In: Journal of Systems and Software, ISSN 0164-1212, Vol. 98, p. 172-190Article in journal (Refereed)
    Abstract [en]

    Context: Agile software development has become popular in the past decade even though it is not a particularly well-defined concept. The general principles in the Agile Manifesto can be instantiated in many different ways, and hence the perception of Agility may differ quite a lot. This has resulted in several conceptual frameworks being presented in the research literature to evaluate the level of Agility. However, the evidence of actual use of these frameworks in practice is limited. Objective: The objective of this paper is to identify online surveys that can be used to evaluate the level of Agility in practice, and to evaluate the surveys in an industrial setting. Method: Surveys for evaluating Agility were identified by systematically searching the web. Based on an exploration of the surveys found, two were identified as most promising for our objective. The two surveys selected were evaluated in a case study with three Agile teams in a software consultancy company. The case study included a self-assessment of the Agility level using the two surveys, interviews with the Scrum master and a team representative, interviews with the customers of the teams, and a focus group meeting for each team. Results: The perception of team Agility was judged by each of the teams and their respective customer, and the outcome was compared with the results from the two surveys. Agility profiles were created based on the surveys. Conclusions: It is concluded that different surveys may very well judge Agility differently, which supports the viewpoint that it is not a well-defined concept. The researchers and practitioners agreed that one of the surveys, at least in this specific case, provided a better and more holistic assessment of the Agility of the teams in the case study.

  • 213.
    Ji, Yuan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Zheng, Hengyuan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    The Challenge for Practitioners to Adopt Requirement Prioritization Techniques in Practice2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background: Requirements prioritization and its techniques remain an important research topic. However, industry adoption of these techniques is still under-researched and faces many challenges. This topic also involves technology transfer.

    Objectives: The objective of this study is to identify the challenges practitioners face when adopting requirement prioritization techniques in practice.

    Methods: We use a literature review and two interview-based surveys. The literature review studies the requirement prioritization techniques in the literature. The first interview studies the techniques practitioners currently use, and the second studies practitioners' views on the techniques recommended in the literature as well as the challenges of adopting them. The interview data are analyzed mainly through thematic analysis.

    Results: The literature review presents the procedures of 49 requirement prioritization techniques found in the literature. The first round of interviews presents the technique procedures and other conditions of 11 practitioners. From these two results, we determine the techniques to recommend to these 11 interviewees and then conduct the second round of interviews to discover more of the interviewees' views and the challenges of technique adoption, which are also compared with related work.

    Conclusions: Overall, there are many challenges for practitioners in adopting requirement prioritization techniques. As an independent subject, practitioners' adoption of prioritization techniques still needs to be studied further: 1. Studying this subject needs to involve the scope of technology transfer; 2. Some challenges in requirements prioritization can also hamper practitioners' technique adoption and should be alleviated separately.

  • 214.
    Jiang Axelsson, Bohui
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A LTE UPCUL architecture design combining Multi-Blackboards and Pipes & Filters architectures2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. The single blackboard architecture is widely used in the LTE application area. Despite its several benefits, this architecture limits the synchronization possibilities of the developed systems and increases signal operational latency. As a result, DSP (Digital Signal Processing) utilization is suboptimal.

    Objectives. In this thesis, we design a new architecture, which combines concepts of the Multi-Blackboards and Pipes & Filters architectures, as a replacement for the current single blackboard architecture at Ericsson. The implementation of the new architecture makes the environment asynchronous. We evaluate the new architecture in a simulated Ericsson environment with 222,225 connection items from 9,000 base stations all over the world. Each connection item has a complete UE session and one of several possible connection statuses, e.g. active, inactive, connected, DRX sleeping, postponed. These connection items can be from any country in the world.

    Methods. We design a new architecture for the UPCUL component of the LTE network based on an analysis of real network data from Ericsson. We perform a case study to develop and evaluate the new architecture at Ericsson.

    Results. We evaluate the new architecture by performing a case study at Ericsson. The results of the case study show that the new architecture not only increases DSP utilization by 35%, but also decreases signal operational latency by 53%, FO operation time by 20% and FO operation cycles by 20%. The new architecture also improves correctness.

    Conclusions. We conclude that the new architecture increases DSP utilization and decreases signal operational latency, thereby improving the performance of the UPCUL component of LTE. Due to time constraints, we only considered four LTE FOs (Function Objects) and related signals. Future work should focus mainly on the other FOs and signals. We also analyze the unconsidered FOs and provide an integration solution table containing solutions for integrating them into the new architecture. The second avenue for future work is to resize the two blackboard storages. We found that the maximum memory size of the UE sessions needed per sub-frame is only 1.305% of the memory size of all UE sessions (31650 bytes). The memory size of the blackboard storage should therefore be adjusted on the basis of the needed UE sessions instead of all UE sessions.
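    The Pipes & Filters half of such a combined architecture can be illustrated with a minimal sketch. Everything below is a hypothetical illustration: the stage names (decode, validate, dispatch) and the string-based "signals" are invented for the example and are not taken from the Ericsson UPCUL implementation.

```java
import java.util.List;
import java.util.function.Function;

// Minimal Pipes & Filters sketch: each filter is an independent processing
// stage, and a pipe is simply the composition of stages in order.
public class PipesAndFiltersSketch {

    @SafeVarargs
    static Function<String, String> pipeline(Function<String, String>... stages) {
        Function<String, String> pipe = Function.identity();
        for (Function<String, String> stage : stages) {
            pipe = pipe.andThen(stage); // connect the next filter downstream
        }
        return pipe;
    }

    public static void main(String[] args) {
        // Three illustrative stages for an uplink signal.
        Function<String, String> decode   = s -> s.trim().toLowerCase();
        Function<String, String> validate = s -> s.isEmpty() ? "invalid" : s;
        Function<String, String> dispatch = s -> "dispatched:" + s;

        Function<String, String> uplink = pipeline(decode, validate, dispatch);

        // Each connection item's signal flows through all stages in order.
        for (String signal : List.of("  ACTIVE ", "DRX SLEEPING")) {
            System.out.println(uplink.apply(signal));
        }
    }
}
```

    Because each stage only sees the output of the previous one, stages can be rearranged or run asynchronously, which is the property the thesis exploits when moving away from a single shared blackboard.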

  • 215.
    Jiang, Haozhen
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chen, Yi
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparison of Different Techniques of Web GUI-based Testing with the Representative Tools Selenium and EyeSel2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Software testing is becoming more and more important in the software development life-cycle, especially web testing. Selenium is one of the most widely used property-based Graphical User Interface (GUI) web testing tools. Nevertheless, it also has some limitations. For instance, Selenium cannot test web components inside some specific plugins or HTML5 video frames, yet it is important for testers to verify the functionality of plugins or videos on websites. Recently, the theory of image recognition-based GUI testing was introduced, which can locate and interact with the components to be tested on websites through image recognition. Only a few papers have compared property-based GUI web testing and image recognition-based GUI testing. Hence, we formulated our research objectives based on this main gap.

    Objectives. We want to compare these two different techniques using EyeSel, a tool representing image recognition-based GUI testing, and Selenium, a tool representing property-based GUI testing. We evaluate and compare the strengths and drawbacks of these two tools by formulating specific JUnit testing scripts. Besides, we analyze the comparative results and evaluate whether EyeSel can solve some of the limitations associated with Selenium. We can thereby conclude the benefits and drawbacks of property-based and image recognition-based GUI web testing.

    Methods. We conduct an experiment in which test cases based on websites' components are developed with both Selenium and EyeSel. The experiment is conducted in an educational environment and we select 50 diverse websites as the subjects of the experiment. The test scripts are written in Java and run in Eclipse. The experiment data is collected for comparing and analyzing these two tools.

    Results. We use quantitative and qualitative analysis to analyze our results. First, we use quantitative analysis to evaluate the effectiveness and efficiency of the two GUI web testing tools. Effectiveness is measured by the number of components that can be tested by the two tools, while efficiency is measured by test case development time and execution time. The results are as follows: (1) EyeSel can test more components in web testing than Selenium; (2) testers need more time to develop test cases with Selenium than with EyeSel; (3) Selenium executes the test cases faster than EyeSel; (4) result (1) indicates that the effectiveness of EyeSel is better than Selenium's, while results (2) and (3) indicate that the efficiency of EyeSel is better than Selenium's. Second, we use qualitative analysis to evaluate four quality characteristics (learnability, robustness, portability, functionality) of the two tools. The results show that the portability and functionality of Selenium are better than EyeSel's, while the learnability of EyeSel is better than Selenium's. Both have good robustness in web testing.

    Conclusions. After analyzing the results of the comparison between Selenium and EyeSel, we conclude that (1) image recognition-based GUI testing is more effective than property-based GUI web testing; (2) image recognition-based GUI testing is more efficient than property-based GUI web testing; (3) the portability and functionality of property-based GUI web testing are better than those of image recognition-based GUI testing; (4) the learnability of image recognition-based GUI testing is better than that of property-based GUI web testing; and (5) both are good at different aspects of robustness.

  • 216.
    Johnell, Carl
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Parallel programming in Go and Scala: A performance comparison2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This thesis provides a performance comparison of parallel programming in Go and Scala. Go supports concurrency through goroutines and channels. Scala has parallel collections, futures and actors that can be used for concurrent and parallel programming. The experiment used two different types of algorithms to compare the performance of Go and Scala. Parallel versions of matrix multiplication and matrix chain multiplication were implemented with goroutines and channels in Go. Matrix multiplication was implemented with parallel collections and futures in Scala, and chain multiplication was implemented with actors.

    The results from the study show that Scala has better performance than Go: parallel matrix multiplication was about 3x faster in Scala. However, goroutines and channels are more efficient than actors. Go performed better than Scala when the number of goroutines and actors increased in the benchmark for parallel chain multiplication.

    Both Go and Scala have features that make parallel programming easier, but I found Go as a language easier to learn and understand than Scala. I recommend anyone interested in Go to try it out because of its ease of use.

  • 217.
    Josyula, Jitendra Rama Aswadh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Panamgipalli, Soma Sekhara Sarat Chandra
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Identifying the information needs and sources of software practitioners: A mixed method approach2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Every day, software practitioners need information to resolve a number of questions. These information needs should be identified and addressed in order to successfully develop and deliver a software system. One way to address these information needs is to make use of information sources such as blogs, websites, documentation, etc. Identifying the needs and sources of software practitioners can improve and benefit the practitioners as well as the software development process. However, the information needs and sources of software practitioners are only partially studied; the focus has mostly been on knowledge management in software engineering. So, the current context of this study is to identify the information needs and information sources of software practitioners and to investigate practitioners' perception of different information sources.

    Objectives. In this study we primarily investigated some of the information needs of software practitioners and the information sources that they use to fulfill those needs. Secondly, we investigated practitioners' perception of available information sources by identifying the aspects that they consider while using different information sources.

    Methods. To achieve the research objectives, this study conducted an empirical investigation by performing a survey with two data collection techniques. A simple literature review was also performed initially to identify some of the information needs and sources of software practitioners. We then began the survey by conducting semi-structured interviews based on the data obtained from the literature. Moreover, an online questionnaire was designed after conducting a preliminary analysis of the data obtained from both the interviews and the literature review. The coding process of grounded theory was used to analyze the data obtained from the interviews, and descriptive statistics were used to analyze the data obtained from the online questionnaire. The data obtained from the qualitative and quantitative methods were triangulated by comparing the identified information needs and sources with those presented in the literature.

    Results. From the preliminary literature review, we identified seven information needs and six information sources. Based on the results of the literature review, we then conducted interviews with software practitioners and identified nine information needs and thirteen information sources. From the interviews we also investigated the aspects that software practitioners look into while using different information sources, and thus identified four major aspects. We then validated the results from the literature review and interviews with the help of an online questionnaire. From the online questionnaire, we finally identified the frequency of occurrence of the identified information needs and the frequency of use of the different information sources.

    Conclusions. We identified that software practitioners currently face nine types of information needs, of which information on clarifying requirements and information on product design and architecture are the most frequently faced. To address these needs, most practitioners use information sources such as blogs and community forums, product documentation and discussion with colleagues, while research articles are moderately used and IT magazines and social networking sites are least used. We also identified that most practitioners consider the reliability/accuracy of an information source to be an extremely important factor. The identified information needs and sources, along with the practitioners' perception, are clearly elucidated in this document.

    A future direction of this work could be testing the applicability of the identified information needs by extending the sample population. There is also scope for research on how the identified information needs can be minimized to make information acquisition easier for practitioners.

  • 218.
    Josyula, Jitendra
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Panamgipalli, Sarat
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Practitioners' Information Needs and Sources: A Survey Study2018In: 2018 9TH INTERNATIONAL WORKSHOP ON EMPIRICAL SOFTWARE ENGINEERING IN PRACTICE (IWESEP), IEEE , 2018, p. 1-6Conference paper (Refereed)
    Abstract [en]

    Software engineering practitioners have information needs to support strategic, tactical and operational decision-making. However, there is scarce research on understanding which information needs exist and how they are currently fulfilled in practice. This study aims to identify the information needs, the frequency of their occurrence, the sources of information used to satisfy the needs, and the perception of practitioners regarding the usefulness of the sources currently used. For this purpose, a literature review was conducted to aggregate the current state of understanding in this area. We built on the results of the literature review and developed further insights through in-depth interviews with 17 practitioners. We further triangulated the findings from these two investigations by conducting a web-based survey (with 83 completed responses). Based on the results, we infer that information regarding product design, product architecture and requirements gathering are the most frequently faced needs. Software practitioners mostly use blogs, community forums, product documentation, and discussion with colleagues to address their information needs.

  • 219.
    Kaliniak, Paweł
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Wrocław University of Technology.
    Conversion of SBVR Behavioral Business Rules to UML Diagrams: Initial Study of Automation2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Automating the conversion of business rules into source code in a software development project can reduce development time and effort. In this thesis we discuss the automatic conversion of behavioral business rules, defined in the Semantics of Business Vocabulary and Rules (SBVR) standard, into fragments of Unified Modeling Language diagrams: activity, sequence and state machine. It is a conversion from the Computation Independent Model (CIM) level to the Platform Independent Model (PIM) level defined by Model Driven Architecture (MDA). The PIM in MDA can be further transformed into a Platform Specific Model, which is prepared for source code generation.

    Objectives. The aim of this thesis is to initially explore the field of automatic conversion of behavioral business rules - conversion from an SBVR representation to fragments of UML diagrams. This is done by fulfilling the following objectives:

    -To find out the properties of an SBVR behavioral rule which ensure that the rule can be automatically converted to parts of UML behavioral diagrams (activity, sequence, state machine).

    -To propose a mapping of SBVR constructs to constructs of UML behavioral diagrams.

    -To prepare guidelines which help to specify SBVR behavioral business rules in such a way that they can be automatically transformed into fragments of selected UML behavioral diagrams.

    Methods. Expert opinion and a case study were applied. Business analysts from industry and academia were asked to convert a set of SBVR behavioral business rules to UML behavioral diagrams: activity, sequence and state machine. Analysis of the set of business rules and their conversions to UML diagrams was the basis for fulfilling the objectives.

    Results. 2 syntax and 3 semantic properties were defined. Conversion rules defining the mapping of SBVR behavioral business rule constructs to UML constructs were defined: 5 rules for conversion to activity diagrams, 6 for conversion to sequence diagrams, and 5 for conversion to state machine diagrams. 6 guidelines were defined that are intended to help in specifying behavioral business rules so that they can be automatically transformed into UML diagrams according to the presented conversion rules.

    Conclusions. The research performed in this thesis is an initial study of the automatic conversion of behavioral business rules from SBVR notation to UML behavioral diagram notation. Validation of the defined properties, conversion rules and guidelines in industry can be done as future work. Re-executing the research on larger and more diverse sets of behavioral business rules taken from industry projects, with sufficiently broad access to business analysts from industry and academia, could help to improve the results.

  • 220. Kalinowski, Marcos
    et al.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Travassos, G.H.
    An industry ready defect causal analysis approach exploring Bayesian networks2014In: Lecture Notes in Business Information Processing, Vienna: Springer , 2014, Vol. 166, p. 12-33Conference paper (Refereed)
    Abstract [en]

    Defect causal analysis (DCA) has shown itself an efficient means to improve the quality of software processes and products. A DCA approach exploring Bayesian networks, called DPPI (Defect Prevention-Based Process Improvement), resulted from research following an experimental strategy. Its conceptual phase considered evidence-based guidelines acquired through systematic reviews and feedback from experts in the field. Afterwards, in order to move towards industry readiness the approach evolved based on results of an initial proof of concept and a set of primary studies. This paper describes the experimental strategy followed and provides an overview of the resulting DPPI approach. Moreover, it presents results from applying DPPI in industry in the context of a real software development lifecycle, which allowed further comprehension and insights into using the approach from an industrial perspective.

  • 221.
    Karlsson, Jan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Eriksson, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    How the choice of Operating System can affect databases on a Virtual Machine2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    As databases grow in size, optimizing them is becoming a necessity. Choosing the right operating system to support your database becomes paramount to ensuring that the database is fully utilized. Furthermore, with the virtualization of operating systems becoming more commonplace, we find ourselves with more choices than we ever faced before. This paper demonstrates why the choice of operating system plays an integral part in deciding the right database for your system in a virtual environment. The paper contains an experiment which measured the benchmark performance of a database management system on various virtual operating systems, showing the effect a virtual operating system has on the database management system that runs upon it. These findings will help to promote future research into this area as well as provide a foundation on which future research can be based.

  • 222.
    Kartheek arun sai ram, chilla
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kavya, Chelluboina
    Investigating Metrics that are Good Predictors of Human Oracle Costs: An Experiment2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Human oracle cost, the cost associated with estimating the correctness of the output for given test inputs, is incurred through manual evaluation by humans; this cost is significant and is a concern in the software test data generation field. This study has been designed in this context to assess metrics that might predict human oracle cost.

    Objectives. The major objective of this study is to address the human oracle cost. To this end, the study identifies metrics that are good predictors of human oracle cost and can further help to solve the oracle problem. In this process, the suitable metrics identified from the literature are applied to the test input, to see if they can help in predicting the correctness of the output for the given test input.

    Methods. Initially a literature review was conducted to find some of the metrics that are relevant to test data. Besides finding the aforementioned metrics, our literature review also tried to find possible code metrics that can be applied to test data. Before conducting the actual experiment, two pilot experiments were conducted. To accomplish our research objectives, an experiment was conducted at BTH university with master students as the sample population. Further, group interviews were conducted to check whether the participants perceive any new metrics that might impact the correctness of the output. The data obtained from the experiment and the interviews were analyzed using a linear regression model in the SPSS suite. Further, to analyze the accuracy vs. metric data, a linear discriminant model in the SPSS program suite was used.

    Results. Our literature review resulted in 4 metrics that are suitable for our study. As our test input is HTML, we took HTML depth, size, compression size and number of tags as our metrics. Also, from the group interviews another 4 metrics were drawn, namely number of lines of code and the numbers of <div>, anchor <a> and paragraph <p> tags as individual metrics. The linear regression model, which analyzes the time vs. metric data, shows significant results, but with multicollinearity affecting the result there was no variance among the considered metrics. So, the results of our study are proposed after adjusting for the multicollinearity. Besides the above analysis, a linear discriminant model analyzing the accuracy vs. metric data was used to predict the metrics that influence accuracy. The results of our study show that the metrics positively correlate with time and accuracy.

    Conclusions. From the time vs. metric data, when multicollinearity is adjusted for by applying the step-wise regression reduction technique, the program size, compression size and <div> tag count influence the time taken by the sample population. From the accuracy vs. metric data, the number of <div> tags and the number of lines of code influence the accuracy of the sample population.
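    Two of the metrics studied above can be computed mechanically from an HTML test input. The sketch below is purely illustrative (it is not the thesis' actual tooling, and the sample HTML is invented): it counts opening tags and <div> tags with simple regular expressions.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative computation of two HTML metrics: total tag count and <div> count.
public class HtmlMetricsSketch {

    // Counts opening tags such as <div id="a">; closing tags (</p>) are
    // excluded because they start with "</" rather than "<" plus a letter.
    static int countTags(String html) {
        Matcher m = Pattern.compile("<[a-zA-Z][^>]*>").matcher(html);
        int count = 0;
        while (m.find()) count++;
        return count;
    }

    static int countDivs(String html) {
        Matcher m = Pattern.compile("<div\\b[^>]*>").matcher(html);
        int count = 0;
        while (m.find()) count++;
        return count;
    }

    public static void main(String[] args) {
        String html = "<html><body><div id=\"a\"><p>hi</p></div><div>bye</div></body></html>";
        System.out.println("tags=" + countTags(html)); // 5 opening tags
        System.out.println("divs=" + countDivs(html)); // 2 <div> tags
    }
}
```

    The other metrics mentioned (size, compression size, depth) could be computed similarly, e.g. size from the string length and compression size from a Deflater-compressed copy.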

  • 223. Kashfi, P.
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nilsson, A.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A conceptual ux-aware model of requirements2016In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer, 2016, Vol. 9856 LNCS, p. 234-245Conference paper (Refereed)
    Abstract [en]

    User eXperience (UX) is becoming increasingly important for the success of software products. Yet, many companies still face various challenges in their work with UX. Part of these challenges relate to inadequate knowledge and awareness of UX, and current UX models are commonly neither practical nor well integrated into existing Software Engineering (SE) models and concepts. Therefore, we present a conceptual UX-aware model of requirements for software development practitioners. This layered model shows the interrelation between UX and functional and quality requirements. The model is developed based on current models of UX and software quality characteristics. Through the model we highlight the main differences between various requirement types, in particular essentially subjective and accidentally subjective quality requirements. We also present the result of an initial validation of the model through interviews with 12 practitioners and researchers. Our results show that the model can raise practitioners' knowledge and awareness of UX, in particular in relation to requirement and testing activities. It can also facilitate UX-related communication among stakeholders with different backgrounds. © IFIP International Federation for Information Processing 2016.

  • 224.
    Kashfi, Pariya
    et al.
    Chalmers; Gothenburg Univ, SWE.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nilsson, Agneta
    Chalmers; Gothenburg Univ, SWE.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evidence-based Timelines for User eXperience Software Process Improvement Retrospectives2016In: 2016 42ND EUROMICRO CONFERENCE ON SOFTWARE ENGINEERING AND ADVANCED APPLICATIONS (SEAA), IEEE Computer Society, 2016, p. 59-62Conference paper (Refereed)
    Abstract [en]

    We performed a retrospective meeting at a case company to reflect on its decade of Software Process Improvement (SPI) activities for enhancing UX integration. We supported the meeting with a pre-generated timeline of the main activities. This approach is a refinement of a similar approach used in Agile projects to improve the effectiveness and decrease the memory bias of retrospective meetings. The method is evaluated by gathering practitioners' views using a questionnaire. We conclude that UX research and practice can benefit from the SPI body of knowledge. We also argue that a cross-section evidence-based timeline retrospective meeting is useful for enhancing UX work in companies, especially for identifying and reflecting on `organizational issues'. This approach also provides a cross-section longitudinal overview of the SPI activities that cannot easily be gained in other common SPI learning approaches.

  • 225.
    Kasraee, Pezhman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Lin, Chong
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Readability of Method Chains: A Controlled Experiment with Eye Tracking Approach (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Source code with a lower level of readability imposes a higher cost on software maintainability. Research has also shown readability to be a vital factor in software maintainability. Therefore, readability has recently been investigated by software engineers. Readability involves human interaction, which makes it difficult to study. In this study, we explore the readability of method chains and non-method chains in Java source code by means of an eye-tracking device, a newly introduced approach.

    Objectives. The objectives of this study are: 1. to investigate whether the number of methods in a method chain affects the readability of Java source code, and 2. to investigate the readability of two programming styles: method chains and non-method chains.

    Methods. To achieve both objectives of this study, two controlled experiments were conducted in a laboratory using an eye-tracking device. In the first experiment, treatment groups were exposed separately to method chains with different numbers of methods. In the second experiment, the treatment groups were exposed separately to two different programming styles: method chains and non-method chains.

    Results. Participants of this study were students with an average age of 24.56 years. Fixation durations of participants' reading were measured in milliseconds (ms). In the first experiment, the average fixation duration per method was 600.93 ms for chains with a lower number of methods and 411.53 ms for chains with a higher number of methods. In the second experiment, the average fixation duration per method was 357.94 ms for the non-method-chain style and 411.53 ms for the method-chain style.

    Conclusions. In the first experiment, the analysis of fixation durations indicates that method chains with a higher number of methods are slightly more readable. A t-test (t-value = −0.5121, significance level = 0.05, two-tailed probability) confirms that the results of the first experiment do not show a significant difference at p < 0.05. The results of the second experiment show that the non-method-chain style is slightly more readable compared with the method-chain style. A t-test (t-value = 3.1675, significance level = 0.05, two-tailed probability) confirms that the results of the second experiment show a significant difference at p < 0.05.
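    The two-sample t-test reasoning above can be sketched in a few lines. The samples below are hypothetical fixation durations, not the study's raw data, and Welch's unequal-variance variant is assumed rather than whichever exact test the authors ran:

    ```python
    from math import sqrt
    from statistics import mean, stdev

    def welch_t(a, b):
        """Welch's two-sample t statistic (unequal variances)."""
        se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
        return (mean(a) - mean(b)) / se

    # Hypothetical per-participant fixation durations in ms (illustrative only).
    low_chain = [650.0, 580.0, 610.0, 560.0, 605.0]   # fewer methods per chain
    high_chain = [420.0, 400.0, 430.0, 390.0, 418.0]  # more methods per chain
    t = welch_t(low_chain, high_chain)  # positive: low_chain reads slower here
    ```

    The statistic is then compared against the critical value for the chosen significance level (0.05 in the thesis) to decide whether the difference in fixation durations is significant.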

  • 226.
    Kelkkanen, Viktor
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Implementation and Evaluation of Positional Voice Chat in a Massively Multiplayer Online Role Playing Game (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Computer games, especially Massively Multiplayer Online Role Playing Games, have elements where communication between players is essential. This communication is generally conducted through in-game text chats, in-game voice chats or external voice programs. In-game voice chats can be constructed to work similarly to talking in real life: when someone talks, anyone close enough to that person can hear what is said, with a volume depending on distance. This is called positional or spatial voice chat in games. It differs from the commonly implemented voice chat in which the participants of a conversation are statically defined by a team or group belonging. Positional voice chat has been around for quite some time in games and seems to be of interest to many users; despite this, it is still not very common.

    This thesis investigates the impact of implementing a positional voice chat in the existing MMORPG Mortal Online by Star Vault. How is it built, what are the costs, how many users can it support, and what do the users think of it? These are some of the questions answered within this project.

    Design science research was selected as the scientific method. A product in the form of a positional voice chat library was constructed. This library was integrated into the existing game engine and its usage was evaluated by the game's end users.

    Results show that a positional voice system that in theory supports up to 12,500 simultaneous users can be built from scratch and patched into an existing game in less than 600 man-hours. The system needs third-party libraries for threading, audio input/output, audio compression, network communication and mathematics. All libraries used in the project are free for use in commercial products and do not require code that uses them to become open source.

    Based on a survey taken by more than 200 users, the product received good ratings on Quality of Experience, and most users think having a positional voice chat in a game like Mortal Online is important. Results show a trend of younger and less experienced users giving the highest average ratings on quality, usefulness and importance of the positional voice chat, suggesting it may be a good tool for attracting new players to a game.
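    The core of the distance-based volume described above can be sketched as a per-speaker gain computation. The linear falloff and the 50-unit range below are illustrative assumptions, not Mortal Online's actual attenuation model:

    ```python
    from math import dist  # Python 3.8+

    def positional_gain(listener, speaker, max_range=50.0):
        """Gain in [0, 1]: full volume at the speaker's position,
        fading linearly to silence at max_range and beyond."""
        d = dist(listener, speaker)
        return max(0.0, 1.0 - d / max_range)

    # A listener halfway into the audible range hears the speaker at half volume.
    gain = positional_gain((0.0, 0.0, 0.0), (25.0, 0.0, 0.0))  # 0.5
    ```

    In a real engine such a gain would be applied per audio frame before mixing, so each nearby speaker contributes to the listener's output stream at their own volume.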

  • 227.
    Khan, Rizwan Bahrawar
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparative Study of Performance Testing Tools: Apache JMeter and HP LoadRunner (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Software testing plays a key role in software development. There are two approaches to software testing, manual testing and automated testing, which are used to detect faults. There are numerous automated software testing tools with different purposes, but selecting a testing tool according to one's needs remains a problem. In this research, the author compares two software testing tools, Apache JMeter and HP LoadRunner, to determine their usability and efficiency. To compare the tools, different parameters were selected to guide the tool evaluation process. To complete the objective of the research, a scenario-based survey was conducted and two different web applications were tested. From this research, it is found that Apache JMeter has an edge over HP LoadRunner in different aspects, including installation, interface and learning.

  • 228. Khurum, Mahvish
    et al.
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    The Contextual Nature of Innovation: An Empirical Investigation of Three Software Intensive Products (2015). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 57, no. 1. Article in journal (Refereed)
    Abstract [en]

    Context: New products create significant opportunities for differentiation and competitive advantage. To increase the chances of new product success, a universal set of critical activities and determinants has been recommended. Some researchers believe, however, that these factors are not universal, but contextual. Objective: This paper reports the innovation processes followed to develop three software-intensive products, in order to understand how and why innovation practice depends on innovation context. Method: This paper reports innovation processes and practices with an in-depth multi-case study of three software product innovations from Ericsson, IBM, and Rorotika. It describes the actual innovation processes followed in the three cases, discusses the observed innovation practice and relates it to the state-of-the-art. Results: The cases point to a set of contextual factors that influence the choice of innovation activities and determinants for developing successful product innovations. The cases provide evidence that innovation practice cannot be standardized, but is contextual in nature. Conclusion: The rich description of the interaction between context and innovation practice enables future investigations into contextual elements that influence innovation practice, and calls for the creation of frameworks enabling activity and determinant selection for a given context, since one size does not fit all.

  • 229. Khurum, Mahvish
    et al.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Extending value stream mapping through waste definition beyond customer perspective (2014). In: Journal of Software: Evolution and Process, ISSN 2047-7481, Vol. 26, no. 12, p. 1074-1105. Article in journal (Refereed)
    Abstract [en]

    Value Stream Mapping is one of several Lean practices that have recently attracted interest in the software engineering community. In other contexts (such as military, health, and production), Value Stream Mapping has achieved considerable improvements in processes and products. The goal is to leverage these benefits in the software-intensive product development context as well. The primary contribution is an extension of the definition of waste to fit the software-intensive product development context. Since, traditionally in Value Stream Mapping, everything that is not considered valuable is waste, we do this by looking at value beyond the customer perspective, using the Software Value Map. A detailed illustration, via application in an industrial case at Ericsson AB, demonstrates the usability and usefulness of the proposed extension. The case study results consist of two parts. First, the instantiation and motivations for selecting certain strategies are provided. Second, the outcome of the value stream map is described in detail. Overall, the conclusion is that this case study indicates that Value Stream Mapping and its integration with the Software Value Map are useful in a software-intensive product development context. In a retrospective, the value stream approach was perceived positively by the practitioners with respect to both process and outcome.

  • 230.
    Kihlström, Kalle
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Responsive Images: A comparison of responsive image techniques with a focus on performance (2016). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    This bachelor thesis dives into the topic of responsive images on the web. With more and more different devices accessing the web, all with different conditions, serving the right image for each and every device is an important matter. This thesis looks into the topic and compares a few available techniques that could potentially solve this problem of providing the right image.

    The thesis includes a literature study as well as an experiment; both parts are presented, analyzed and summarized for the reader. The experiment is a performance benchmark of two different responsive image techniques. A non-responsive image alternative is also tested as a baseline against which to evaluate the responsive image techniques and to see how big a difference the techniques can make.

    Ultimately, the two responsive image techniques put through the experiment performed relatively evenly, and both showed large improvements in performance over the non-responsive alternative.
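    The selection problem these techniques solve can be modeled in a few lines. This is a simplified, hypothetical model of how a browser picks among `srcset`-style width variants, not the behavior of any specific technique from the thesis:

    ```python
    def pick_variant(viewport_w, dpr, variants):
        """Choose the smallest image still wide enough for the device
        (viewport width x device pixel ratio); fall back to the largest
        variant when none is big enough.

        variants: list of (url, intrinsic_width_px) tuples.
        """
        needed = viewport_w * dpr
        wide_enough = [v for v in variants if v[1] >= needed]
        if wide_enough:
            return min(wide_enough, key=lambda v: v[1])
        return max(variants, key=lambda v: v[1])

    variants = [("img-400.jpg", 400), ("img-800.jpg", 800), ("img-1600.jpg", 1600)]
    choice = pick_variant(360, 2, variants)  # needs 720 px -> ("img-800.jpg", 800)
    ```

    Serving the 800-pixel file instead of the 1600-pixel one to a small screen is exactly where the measured performance gains come from.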

  • 231.
    Kitamura, Takashi
    et al.
    National Institute of Advanced Industrial Science and Technology, JPN.
    Alégroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ramler, Rudolf
    Software Competence Center Hagenberg, AUT.
    Industry-Academia Collaboration in Software Testing: An Overview of TAIC PART 2017 (2017). In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2017, Institute of Electrical and Electronics Engineers Inc., 2017, p. 42-43. Conference paper (Refereed)
    Abstract [en]

    Collaboration between industry and academia in software testing leads to improvement and innovation in industry, and it is the basis for achieving transferable and empirically evaluated results. Thus, the aim of TAIC PART is to forge collaboration between industry and academia on the challenging and exciting problem of real-world software testing. The workshop is promoted by representatives of both industry and academia, bringing together industrial software engineers and testers with researchers working on theory and practice of software testing. We present an overview of the 12th Workshop on Testing: Academia-Industry Collaboration, Practice and Research Techniques (TAIC PART 2017) and its contributions. © 2017 IEEE.

  • 232. Kittlaus, Hans-Bernd
    et al.
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Product Management: The ISPMA-Compliant Study Guide and Handbook (2017). Book (Other academic)
    Abstract [en]

    This book gives a comprehensive overview of Software Product Management (SPM) for beginners, as well as best practices, methodology and in-depth discussions for experienced product managers. This includes product strategy, product planning, participation in strategic management activities and orchestration of the functional units of the company. The book is based on the results of the International Software Product Management Association (ISPMA), which is led by a group of SPM experts from industry and research with the goal of fostering software product management excellence across industries. This book can be used as a textbook for ISPMA-based education and as a guide for anybody interested in SPM as one of the most exciting and challenging disciplines in the business of software.

  • 233.
    Klotins, Erik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software engineering knowledge areas in startup companies: A mapping study (2015). Conference paper (Refereed)
    Abstract [en]

    Background – Startup companies are becoming important suppliers of innovative and software-intensive products. The failure rate among startups is high due to lack of resources, immaturity, multiple influences and dynamic technologies. However, software product engineering is the core activity in startups; therefore, inadequacies in applied engineering practices might be a significant contributing factor to the high failure rates. Aim – This study identifies and categorizes software engineering knowledge areas utilized in startups to map out the state-of-the-art and identify gaps for further research. Method – We perform a systematic literature mapping study, applying snowball sampling to identify relevant primary studies. Results – We have identified 54 practices from 14 studies. Although 11 of 15 main knowledge areas from SWEBOK are covered, a large part of the categories is not. Conclusions – Existing research does not provide reliable support for software engineering in any phase of a startup life cycle. Transfer of results to other startups is difficult due to the low rigor of current studies. © Springer International Publishing Switzerland 2015.

  • 234.
    Klotins, Eriks
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software start-ups through an empirical lens: Are start-ups snowflakes? (2018). In: CEUR Workshop Proceedings / [ed] Wang X., Munch J., Suominen A., Bosch J., Jud C., Hyrynsalmi S., CEUR-WS, 2018. Conference paper (Refereed)
    Abstract [en]

    Most of the existing research assumes that software start-ups are “unique” and require a special approach to software engineering. The uniqueness of start-ups is often justified by the scarcity of resources, time pressure, little operating history, and a focus on innovation. As a consequence, most research on software start-ups concentrates on exploring the start-up context and overlooks the potential of transferring the best engineering practices from other contexts to start-ups. In this paper, we examine results from an earlier mapping study reporting terms frequently used in the literature to characterize start-ups. We analyze how much empirical evidence supports each characteristic, and how unique each characteristic is in the context of innovative, market-driven, software-intensive product development. Our findings suggest that many of the terms used to describe start-ups originate from anecdotal evidence and have little empirical backing. Therefore, there is a potential to revise the original start-up characterization. In conclusion, we identify three potential research avenues for further work: a) considering the shareholder perspective in product decisions, b) providing support for software engineering in rapidly growing organizations, and c) focusing on transferring the best engineering practices from other contexts to start-ups. © 2018 CEUR-WS. All rights reserved.

  • 235.
    Klotins, Eriks
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Using the Case Survey Method to Explore Engineering Practices in Software Start-Ups (2017). In: Proceedings - 2017 IEEE/ACM 1st International Workshop on Software Engineering for Startups, SoftStart 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 24-26. Conference paper (Refereed)
    Abstract [en]

    Software start-ups are a new and relatively unexplored field for software engineering researchers. However, conducting empirical studies with start-ups is difficult. Start-ups produce very little 'hard' evidence, so data collection methods are limited to interviews and surveys. These methods come with their limitations: interview studies do not scale to a large number of companies, and surveys are not generally applicable for exploratory studies. In this paper we present a hybrid research method aimed at providing a compromise between the breadth of a survey and the depth of an interview study. The case survey method enables both qualitative and quantitative analysis of the studied cases. We adapt the case survey method for use in primary studies and report experience with its application. The case survey method was successfully applied to design and launch a large-scale study into engineering aspects of start-ups. We conclude that the case survey method is a promising research method for launching exploratory studies into large samples of start-up companies. © 2017 IEEE.

  • 236.
    Klotins, Eriks
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chatzipetrou, Panagiota
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Prikladnicki, Rafael
    Pontificia Universidade Catolica do Rio Grande do Sul, BRA.
    Tripathi, Nirnaya
    Oulun Yliopisto, Oulu, FIN.
    Pompermaier, Leandro Bento
    Pontificia Universidade Catolica do Rio Grande do Sul, BRA.
    Exploration of technical debt in start-ups (2018). In: Proceedings - International Conference on Software Engineering, IEEE Computer Society, 2018, p. 75-84. Conference paper (Refereed)
    Abstract [en]

    Context: Software start-ups are young companies aiming to build and market software-intensive products fast with little resources. Aiming to accelerate time-to-market, start-ups often opt for ad-hoc engineering practices, take shortcuts in product engineering, and accumulate technical debt. Objective: In this paper we explore to what extent the precedents, dimensions and outcomes associated with technical debt are prevalent in start-ups. Method: We apply a case survey method to identify aspects of technical debt and contextual information characterizing the engineering context in start-ups. Results: By analyzing responses from 86 start-up cases we found that start-ups accumulate most technical debt in the testing dimension, despite attempts to automate testing. Furthermore, we found that start-up team size and experience are a leading precedent for accumulating technical debt: larger teams face more challenges in keeping the debt under control. Conclusions: This study highlights the necessity of monitoring levels of technical debt and of preemptively introducing practices to keep the debt under control. Adding more people to an already difficult-to-maintain product could amplify other precedents, such as resource shortages and communication issues, and negatively affect decisions pertaining to the use of good engineering practices. © 2018 ACM.

  • 237.
    Klotins, Eriks
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chatzipetrou, Panagiota
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Prikladniki, Rafael
    Pontificia Universidade Catolica do Rio Grande do Sul, BRA.
    Tripathi, Nirnaya
    Oulun Yliopisto, FIN.
    Pompermaier, Leandro Bento
    Pontificia Universidade Catolica do Rio Grande do Sul, BRA.
    A progression model of software engineering goals, challenges, and practices in start-ups (2019). In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520. Article in journal (Refereed)
    Abstract [en]

    Context: Software start-ups are emerging as suppliers of innovation and software-intensive products. However, traditional software engineering practices are not evaluated in this context, nor adapted to the goals and challenges of start-ups. As a result, there is insufficient support for software engineering in the start-up context. IEEE

  • 238.
    Klotins, Eriks
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Engineering Anti-Patterns in Start-Ups (2019). In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 36, no. 2, p. 118-126. Article in journal (Refereed)
    Abstract [en]

    Software start-up failures are often explained with a poor business model, market issues, insufficient funding, or simply a bad product idea. However, inadequacies in software engineering are relatively unexplored and could be a significant contributing factor to the high start-up failure rate. In this paper we present the analysis of 88 start-up experience reports, revealing three anti-patterns associated with start-up progression phases. The anti-patterns address challenges of releasing the first version of the product, attracting customers, and expanding the product into new markets. The anti-patterns show that challenges and failure scenarios that appear to be business or market related are, at least partially, rooted in engineering inadequacies.

  • 239.
    Klotins, Eriks
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software engineering in start-up companies: An analysis of 88 experience reports (2019). In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 24, no. 1, p. 68-102. Article in journal (Refereed)
    Abstract [en]

    Context: Start-up companies have become an important supplier of innovation and software-intensive products. The flexibility and reactiveness of start-ups enables fast development and launch of innovative products. However, a majority of software start-up companies fail before achieving any success. Among other factors, poor software engineering could be a significant contributor to the challenges experienced by start-ups. However, the state-of-practice of software engineering in start-ups, as well as the utilization of the state-of-the-art, is largely an unexplored area. Objective: In this study we investigate how software engineering is applied in the start-up context, with a focus on identifying key knowledge areas and opportunities for further research. Method: We perform a multi-vocal exploratory study of 88 start-up experience reports. We develop a custom taxonomy to categorize the reported software engineering practices and their interrelation with business aspects, and apply qualitative data analysis to explore influences and dependencies between the knowledge areas. Results: We identify the most frequently reported software engineering (requirements engineering, software design and quality) and business aspect (vision and strategy development) knowledge areas, and illustrate their relationships. We also present a summary of how relevant software engineering knowledge areas are implemented in start-ups and identify potentially useful practices for adoption in start-ups. Conclusions: The results enable more focused research on engineering practices in start-ups. We conclude that most engineering challenges in start-ups stem from inadequacies in requirements engineering. Many promising practices to address specific engineering challenges exist; however, more research on the adaptation of established practices and the validation of new start-up-specific practices is needed. © 2018 The Author(s)

  • 240.
    Klotins, Eriks
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software-intensive product engineering in start-ups: a taxonomy (2018). In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 35, no. 4, p. 44-52. Article in journal (Refereed)
    Abstract [en]

    Software start-ups are new companies aiming to launch an innovative product to mass markets fast with minimal resources. However, a majority of start-ups fail before realizing their potential. Poor software engineering, among other factors, could be a significant contributor to the challenges experienced by start-ups.

    Very little is known about the engineering context in start-up companies. On the surface, start-ups are characterized by uncertainty, high risk and minimal resources. However, such a characterization is not granular enough to support the identification of specific engineering challenges and to devise start-up-specific engineering practices.

    The first step towards understanding software engineering in start-ups is the definition of the Start-up Context Map: a taxonomy of engineering practices, environmental factors and goals influencing the engineering process. The goal of the Start-up Context Map is to support further research in the field and to serve as an engineering decision support tool for start-ups.

  • 241. Kocaguneli, Ekrem
    et al.
    Menzies, Tim
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Transfer learning in effort estimation (2015). In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 20, no. 3, p. 813-843. Article in journal (Refereed)
    Abstract [en]

    When projects lack sufficient local data to make predictions, they try to transfer information from other projects. How can we best support this process? In the field of software engineering, transfer learning has been shown to be effective for defect prediction. This paper checks whether it is possible to build transfer learners for software effort estimation. We use data on 154 projects from 2 sources to investigate transfer learning between different time intervals, and 195 projects from 51 sources to provide evidence on the value of transfer learning for traditional cross-company learning problems. We find that the same transfer learning method can be useful for transferring effort estimation results in both the cross-company learning problem and the cross-time learning problem. It is misguided to think that: (1) the old data of an organization is irrelevant to the current context, or (2) the data of another organization cannot be used for local solutions. Transfer learning is a promising research direction that transfers relevant cross data between time intervals and domains.
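    As a rough illustration of the instance-based flavor of transfer that such studies build on (the paper's actual method is more elaborate), a target project's effort can be estimated from its k most similar cross-company projects. The field names and data below are made up:

    ```python
    def euclidean(a, b):
        """Distance between two (already normalized) feature vectors."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def transfer_estimate(target_features, cross_projects, k=2):
        """Mean effort of the k cross-company projects nearest to the target."""
        nearest = sorted(cross_projects,
                         key=lambda p: euclidean(p["features"], target_features))[:k]
        return sum(p["effort"] for p in nearest) / k

    cross = [
        {"features": (0.2, 0.3), "effort": 10.0},
        {"features": (0.25, 0.3), "effort": 12.0},
        {"features": (0.9, 0.8), "effort": 100.0},
    ]
    estimate = transfer_estimate((0.2, 0.3), cross)  # averages the two nearest -> 11.0
    ```

    The point of transfer learning research is precisely to select which cross instances are relevant, rather than pooling all foreign data indiscriminately.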

  • 242.
    Kollu, Ravichandra Kumar
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Requirements scoping outside product lines: Systematic Literature Review and Survey (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: Scoping is recognized as a key activity in market-driven software development for handling the constant inflow of requirements. It helps in identifying the features, domains and products that yield economic benefits in Software Product Line (SPL) development. Beyond SPL, managing the scope of a project is a major risk for project management. A continuously changing scope creates congestion in handling the requirements inflow, which causes negative consequences such as scope scrap and scope creep. Managing the negative consequences caused by requirements volatility therefore calls for work on requirements scoping outside the product line.

    Objectives: In this study, exploratory work is carried out to identify the literature and industrial perspectives on requirements scoping outside the product line. The main objectives are:

    • Identifying the state of the literature on requirements scoping outside the product line and variability analysis.
    • Exploring industrial practice on requirements scoping.
    • Suggesting recommendations for improving the scoping process based on the literature and the survey.

    Methods: A Systematic Literature Review (SLR) using the snowballing procedure was conducted to identify the literature available on requirements scoping outside the product line. Quality assessment using rigor and relevance was performed to assess the trustworthiness of the papers obtained through the SLR. The data obtained through the SLR were analyzed using narrative analysis. Furthermore, an industrial survey was performed using a web questionnaire to identify the industrial perspective on requirements scoping. Statistical analysis was performed on the data obtained from the survey.

    Results: 23 relevant papers were identified through the SLR. The results were categorized into definitions obtained, phenomena, challenges and methods/tools identified. Based on the findings of the SLR, an industrial survey was conducted, which obtained 93 responses. The challenges identified in the literature were validated through the survey and prioritized. Moreover, the study identified additional challenges not discussed in the literature. Additionally, the approaches organizations follow when scoping requirements were identified through the survey.

    Conclusions: This study identified scope creep as the most frequently occurring phenomenon that organizations face throughout the project lifecycle. In addition, project delays, quality issues, and increased project cost were identified as the most common scoping-related challenges. Moreover, scoping was identified as a continuous activity that changes significantly throughout the lifecycle. Finally, suggestions were given for improving the scoping process.

  • 243.
    Kolonko, Kamil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Performance comparison of the most popular relational and non-relational database management systems2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. A database is an essential part of any software product. With an emphasis on application performance, database efficiency becomes one of the key factors to analyze in the process of technology selection. With the development of new data models and storage technologies, the need for a comparison between relational and non-relational database engines is especially evident in the software engineering domain.

    Objectives. This thesis investigates current knowledge on database performance measurement methods and the popularity of relational and non-relational database engines, defines database characteristics, approximates their average values, and compares the performance of two selected database engines.

    Methods. In this study a number of research methods are used, including a literature review, a review of Internet sources, and an experiment. The literature datasets used in the research incorporate over 100 sources, including IEEE Xplore and ACM Digital Library. The YCSB benchmark was used as a direct performance comparison method in an experiment comparing OracleDB's and MongoDB's performance.

    Results. A list of database performance measurement methods has been defined as a result of the literature review. The two most popular database management engines, one relational and one non-relational, have been identified. A set of database characteristics and a database performance comparison methodology have been identified. The performance of the two selected database engines has been measured and compared.

    Conclusions. The performance comparison between the two selected database engines indicated superior results for MongoDB under the experimental conditions. This database proved to be more efficient in terms of average operation latency and throughput for each of the measured workloads. OracleDB, however, presented stable results in each category, leaving the final choice of database to the specifics of a software engineering project. The activities required to define a database performance comparison methodology proved to be challenging and require further study.
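The two measures named in the conclusions, average operation latency and throughput, fall out directly from raw per-operation timings. A minimal sketch of how they relate (invented numbers, not the thesis's YCSB results):

```python
# Sketch of the two YCSB-style measures discussed above: average operation
# latency and throughput. The latencies (milliseconds) are invented data.

def summarize(latencies_ms, wall_time_s):
    avg_latency_ms = sum(latencies_ms) / len(latencies_ms)   # mean per-op latency
    throughput_ops_s = len(latencies_ms) / wall_time_s       # ops completed per second
    return avg_latency_ms, throughput_ops_s

latencies = [1.0, 2.0, 1.5, 0.5, 5.0]   # 5 operations
avg, tput = summarize(latencies, wall_time_s=2.0)
print(avg, tput)  # 2.0 2.5
```

Note that with concurrent client threads the two measures diverge: throughput can rise while per-operation latency also rises, which is why YCSB reports both per workload.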

  • 244.
    Kommineni, Mohanarajesh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Parvathi, Revanth
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    RISK ANALYSIS FOR EXPLORING THE OPPORTUNITIES IN CLOUD OUTSOURCING2013Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context: Cloud outsourcing is a new form of outsourcing that is yet to be widely implemented. In this form of outsourcing, software organizations outsource work to e-freelancers around the world using cloud services via the Internet. Organizations hand over a task to the cloud, where e-freelancers undertake its development and return the finished task to the cloud. Organizations then collect the finished task from the cloud, verify it, and pay the e-freelancer.

    Objectives: The aim of this study is to identify the sequence of activities involved in the cloud outsourcing process and the risks likely to occur during its implementation, and to prioritize the elicited risks according to their probability of occurrence, their impact, and the cost required to mitigate them.

    Methods: Data were collected through a literature review and then synthesized. In parallel, interviews with practitioners were conducted to identify the activities involved and the risks likely to occur during the implementation of cloud outsourcing. After this, a survey was conducted to prioritize the risks, and a standard risk analysis was performed. The literature review used four databases and covered the literature from 1990 to date.

    Results: In total, 21 risks likely to occur and 8 activities were identified. Through the risk analysis, the risks that should be addressed first are presented, and relevant countermeasures to overcome them are suggested.
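The prioritization described above, by probability of occurrence, impact, and mitigation cost, is commonly computed as a risk exposure score. A hedged sketch with invented risks and ratings; the thesis's actual 21 risks and their values are not reproduced here:

```python
# Hypothetical risk prioritization: rank elicited risks by exposure
# (probability x impact), using mitigation cost as a tiebreaker.
# Risk names and all numbers are invented for illustration.

risks = [
    {"name": "communication breakdown", "probability": 0.7, "impact": 8,  "mitigation_cost": 3},
    {"name": "payment disputes",        "probability": 0.4, "impact": 9,  "mitigation_cost": 2},
    {"name": "IP leakage",              "probability": 0.3, "impact": 10, "mitigation_cost": 5},
]

def exposure(risk):
    return risk["probability"] * risk["impact"]

# Highest exposure first; among equal exposures, cheaper mitigations first
ranked = sorted(risks, key=lambda r: (-exposure(r), r["mitigation_cost"]))
for r in ranked:
    print(f'{r["name"]}: exposure={exposure(r):.2f}')
```

The sort key illustrates the three criteria from the abstract: probability and impact combine into exposure, while mitigation cost breaks ties so that cheaper countermeasures are scheduled first.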

  • 245.
    Koppula, Thejendar Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Regression Testing Goals and Measures: An industrial approach2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: When software is modified, regression testing is performed to ensure that its behaviour is not affected by those modifications. Due to frequent modifications, regression testing becomes challenging. Although many regression testing techniques have been developed in research, they are not adopted in industry, because regression testing goals and measures differ between research and industry. This study identifies the regression testing goals and measures from the research and industry perspectives and examines the differences and similarities between the two.

    Objectives: The primary objective of this study is to identify the similarities and differences in regression testing goals and measures between the research and industry perspectives. Additionally, a general, adapted list of goals is presented.

    Methods: A mixed-method approach is used in this study. A literature review identifies the regression testing goals and measures in research, and a survey identifies those in industry. Semi-structured interviews and an online questionnaire are used as data collection methods in the survey. Thematic analysis and descriptive statistics are used as data analysis methods for the qualitative and quantitative data, respectively.

    Results: A literature review was conducted using 33 research articles. In the survey, data were collected from 11 semi-structured interviews and validated with 45 responses from an online questionnaire. A total of 6 regression testing goals were identified from the literature review and 8 from the survey. The measures used to evaluate these goals are identified and tabulated.

    Conclusions: From the results, we observed similarities and differences in regression testing goals and measures between industry and research. There are a few similarities in goals, but the major difference is their priority order. Various measures are used in research, yet far fewer are adopted in industry. The survey respondents indicated a need for generic, adaptive goals. Accordingly, a general list of goals is presented.

    Keywords: Regression, Regression testing, Goals, Objectives, Measures, Metrics.

  • 246.
    Korraprolu, Srinivasa Abhilash
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evaluation of the Relevance of Agile Maturity Models in the Industry: A Case Study2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background.

    Over the years, agile software development has become increasingly popular in the software industry. One reason is that agile development addresses the needs of organisations better than traditional models, such as the waterfall model. However, the textbook version of agile development still leaves something to be desired, which can be learnt by observing the implementation of agile methods and frameworks in the industry, where teams often customize agile methods to suit their context-specific needs. When teams in the industry decide to adopt the agile way of working, they face a choice: either implement all the agile practices at once or adopt them over time. The former choice has been shown to carry risks, and practitioners were found to generally prefer the latter. However, agile practices are not independent; they have dependencies amongst them. A new approach to agile development, known as Agile Maturity Models (AMMs), has emerged in recent years. AMMs claim to offer a better path to agile adoption: practices are typically introduced gradually in a particular order. However, these AMMs are multifarious and have not been sufficiently evaluated, especially in industry practice. Thus, they need to be evaluated in order to understand their relevance in the industry.

     

    Objectives.

    The goal is to evaluate the relevance of AMMs in the industry. Relevant AMMs could be used to ease the formation of agile teams and contribute toward their smoother functioning; for those that are not relevant, this research can act as a caution to practitioners who might implement them and risk failure. The objectives are: identifying the agile practice dependencies in the AMMs; finding the agile practice dependencies in an agile team by conducting a case study in the industry; and comparing the dependencies from the case study with those in the AMMs.


     

    Methods.

    The agile maturity models were identified and analysed. A case study was conducted on an agile team to identify the dependencies between the agile practices in the industry practice. Semi-structured interviews were conducted with members of the agile team. Qualitative coding was used to analyse the collected data. The dependencies from the case study were compared with the AMMs to achieve the aim of this research.

     

    Results.

    It was found that dependencies between individual agile practices could almost never be identified in the AMMs. However, the practices suggested at each maturity level could be derived, so the dependencies were established at the level of maturity levels. From the case study, 20 agile practice dependencies were found. 7 of the 8 AMMs were found not to be relevant. 1 AMM could not be evaluated, as it relied heavily on the practitioner's choices.

     

    Conclusions.

    Researchers could use the evaluation method presented in this thesis to conduct more such evaluations. By doing so, the dynamics present in industry teams could be better understood, and on that basis relevant AMMs could be developed in the future. Such AMMs could help practitioners leverage agile development.

  • 247. Kosti, Makrina Viola
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Angelis, Lefteris
    Archetypal personalities of software engineers and their work preferences: a new perspective for empirical studies2016In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 21, no 4, p. 1509-1532Article in journal (Refereed)
  • 248.
    Koutsoumpos, Vasileios
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Marinelarena, Iker
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Agile Methodologies and Software Process Improvement Maturity Models, Current State of Practice in Small and Medium Enterprises2013Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    Background: Software Process Improvement (SPI) maturity models have been developed to assist organizations in enhancing software quality. Agile methodologies are used to ensure the productivity and quality of a software product and are applied, amongst others, in Small and Medium-sized Enterprises (SMEs). However, little is known about combining Agile methodologies and SPI maturity models in SMEs and the results that could emerge, as the current SPI models are addressed to larger organizations and are difficult for small and medium-sized firms to use. Combinations of these methodologies could lead to improved software product quality, better project management methodologies, and a more organized software development framework. Objectives: The aim of this study is to identify the main Agile methodologies and SPI maturity models applied in SMEs, the combinations of these methodologies, and the results that could emerge. Through these combinations, new software development frameworks are proposed. Moreover, the results of this study can be used as a guide to the appropriate combination for each SME, as a better project management methodology, or as an improvement to current software engineering practices. Methods: A Systematic Literature Review was conducted, resulting in 71 selected relevant papers ranging from 2001 to 2013. In addition, a survey was performed from June 2013 to October 2013, including 49 participants. Results: Seven Agile methodologies and six different SPI maturity models were identified and discussed. Furthermore, combinations of eight different Agile methodologies and SPI maturity models are presented, together with the benefits and drawbacks that could emerge in small and medium-sized firms. Conclusion: The majority of Agile methodologies and SPI maturity models are addressed to large or very large enterprises. Thus, little research has been conducted for SMEs. Combinations of Agile methodologies and SPI maturity models are usually performed at an experimental stage. However, such combinations have been observed to present numerous benefits, many of which are applicable in SMEs as well. The most common combinations are CMMI and XP, CMMI and Scrum, CMMI and Six Sigma, and PRINCE2 and DSDM.

  • 249.
    Krishna Chaitanya, Konduru
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Scalability Drivers in Requirements Engineering2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 250.
    Krusche, Stephan
    et al.
    Technische Universität München, DEU.
    Seitz, Andreas
    Technische Universität München, DEU.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Bruegge, Bernd
    Technische Universität München, DEU.
    Interactive Learning: Increasing Student Participation through Shorter Exercise Cycles2017In: ACM International Conference Proceeding Series Volume Part F126225, ACM Digital Library, 2017, p. 17-26Conference paper (Refereed)
    Abstract [en]

    In large classes, there is typically a clear separation between content delivery in lectures on the one hand and content deepening in practical exercises on the other. This temporal and spatial separation has several disadvantages. In particular, it separates students' hearing about a new concept from actually practicing and applying it, which may decrease knowledge retention.

    To closely integrate lectures and practical exercises, we propose an approach we call interactive learning: it is based on active, computer-based, and experiential learning, and includes immediate feedback and learning from reflection on experience. It decreases the time between content delivery and content deepening to a few minutes and allows for flexible and more efficient learning. Shorter exercise cycles allow students to apply and practice multiple concepts per teaching unit directly after first hearing about them.

    We applied interactive learning in two large software engineering classes with 300 students each and evaluated its use qualitatively and quantitatively. Student participation increases compared to traditional classes: until the end of the course, around 50% of the students attend class and participate in exercises. Our evaluations show that students' learning experience and exam grades correlate with the increased participation. While educators need more time to prepare the class and the exercises, they need less time to review exercise submissions. The overall teaching effort for instructors and teaching assistants does not increase.
