451 - 500 of 644
  • 451.
    Pernstal, J.
    et al.
    Volvo Cars, SWE.
    Feldt, Robert
    Chalmers, SWE.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik. Blekinge Inst Technol, Software Engn, SE-37179 Karlskrona, Sweden.
    Floren, D.
    Volvo Cars, SWE.
    FLEX-RCA: a lean-based method for root cause analysis in software process improvement. 2019. In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 27, no. 1, pp. 389-428. Article in journal (Refereed)
    Abstract [en]

    Software process improvement (SPI) is an instrument to increase the productivity of, and the quality of work in, software organizations. However, a majority of SPI frameworks are too extensive or provide guidance and potential improvement areas at a high level, indicating only the symptoms, not the causes. Motivated by the industrial need of two Swedish automotive companies to systematically uncover the underlying root causes of high-level improvement issues identified in an SPI project (assessing inter-departmental interactions in large-scale software systems development), this paper advances a root cause analysis (RCA) method building on Lean Six Sigma, called Flex-RCA. Flex-RCA is used to delve deeper into identified challenges to find root causes as a part of the evaluation and subsequent improvement activities. We also demonstrate and evaluate Flex-RCA's industrial applicability in a case study. An overall conclusion is that the use of Flex-RCA was successful, showing that it had the desired effect of both producing a broad base of causes on a high level and, more importantly, enabling an exploration of the underlying root causes.

  • 452.
    Pernstål, Joakim
    et al.
    Volvo Car Corporation, SWE.
    Feldt, Robert
    Chalmers, SWE.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Florén, Dan
    Volvo Car Corporation, SWE.
    Communication Problems in Software Development: A Model and Its Industrial Application. 2019. In: International journal of software engineering and knowledge engineering, ISSN 0218-1940, Vol. 29, no. 10, pp. 1497-1538. Article in journal (Refereed)
    Abstract [en]

    Attaining effective communication within and across organizational units is among the most critical challenges for success in software development organizations. This paper presents a novel model supporting the analysis of problems in inter-departmental communication events. The model was developed and designed based on industrial needs, emphasizing flexibility, applicability and scalability. The model covers central communication aspects in order to provide a useful approximation of communication problems rather than in-depth modeling on a message-by-message basis. Other event-specific information, such as costs, can then be attached to enrich analysis and understanding. To exemplify and evaluate the model and collect feedback from industry, it was applied to 16 events at a Swedish automotive manufacturer where communication between two departments had broken down during development of software-intensive systems. The evaluation showed that the model helped structure and conduct systematic data collection and analysis of dysfunctional communication patterns. We found that insufficient understanding of the matters being communicated was prevalent, but also, more specifically, that requirements were insufficiently balanced, detailed and specified over the full system development cycle. In addition, the long-term cost for the company was analyzed in depth for each event, yielding a total estimated cost for the analyzed communication events of 11.2 million US$. © 2019 World Scientific Publishing Company.

  • 453.
    Pernstål, Joakim
    et al.
    Volvo Car Corp, SE-40531 Gothenburg, Sweden.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Feldt, Robert
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Florén, Dan
    Volvo Car Corp, SE-40531 Gothenburg, Sweden.
    Requirements communication and balancing in large-scale software-intensive product development. 2015. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 67, pp. 44-64. Article in journal (Refereed)
    Abstract [en]

    Context: Several industries developing products on a large scale are facing major challenges as their products become more and more software-intensive. Whereas software was once considered a detail to be bundled, it has since become an intricate and interdependent part of most products. The advancement of software increases the uncertainty and the interdependencies between development tasks and artifacts. A key success factor is good requirements engineering (RE) and, in particular, meeting the challenges of effectively and efficiently coordinating and communicating requirements. Objective: In this work we present a lightweight RE framework and demonstrate and evaluate its industrial applicability in response to the needs of a Swedish automotive company for improving specific problems in inter-departmental requirements coordination and communication in large-scale development of software-intensive systems. Method: A case study approach and a dynamic validation were used to develop and evaluate the framework in close collaboration with our industrial partner, involving three real-life cases in an ongoing car project. Experience and feedback were collected through observations when applying the framework and from 10 senior industry professionals in a questionnaire and in-depth follow-up interviews. Results: The experience and feedback about using the framework revealed that it is relevant and applicable for the industry, as well as a useful and efficient way to resolve real problems in coordinating and communicating requirements identified at the case company. However, other concerns, such as accessibility to necessary resources and competences in the early development phases, were identified when using the method, which allowed for earlier pre-emptive action to be taken.
Conclusion: Overall, the experience from using the framework and the positive feedback from industry professionals indicated a feasible framework that is applicable in the industry for improving problems related to coordination and communication of requirements. Based on the promising results, our industrial partner has decided upon further validations of the framework in a large-scale pilot program. (C) 2015 Elsevier B.V. All rights reserved.

  • 454.
    Petersen, Kai
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Badampudi, Deepika
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Ali Shah, Syed Muhammad
    SICS Swedish ICT AB, SWE.
    Wnuk, Krzysztof
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Papatheocharous, Efi
    SICS Swedish ICT AB, SWE.
    Axelsson, Jakob
    SICS Swedish ICT AB, SWE.
    Sentilles, Séverine
    Mälardalens högskola, SWE.
    Crnkovic, Ivica
    Chalmers, Göteborg, SWE.
    Cicchetti, Antonio
    Mälardalens högskola, SWE.
    Choosing Component Origins for Software Intensive Systems: In-house, COTS, OSS or Outsourcing? A Case Survey. 2018. In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 39, no. 12, pp. 237-261. Article in journal (Refereed)
    Abstract [en]

    The choice of which software component to use influences the success of a software system. Only a few empirical studies investigate how the choice of components is conducted in industrial practice. Understanding this is important in order to tailor research solutions to the needs of industry. Existing studies focus on the choice of off-the-shelf (OTS) components. It is, however, also important to understand the implications of the choice of alternative component sourcing options (CSOs), such as outsourcing versus the use of OTS. Previous research has shown that the choice has major implications for the development process as well as for the ability to evolve the system. The objective of this study is to explore how decision making took place in industry to choose among CSOs. Overall, 22 industrial cases have been studied through a case survey. The results show that the solutions specifically for CSO decisions are deterministic and based on optimization approaches. The non-deterministic solutions proposed for architectural group decision making appear to suit CSO decision making in industry better. Interestingly, the final decision was perceived negatively in nine cases and positively in seven cases, while in the remaining cases it was perceived as neither positive nor negative.

  • 455.
    Petersen, Kai
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Engström, Emelie
    Finding relevant research solutions for practical problems: the SERP taxonomy architecture. 2014. Conference paper (Refereed)
    Abstract [en]

    Background: Experience and research indicate that there exists a communication gap between research and industry in software engineering. Objective: We propose the Software Engineering Research and Practice (SERP) taxonomy architecture to support communication between practitioners and researchers. The taxonomy architecture provides a basis for classifying research from a problem perspective, which in turn supports breaking down complex practical challenges into researchable units. Thus, such a taxonomy may support the mapping of challenges in industry to research solutions in the software engineering context. Method: In this paper we present SERP and exemplify its usage based on two literature studies in the field of software engineering. Further, we discuss how a taxonomy based on this architecture could have helped us in two past research projects that were conducted in close collaboration with industry. Finally, we validate SERP by applying it to the area of software testing, developing SERP-test, and interviewing two industry practitioners and two researchers. Results: The taxonomy architecture has been applied to two problems in software testing and has been assessed through interviews with practitioners and researchers. The interviews provided suggestions for how to improve the taxonomy architecture, which have been incorporated. With two examples, we demonstrated how the taxonomy architecture could be used to find solutions for industrial problems, and to find the problems addressed by a particular solution. Conclusion: SERP may be useful in multiple ways: (1) given that SERP taxonomies are populated with industrial problems and scientific solutions, we could rapidly identify candidate research solutions for industrial practice; (2) researchers could benefit from the taxonomy in the reporting of their research to ease the mapping to industrial challenges.

  • 456.
    Petersen, Kai
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gencel, Cigdem
    Asghari, Negin
    Baca, Dejan
    Betz, Stefanie
    Action research as a model for industry-academia collaboration in the software engineering context. 2014. Conference paper (Refereed)
    Abstract [en]

    Background: Action research is a well-established research methodology. It follows a post-positivist research philosophy grounded in critical thinking. The methodology is driven by practical problems, emphasizes participatory research, and develops practically useful solutions in an iterative manner. Objective: Two objectives are to be achieved: (1) understanding the state of the art with respect to action research usage in the software engineering literature, and (2) reflecting on and providing recommendations for how to foster industry-academia collaboration through action research. Method: Based on our experience with two action research studies in close collaboration with Ericsson, lessons learned and guidelines are presented. Results: In both presented cases, action research led to multiple refinements in the interventions implemented. Furthermore, the close collaboration and co-production with industry was essential to identify and describe the required refinements and to provide an in-depth understanding. In comparison with previous studies, we required multiple iterations, while previous software engineering studies reported mostly one iteration or were not explicit regarding the number of iterations studied. Conclusion: We conclude that action research is a powerful tool for industry-academia collaboration. The success of the method highly depends on the researchers and practitioners working in a team. Future studies need to improve the reporting with respect to describing the type of action research used, the iterations, the model of collaboration, and the rationales for changes in each iteration.

  • 457.
    Petersen, Kai
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gencel, Cigdem
    Asghari, Negin
    Betz, Stefanie
    An Elicitation Instrument for Operationalising GQM+Strategies (GQM+S-EI). 2015. In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 20, no. 4, pp. 968-1005. Article in journal (Refereed)
    Abstract [en]

    Context: A recent approach for measurement program planning, GQM+Strategies, provides an important extension to existing approaches, linking measurements and improvement activities to strategic goals and to ways of achieving these goals. There is a need for instruments aiding in eliciting information from stakeholders in order to use GQM+Strategies. The success of GQM+Strategies highly depends on accurately identifying goals, strategies and information needs from stakeholders. Objective: The research aims at providing an instrument (called GQM+S-EI) that aids practitioners in accurately eliciting the information needed by GQM+Strategies (capturing goals, strategies and information needs). Method: The research included two phases. In the first phase, using the action research method, GQM+S-EI was designed in three iterations at Ericsson AB. Thereafter, a case study was conducted to evaluate whether the information elicited with the designed instrument, following the defined process, was accurate and complete. Results: We identified that industry requires elicitation instruments that are capable of eliciting information from stakeholders without the stakeholders having to know about the underlying concepts (e.g. goals and strategies). The case study results showed that our proposed instrument is capable of accurately and completely capturing the needed information from the stakeholders. Conclusions: We conclude that GQM+S-EI can be used for accurately and completely eliciting the information needed by goal-driven measurement frameworks. The instrument has been successfully transferred to Ericsson AB for measurement program planning.

  • 458.
    Petersen, Kai
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Khurum, Mahvish
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Angelis, Lefteris
    Reasons for bottlenecks in very large-scale system of systems development. 2014. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 56, no. 10, pp. 1403-1420. Article in journal (Refereed)
    Abstract [en]

    Context: A system of systems (SoS) is a set or arrangement of systems that results when independent and useful systems are incorporated into a larger system that delivers unique capabilities. Our investigation showed that the development life cycle (i.e. the activities transforming requirements into design, code, test cases, and releases) in SoS is more prone to bottlenecks in comparison to single systems. Objective: The objective of the research is to identify reasons for bottlenecks in SoS, prioritize their significance according to their effect on bottlenecks, and compare them with respect to different roles and different perspectives, i.e. the SoS view (concerned with integration of systems) and the systems view (concerned with system development and delivery). Method: The research method used is a case study at Ericsson AB. Results: Results show that the most significant reasons for bottlenecks are related to requirements engineering. All the different roles agree on the significance of requirements-related factors. However, there are also disagreements between the roles, in particular with respect to quality-related reasons. Quality-related hindrances are primarily observed and highly prioritized by those responsible for quality assurance. Furthermore, the SoS view and the system view perceive different hindrances and prioritize them differently. Conclusion: We conclude that solutions for requirements engineering in the SoS context are needed, quality awareness in the organization has to be achieved end to end, and the SoS and system views need to be aligned to avoid sub-optimization in improvements.

  • 459.
    Petersen, Kai
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Nauman Bin, Ali
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Operationalizing the requirements selection process with study selection procedures from systematic literature reviews. 2015. In: CEUR Workshop Proceedings, 2015, Vol. 1342, pp. 102-113. Conference paper (Refereed)
    Abstract [en]

    Context: Software organizations working in a market-driven environment have to select requirements from a large pool to be prioritized and put into backlogs for the development organization. Objective: This paper proposes an approach based on study selection in systematic literature reviews and translates the concept to requirements engineering. The rationale for doing so is that the selection processes used there have been effective (selecting and finding relevant papers) and efficient (possible to use for a high number of studies; in some cases 10,000 research contributions had to be evaluated). Method: This paper can be classified as a solution proposal and utilizes hypothetical examples to explain and argue for the method design decisions. Results: The process proposed consists of three main phases, namely establish selection criteria, evaluate selection criteria, and apply selection. On a more fine-grained level, nine activities are specified. Conclusion: Given that the process has been effective and efficient in a similar context, our proposition to be evaluated in future research contributions is that the process leads to effective and efficient decision making in requirements selection. © 2015 by the authors.
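The phased selection idea summarized in the abstract (establish criteria, evaluate them, then apply selection) can be sketched in code. This is a minimal illustration only: the `Requirement` fields and the two criteria below are invented for the sketch and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """Hypothetical backlog item; the fields are invented for this sketch."""
    rid: str
    market_value: int   # 1 (low) .. 5 (high)
    effort_weeks: int

# Phase 1 -- establish selection criteria as explicit, named predicates
# (mirroring inclusion/exclusion criteria in study selection).
CRITERIA = [
    ("sufficient market value", lambda r: r.market_value >= 3),
    ("fits release budget", lambda r: r.effort_weeks <= 8),
]

def select(pool):
    """Phase 3 -- apply selection: keep a requirement only if every
    criterion accepts it. (Phase 2, evaluating the criteria on a pilot
    sample before applying them broadly, is omitted in this sketch.)"""
    return [r for r in pool if all(check(r) for _, check in CRITERIA)]
```

Applied to a pool of three hypothetical requirements where only one passes both predicates, `select` keeps exactly that one, making the selection decision explicit and repeatable.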

  • 460.
    Petersen, Kai
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Roos, Peter
    Nyström, Staffan
    Runeson, Per
    Early identification of bottlenecks in very large scale system of systems software development. 2014. In: Journal of Software Maintenance and Evolution: Research and Practice, ISSN 1532-060X, E-ISSN 1532-0618, Vol. 26, no. 12, pp. 1150-1171. Article in journal (Refereed)
    Abstract [en]

    Systems of systems are of high complexity, and for each system, many different requirements are implemented in parallel. Systems are developed with some degree of managerial independence but later on have to work together. In this situation, many requirements are written, implemented, and tested in parallel for different systems that are to be integrated. This makes identifying bottlenecks challenging, and visualizations often used on the project level (such as Kanban boards or burndown charts) have to be extended or complemented to cope with the increased complexity. In response to these challenges, the contributions of this study are: (i) a visualization for early identification and proactive removal of bottlenecks; (ii) a visualization to check on the success of bottleneck resolution; and (iii) an industry evaluation of the visualizations in a case study of a system of systems developed at Ericsson AB in Sweden. The feedback from the practitioners showed that the visualizations were perceived as useful in improving throughput and lead time. The quantitative analysis showed that the visualizations were able to identify bottlenecks and to show improvements or the lack thereof. On the basis of the qualitative and quantitative data collected, we conclude that the visualizations are useful in bottleneck identification and resolution.

  • 461.
    Petersen, Kai
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Vakkalanka, Sairam
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Kuzniarz, Ludwik
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Guidelines for conducting systematic mapping studies in software engineering: An update. 2015. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 64, pp. 1-18. Article in journal (Refereed)
    Abstract [en]

    Context: Systematic mapping studies are used to structure a research area, while systematic reviews are focused on gathering and synthesizing evidence. The most recent guidelines for systematic mapping are from 2008. Since that time, many suggestions have been made on how to improve systematic literature reviews (SLRs). There is a need to evaluate how researchers conduct the process of systematic mapping and to identify how the guidelines should be updated based on the lessons learned from existing systematic maps and SLR guidelines. Objective: To identify how the systematic mapping process is conducted (including search, study selection, analysis and presentation of data, etc.); to identify improvement potential in conducting the systematic mapping process and to update the guidelines accordingly. Method: We conducted a systematic mapping study of systematic maps, considering some practices of systematic review guidelines as well (in particular in relation to defining the search and conducting a quality assessment). Results: In a large number of studies, multiple guidelines are used and combined, which leads to different ways of conducting mapping studies. The reason for combining guidelines was that they differed in the recommendations given. Conclusion: The most frequently followed guidelines are not sufficient alone. Hence, there was a need to provide an update on how to conduct systematic mapping studies. New guidelines have been proposed, consolidating existing findings. © 2015 Elsevier B.V.

  • 462.
    Pettersson, Richard
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Efficiency of Different Encoding Schemes in Swarm Intelligence for Solving Discrete Assignment Problems: A Comparative Study. 2019. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background

    Solving problems classified as either NP-complete or NP-hard has long been an active topic in the research community, and has brought about many new algorithms for approximating an optimal solution (basically the best possible solution). A fundamental aspect to consider when developing such an algorithm is how to represent the given solution. Finding the right encoding scheme is key for any algorithm to function as efficiently as possible. That being said, there appears to be a lack of research studies that offer a comprehensive comparison of these encoding schemes.

    Objectives

    This study sets out to provide an extensive comparative analysis of five already existing encoding schemes for a population-based meta-heuristic algorithm, with focus on two discrete combinatorial problems: the 1/0 knapsack problem and task scheduling problem. The most popular scheme of these will also be defined and determined by reviewing the literature.

    Methods

    The encoding schemes were implemented and incorporated into a recently proposed algorithm, known as the Coyote Optimization Algorithm. Their differences in performance were then compared through several experiments. On top of that, the popularity of said schemes was measured by their number of occurrences among a set of surveyed research studies (on the topic of the knapsack problem).

    Conclusions

    When compared to the real-valued encoding scheme, we found that both qubits (the smallest unit in quantum computing) and complex numbers were more efficient for solving the 1/0 knapsack problem, due to their broader search space. Our chosen variant of the quantum-inspired encoding scheme contributed to a slightly better result than its complex-valued counterpart. The binary and boolean encoding schemes worked great in conjunction with a repair function for the knapsack problem, to the extent that their produced solutions converged at a faster rate than the rest. Interestingly enough, the real-valued encoding scheme was by far the more popular choice of them all (as far as the knapsack problem is concerned), which we attribute to its generally simple and convenient implementation, and the fact that it has been around for longer. Finally, we saw that the matrix-based encoding scheme contributed to a faster convergence rate for approximate solutions to the task scheduling problem when the hardware for each resource differed greatly in computing capacity. On the other hand, the SPV (small position value) decoder for both the real-valued and complex-valued encoding schemes was more advantageous when the resources had near-identical computing power, as it is more suitable for distributing tasks equally.
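The binary-encoding-plus-repair combination highlighted in the conclusions can be illustrated with a small sketch for the 1/0 knapsack problem. This is a generic illustration, not the thesis's implementation: the greedy repair operator and the random-sampling loop (a stand-in for a population-based meta-heuristic such as the Coyote Optimization Algorithm) are assumptions of this sketch.

```python
import random

def repair(bits, weights, capacity):
    """Greedy repair: drop the heaviest selected items until the
    solution fits the capacity (a generic operator, assumed here)."""
    bits = list(bits)
    while sum(w for b, w in zip(bits, weights) if b) > capacity:
        heaviest = max((i for i, b in enumerate(bits) if b),
                       key=lambda i: weights[i])
        bits[heaviest] = 0
    return bits

def value(bits, values):
    """Total value of the items selected by the bit string."""
    return sum(v for b, v in zip(bits, values) if b)

def random_search(values, weights, capacity, iters=2000, seed=0):
    """Stand-in for a population-based meta-heuristic: sample random
    binary encodings, repair each to feasibility, keep the best."""
    rng = random.Random(seed)
    n = len(values)
    best, best_val = [0] * n, 0
    for _ in range(iters):
        bits = repair([rng.randint(0, 1) for _ in range(n)],
                      weights, capacity)
        val = value(bits, values)
        if val > best_val:
            best, best_val = bits, val
    return best, best_val
```

On the classic three-item instance (values 60/100/120, weights 10/20/30, capacity 50) the search recovers the optimal selection of the last two items. The repair step is what lets a plain binary encoding explore infeasible regions without ever returning an overweight solution.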

  • 463.
    Polavarapu, Sharen
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Jami, Amulya Sagarwal
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A Framework to Integrate Software Process Improvements in Agile Software Development. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: There has been a substantial growth in organizations adopting Agile software development methodologies for various reasons. The requirement of improving software processes with respect to traditional software development was clear and evident, for different reasons. But the need for Software Process Improvements (SPI) in an Agile context is unclear, and the challenges faced during the implementation of SPI in Agile software development are quite ambiguous. These two issues lie as a motivation for the objectives of our study. Agile being a flexible way of software development, having a non-flexible framework is almost incompatible with implementing SPI in Agile software development. This acts as an inducement for building up our final objective.

    Objectives: The objectives of this research are to identify the need for Agile-SPI in the software industry, the challenges faced in implementing Agile-SPI at the organizational level and at the team level, and finally to propose an approach for implementing Agile-SPI based on improving practices.

    Methods: In order to achieve the objectives of our research, we initially carried out a survey, and later cross-verified and validated the data obtained in the survey through interviews. A literature review was performed to gain knowledge of the background and related work.

    Results: A total of 34 responses were obtained through the survey. These responses were further cross-verified and validated through 9 interviews. The data obtained through the survey was analyzed through descriptive statistics, and the data obtained through interviews was analyzed using thematic coding.

    Conclusions: The need for Agile-SPI and the challenges faced by organizations and teams while implementing SPI in Agile software development were identified. A total of 16 needs of Agile-SPI, 30 challenges faced by the organization and 37 challenges faced by the team were drawn from the survey and interviews conducted. Finally, a conceptual framework has been proposed to implement SPI in an Agile environment based on improving practices.

  • 464.
    Polepalle, Chahna
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Kondoju, Ravi Shankar
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Evidence and perceptions on GUI test automation: An explorative Multi-Case study. 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. GUI-based automation testing is a costly and tedious activity in practice. As GUIs are well known for being modified and redesigned throughout the development process, the corresponding test scripts become invalid, thereby hindering automation. Hence, substantial effort is invested in maintaining GUI test scripts, which often leads to rework or waste due to improper decisions. As a result, practitioners have identified the need for decision support regarding when GUI automation testing should begin, how to make it easier, and which factors lead to waste in GUI-based automation testing. The current literature provides solutions relating to automation in general and few answers for GUI-based automation testing. Such generic answers might not be applicable to GUI test automation, or to industries new to GUI development and testing. Thus, it is necessary to validate whether the general solutions are applicable to GUI test automation and to find additional answers, not identified previously, from practitioners' opinions in an industrial context.

    Objectives. Capture relevant information regarding the current approach for GUI test automation within the subsystems from a case company. Next, identify the criteria for when to begin automation, testability requirements and factors associated with waste from literature and practice.

    Methods. We conducted a multiple-case study to explore the opinions of practitioners in two subsystems at a Swedish telecommunications company implementing GUI automation testing. We conducted a literature review to identify answers from the scientific literature prior to performing the case study. A two-phased interview was performed with different employees to collect their subjective opinions and also to gather their views on the evidence collected from the literature. Later, a Bayesian synthesis method was used to combine the subjective opinions of practitioners with research-based evidence to produce context-specific results.

    Results. We identified 12 criteria for when to begin automation, 16 testability requirements and 15 factors associated with waste in GUI test automation. Each of them is classified into one of the following categories: SUT-related, test-process related, test-tool related, human and organizational, environment, and cross-cutting. New answers, not present in the existing literature in the domain of the research, were found.

    Conclusions. On validating the answers found in the literature, it was revealed that the answers applicable to software test automation in general are valid for GUI automation testing as well. Since we incorporated subjective opinions to produce context-specific results, we gained an understanding that every practitioner has their own way of working. Hence, this study aids in developing a common understanding to support informed subjective decisions based on evidence.

  • 465.
    Poulding, Simon
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Alexander, R.
    Clark, J. A.
    Hadley, M. J.
    The optimisation of stochastic grammars to enable cost-effective probabilistic structural testing2015Inngår i: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 103, s. 296-310Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    The effectiveness of statistical testing, a probabilistic structural testing strategy, depends on the characteristics of the probability distribution from which test inputs are sampled. Metaheuristic search has been shown to be a practical method of optimising the characteristics of such distributions. However, the applicability of the existing search-based algorithm is limited by the requirement that the software's inputs must be a fixed number of ordinal values. In this paper we propose a new algorithm that relaxes this limitation and so permits the derivation of probability distributions for a much wider range of software. The representation used by the new algorithm is based on a stochastic grammar supplemented with two novel features: conditional production weights and the dynamic partitioning of ordinal ranges. We demonstrate empirically that a search algorithm using this representation can optimise probability distributions over complex input domains and thereby enable cost-effective statistical testing, and that the use of both conditional production weights and dynamic partitioning can be beneficial to the search process. © 2014 Elsevier Inc. All rights reserved.

  • 466.
    Poulding, Simon
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Feldt, Robert
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Automated Random Testing in Multiple Dispatch Languages2017Inngår i: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017, IEEE Computer Society, 2017, s. 333-344Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In programming languages that use multiple dispatch, a single function can have multiple implementations, each of which may specialise the function's operation. Which one of these implementations to execute is determined by the data types of all the arguments to the function. Effective testing of functions that use multiple dispatch therefore requires diverse test inputs in terms of the data types of the input's arguments as well as their values. In this paper we describe an approach for generating test inputs where both the values and types are chosen probabilistically. The approach uses reflection to automatically determine how to create inputs with the desired types, and dynamically updates the probability distribution from which types are sampled in order to improve both the test efficiency and efficacy. We evaluate the technique on 247 methods across 9 built-in functions of Julia, a technical computing language that applies multiple dispatch at runtime. In the process, we identify three real faults in these widely-used functions. © 2017 IEEE.

  • 467.
    Poulding, Simon
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Feldt, Robert
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Generating Controllably Invalid and Atypical Inputs for Robustness Testing2017Inngår i: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2017, Institute of Electrical and Electronics Engineers Inc. , 2017, s. 81-84Konferansepaper (Fagfellevurdert)
    Abstract [en]

    One form of robustness in a software system is its ability to handle, in an appropriate manner, inputs that are unexpected compared to those it would experience in normal operation. In this paper we investigate a generic approach to generating such unexpected test inputs by extending a framework that we have previously developed for the automated creation of complex and highly-structured test data. The approach is applied to the generation of valid inputs that are atypical as well as inputs that are invalid. We demonstrate that our approach enables control of the 'degree' to which the test data is invalid or atypical, and show empirically that this can alter the extent to which the robustness of a software system is exercised during testing. © 2017 IEEE.

  • 468.
    Poulding, Simon
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Feldt, Robert
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Generating Structured Test Data with Specific Properties using Nested Monte-Carlo Search2014Inngår i: GECCO'14: PROCEEDINGS OF THE 2014 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE, Association for Computing Machinery (ACM), 2014, s. 1279-1286Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Software acting on complex data structures can be challenging to test: it is difficult to generate diverse test data that satisfies structural constraints while simultaneously exhibiting properties, such as a particular size, that the test engineer believes will be effective in detecting faults. In our previous work we introduced GödelTest, a framework for generating such data structures using non-deterministic programs, and combined it with Differential Evolution to optimize the generation process. Monte-Carlo Tree Search (MCTS) is a search technique that has shown great success in playing games that can be represented as sequence of decisions. In this paper we apply Nested Monte-Carlo Search, a single-player variant of MCTS, to the sequence of decisions made by the generating programs used by GödelTest, and show that this combination can efficiently generate random data structures which exhibit the specific properties that the test engineer requires. We compare the results to Boltzmann sampling, an analytical approach to generating random combinatorial data structures.

  • 469.
    Poulding, Simon
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Feldt, Robert
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Heuristic Model Checking Using a Monte-Carlo Tree Search Algorithm2015Inngår i: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, ACM Digital Library, 2015, s. 1359-1366Konferansepaper (Fagfellevurdert)
  • 470.
    Poulding, Simon
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Feldt, Robert
    Re-using generators of complex test data2015Inngår i: 2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST), IEEE Computer Society, 2015, s. Article number 7102605-Konferansepaper (Fagfellevurdert)
    Abstract [en]

    The efficiency of random testing can be improved by sampling test inputs using a generating program that incorporates knowledge about the types of input most likely to detect faults in the software-under-test (SUT). But when the input of the SUT is a complex data type - such as a domain-specific string, array, record, tree, or graph - creating such a generator may be time-consuming and may require the tester to have substantial prior experience of the domain. In this paper we propose the re-use of generators created for one SUT on other SUTs that take the same complex data type as input. The re-use of a generator in this way would have little overhead, and we hypothesise that the re-used generator will typically be at least as efficient as the most straightforward form of random testing: sampling test inputs from the uniform distribution. We investigate this proposal for two data types using five generators. We assess test efficiency against seven real-world SUTs, and in terms of both structural coverage and the detection of seeded faults. The results support the re-use of generators for complex data types, and suggest that if a library of generators is to be maintained for this purpose, it is possible to extend library generators to accommodate the specific testing requirements of newly-encountered SUTs. © 2015 IEEE.

  • 471.
    Poulding, Simon
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Feldt, Robert
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Garousi, Vahid
    Using Citation Behavior to Rethink Academic Impact in Software Engineering2015Inngår i: ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, 2015, s. 140-43Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Although citation counts are often considered a measure of academic impact, they are criticized for failing to evaluate impact as intended. In this paper we propose that software engineering citations may be classified according to how the citation is used by the author of the citing paper, and that through this classification of citation behaviour it is possible to achieve a more refined understanding of the cited paper’s impact. Our objective in this work is to conduct an initial evaluation using the citation behaviour taxonomy proposed by Bornmann and Daniel. We independently classified citations to ten highly-cited papers published at the International Symposium on Empirical Software Engineering and Measurement (ESEM). The degree to which classifications were consistent between researchers was analyzed in order to assess the clarity of Bornmann and Daniel’s taxonomy. We found poor to fair agreement between researchers even though the taxonomy was perceived as relatively easy to apply for the majority of citations. We were nevertheless able to identify clear differences in the profile of citation behaviors between the cited papers. We conclude that an improved taxonomy is required if classification is to be reliable, and that a degree of automation would improve reliability as well as reduce the time taken to make a classification.

  • 472.
    Praveen Shivakumar
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Vijapurapu, Krishna Kanth
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Tacit Knowledge Preservation at Vendor Organizations in Offshore Outsourcing Software Development2014Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    Context. Tacit knowledge preservation (TKP) is a critical activity in the outsourcing business, since there is a high possibility of losing business if the personnel turnover rate is high. Objective: This study investigates TKP techniques from both the knowledge management (KM) and software engineering (SE) perspectives, followed by a discussion on the practicability of these techniques in software industries. The main aim of this research study is to provide a set of recommendations that assists in preserving tacit knowledge in offshore outsourcing vendor organizations. Methods: This research combines a systematic literature review with an industrial survey. A systematic literature review (SLR) was employed to identify the TKP techniques in both the KM and SE literature. A quasi-gold standard approach was employed as the search strategy in the SLR. Further, a survey was conducted with industrial practitioners working in offshore outsourcing software development (OOSD) to validate the findings from the SLR and to identify additional TKP techniques. Results: A total of 51 TKP techniques were extracted from the SLR, and no additional techniques were identified from the survey. These 51 techniques were grouped and categorized into two subgroups, namely socialization and externalization. A recommendation system and model was proposed to make the TKP process mandatory for every software project in an organization. Conclusions: The research provided a wide set of techniques for preserving tacit knowledge, but the major contribution is from the KM field, with only a little from the SE field. The results of the SLR and industrial survey revealed that, although a sufficient number of TKP techniques are available, the practicability of these techniques in SE organizations is limited. Therefore, we recommend a Software Engineers Rating (SER) system and model to make the TKP process mandatory in every software project, which benefits both the organization and the employee.

  • 473.
    Pulipaka, Avinash Arepaka Sravanthi
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Outsourced Offshore Software Testing Challenges and Mitigations2014Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    Software development comprises different phases such as requirements, analysis, design, coding and testing. In this contemporary world of software development, developing software in globalized scenarios is prevalent and prominent. Among these globalized scenarios, this thesis focuses on the scenario of software product transfer, which deals with the testing of software at the offshore location.

  • 474.
    Raavi, Jaya Krishna
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Usage of third party components in Heterogeneous systems: An empirical study2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context: The development of complex systems of systems leads to high development cost, uncontrollable software quality and low productivity. Thus, component-based software development was used to improve the development effort and cost of the software. Heterogeneous systems are systems of systems that consist of functionally independent sub-systems, with at least one sub-system exhibiting heterogeneity with respect to the other systems. The context of this study is to investigate the usage of third-party components in heterogeneous systems.

    Objectives. In this study an attempt was made to investigate the usage of third party components in heterogeneous systems in order to accomplish the following objectives:

    • Identify different types of third party components.
    • Identify challenges faced while integrating third-party components in heterogeneous systems.
    • Investigate the difference in test design of various third party components.
    • Identify what the practitioners learn from various third party components.

     

    Methods: We have conducted a systematic literature review, following Kitchenham's systematic literature review guidelines, to identify the third party components used, the challenges faced while integrating third-party components, and the test design techniques. Qualitative interviews were conducted in order to complement and supplement the findings from the SLR and to further provide guidelines for practitioners using third party components. The studies obtained from the SLR were analyzed in relation to the quality criteria using narrative analysis. The data obtained from the interviews were analyzed using thematic analysis.

    Results: 31 primary studies were obtained from the systematic literature review (SLR). 3 types of third party components, 12 challenges and 6 test design techniques were identified from the SLR. From the analysis of the interviews, a total of 21 challenges were identified, which complemented the SLR results. In addition, the interviews investigated the test design techniques used for testing heterogeneous systems that contain third party components. The interviews also provided 10 recommendations for practitioners using different types of third party components in product development.

    Conclusions: To conclude, commercial off-the-shelf (COTS) and open source software (OSS) systems were the third party components mainly used in heterogeneous systems, rather than in-house software, according to the interview and SLR results. 21 challenges were identified from the SLR and interview results. The test design for testing heterogeneous systems with different third party components varies, due to the unavailability of source code, the dependencies of the subsystems and the competence of the component. From the analysis of the obtained results, the author has also proposed guidelines for practitioners based on the type of third party components used for product development.

  • 475.
    Rahman, Md. Shoaib
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Das, Arijit
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    MITIGATION APPROACHES FOR COMMON ISSUES AND CHALLENGES WHEN USING SCRUM IN GLOBAL SOFTWARE DEVELOPMENT2015Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    Context. Distributed software development teams frequently face several issues related to communication, co-ordination and control. Teams face these issues because of the socio-cultural, geographical and temporal distance between them. The purpose of the study is therefore to find out how distributed Scrum teams act when they face these problems. Objectives. A number of common GSD challenges or issues exist, such as difficult face-to-face meetings, increased co-ordination costs and difficulty in conveying vision & strategy. The purpose of this study was to identify additional frequently occurring Global Software Development (GSD) issues or challenges, as well as to find out the mitigation strategies practised by Scrum practitioners in distributed software environments in industry. Methods. In this study, a systematic literature review and interviews with distributed Scrum practitioners were conducted for empirical validation. One purpose of the interviews was to obtain challenges & mitigations from the distributed Scrum practitioners' point of view, as well as to verify the literature review's outcomes. Basically, we have extended Hossain, Babar et al.'s [1] literature review and followed similar procedures. Research papers were selected from the following sources: IEEEXplore, ACM Digital Library, Google Scholar, Compendex EI, Wiley InterScience, Elsevier Science Direct, AIS eLibrary and SpringerLink. In addition, interviews were conducted with persons who have at least six months of working experience in a distributed Scrum team. Moreover, the thematic analysis method was followed to analyze the interviews. Results. Three additional common GSD challenges and four new mitigation strategies were found. Among the additional issues, one is a communication issue (i.e. lack of trust/teamness or interpersonal relationship) and the rest are co-ordination issues (i.e. lack of domain knowledge/lack of visibility, and skill differences and technical issues). The additional mitigation strategies are synchronizing work, preparation meetings, training and work status monitoring. Finally, frequently faced GSD issues were mapped to mitigation strategies based on the results obtained from the SLR and interviews. Conclusions. Finally, we found three additional GSD issues (lack of trust/teamness/interpersonal relationship, lack of visibility/lack of knowledge, and differences in skills & technical issues) in addition to the existing twelve common communication, co-ordination and control issues. The mitigation techniques (such as synchronized work hours, ICT-mediated synchronous communication and visits) for the common GSD issues were found and validated by Scrum practitioners. Among the existing issues, several use new mitigation strategies obtained from practitioners. Moreover, for two existing control issues (management of project artifacts may be subject to delays; managers must adapt to local regulations) mitigation techniques were addressed by the interviewees. This study was carried out to obtain the common GSD issues & mitigations from the literature and from distributed Scrum practitioners.

  • 476.
    Rapp, Carl
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gamification as a tool to encourage eco-driving2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context: In this work a system, the eco service, is developed that incorporates elements from gamification to help drivers adapt to a more energy-efficient driving style. An energy-efficient driving style can help reduce fuel consumption, increase traffic safety and reduce vehicle emissions.

    Objectives: The main goal of this work is to explore ways of how gamification can be used in the context of eco-driving. Evaluating different elements and how they work in this context is important to help drivers to continue improving their driving style.

    Method: The eco service was tested on 16 participants, where each participant was asked to drive a predetermined route. During the experiment the participants were given access to the eco service in order to receive feedback on their driving. Lastly, interviews were held with each participant on questions regarding the use of gamification and how it can be improved in the context of eco-driving. The research was done in collaboration with a Swedish company, Swedspot AB, that works with software solutions for connected vehicles.

    Results & Conclusions: Positive results were found on the use of gamification. Participants reported that the eco service made them more aware of their driving situation and how to improve. Game elements with a positive influence were reward- and competition-based, and helped motivate the driver to improve.

  • 477.
    Razzak, Mohammad Abdur
    et al.
    Daffodil Int Univ, BGD.
    Šmite, Darja
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Knowledge Management in Globally Distributed Agile Projects-Lesson Learned2015Inngår i: 2015 IEEE 10TH INTERNATIONAL CONFERENCE ON GLOBAL SOFTWARE ENGINEERING (ICGSE 2015), IEEE , 2015, s. 81-89Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Knowledge management (KM) is essential for success in any software project, but especially in global software development, where team members are separated by time and space. Software organizations manage knowledge in various ways to increase transparency and improve software team performance. One way to classify these strategies is proposed by Earl, who defined seven knowledge management schools. The objective of this research is to study knowledge creation and sharing practices in a number of distributed agile projects, map these practices to the knowledge management strategies, and determine which strategies are most common, which are applied only locally and which are applied globally. This was done by conducting a series of semi-structured qualitative interviews over a time span from May 2012 to June 2013. Our results suggest that knowledge sharing across remote locations in distributed agile projects relies heavily on knowledge codification, i.e. technocratic KM strategies, even when the same knowledge is shared tacitly within the same location, i.e. through behavioral KM strategies.

  • 478. Razzak, Mohammad Abdur
    et al.
    Šmite, Darja
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Ahmed, Rajib
    Spatial knowledge creation and sharing activities in a distributed agile project2013Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Knowledge management (KM) is key to the success of any software organization. KM in software development has been the center of attention for researchers due to its potential to improve productivity. However, knowledge is not only stored in repositories but is also shared in the office space. Agile software development teams use the benefits of shared space to foster knowledge creation, but it is difficult to create and share this type of knowledge when team members are distributed. This participatory single-case study indicates that distributed team members rely heavily on knowledge codification and the application of tools for knowledge sharing. We found that the studied project did not use any specific software or hardware that would enable spatial knowledge creation and sharing. Therefore, the knowledge items that were not codified were destined to be unavailable to remote team members.

  • 479.
    Reddy, Sri Sai Vijay Raj
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Nekkanti, Harini
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Surveys in Software Engineering: A Systematic Literature Review and Interview Study2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context: The need for empirical investigations in the software engineering domain is growing immensely. Many researchers nowadays conduct and validate their studies using empirical evidence. The survey is one such empirical investigation method, which enables researchers to collect data from a large population. The main aim of a survey is to generalize the findings. Many problems are faced by researchers in the survey process. Survey outcomes also depend upon variables such as sample size, response rate and analysis techniques. Hence there is a need for literature addressing all the possible problems faced, as well as the impact of survey variables on outcomes.

    Objectives: Firstly, to identify the common problems faced by researchers from the existing literature and to analyze the impact of the survey variables. Secondly, to collect the experiences of software engineering researchers regarding the problems faced and the survey variables. Finally, to come up with a checklist of all the problems and mitigation strategies, along with information about the impact of the survey variables.

    Methods: Initially, a systematic literature review was conducted to identify the existing problems in the literature and to determine the effect of response rate, sample size and analysis techniques on survey outcomes. The systematic literature review results were then validated by conducting semi-structured, face-to-face interviews with software engineering researchers.

    Results: We were successful in providing a checklist of problems along with their mitigation strategies. The dependency of the survey variables on the type of research and the researchers' choices limited us from further analyzing their impact on survey outcomes. The face-to-face interviews with software engineering researchers provided validation of our research results.

    Conclusions: This research gave us deeper insights into survey methodology. It helped us to explore the differences that exist between the state of art and the state of practice regarding problem mitigation in the survey process.

  • 480.
    Rehman, Zia ur
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Overcoming Challenges of Requirements Elicitation in Offshore Software Development Projects2014Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    Context. Global Software Development (GSD) is the plan of action in which software development is performed across temporal, political, organizational and cultural boundaries. Offshore outsourced software development is the part of GSD which refers to the transfer of certain software development activities to an external organization in another country. The primary factors driving offshore outsourced software development are low cost, access to a large pool of skilled laborers, increased productivity, high quality, market access and short development cycles. Requirements engineering (RE), and especially requirements elicitation, is highly affected by the geographical distribution and multitude of stakeholders. Objectives. The goal of conducting this study is to explore the challenges and solutions associated with the requirements elicitation phase during offshore software projects, both in the research literature and in industrial practice. Moreover, this study examines which of the challenges and practices reported in the literature can be seen in industrial practice. This helped in finding the similarities and differences between the state of art and the state of practice. Methods. The data collection process has been done through a systematic literature review (SLR) and a web survey. The SLR has been conducted using the guidelines of Kitchenham and Charters. During the SLR, the studies have been identified from the most reliable and authentic databases, such as Compendex, Inspec (Engineering Village) and Scopus. In the second phase, a survey has been conducted with 391 practitioners from various organizations involved in GSD projects. In the third phase, qualitative comparative analysis has been applied as the analysis method. Results. In total, 10 challenges and 45 solutions have been identified from the SLR and survey. Through the SLR, 8 challenges and 22 solutions have been identified, while through the industrial survey, 2 additional challenges and 23 additional solutions have been identified.
    By analyzing the frequency of the challenges, the most compelling challenges are communication, control and socio-cultural issues. Conclusions. The comparison between theory and practice revealed the most compelling challenges and their associated solutions. It is concluded that socio-cultural awareness and proper communication between client and supplier organizations' personnel are paramount for successful requirements elicitation. The scarcity of research literature in this area suggests that more work needs to be done to explore strategies to mitigate the impact of the 2 additional challenges revealed through the survey.

  • 481.
    Ren, Mingyu
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Dong, Zhipeng
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    What do we know about Testing practices in Software Startups?2017Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context. With the rapid development of the software industry, innovative software products have become the mainstream of the software market. Because software startups can use few resources to quickly produce and publish innovative software products, more and more of them are being launched. Software testing is important for ensuring product quality in software companies. Testing is costly in software development, but avoiding it can be costlier still: established software companies often spend 40-50% of their development effort on testing [1] [2]. Compared with established companies, software startups have more limited time and money, which need to be allocated carefully; unreasonable allocation can lead to a startup's failure. We do not know how much software startups spend on testing, and few research studies have investigated testing practices in software startups. Therefore, we decided to conduct an exploratory study of testing practices in software startups.

    Objectives. The aim of the research is to investigate testing practices in software startups. In this study, we investigate how software startups structure and manage their test teams, research the test processes and test techniques they use, and investigate the main testing challenges they face.

    Methods. We mainly conducted qualitative research for this study, selecting a literature review and a survey as the research methods. The literature review was used to gain an in-depth understanding of software testing practices in software companies, and the survey to answer our research questions. We used interviews as our data collection method, and descriptive statistics to analyze the interview data.

    Results. A total of 13 responses were obtained through interviews at 9 software startups. From these we obtained results on how the 9 startups structure and manage their test teams, analyzed the common steps of their test processes, classified the techniques they used, and finally analyzed and listed the main testing challenges that occurred in the 9 startups. Conclusions. The research objectives are fulfilled and the research questions have been answered. Our conclusions are based on 9 software startups; these companies cannot represent all software startups, but the 13 interviews provide an initial picture of testing practices in software startups. We also found some differences in testing practice between the 9 startups and established software companies. Our study is primary research exploring testing practices in 9 software startups, and we provide data and analysis results for researchers who want to investigate related areas. In addition, our research could help those who plan to set up a software company: they can use the data we collected to reflect on the testing practices in their own company and find the best ways to prevent and resolve testing problems.

  • 482.
    Rodriguez, Pilar
    et al.
    Oulun Yliopisto, FIN.
    Mendes, Emilia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Turhan, Buran
    Oulun Yliopisto, FIN.
    Key Stakeholders' Value Propositions for Feature Selection in Software-intensive Products: An Industrial Case Study2018Inngår i: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Numerous software companies are adopting value-based decision making. However, what does value mean for the key stakeholders making decisions? How do different stakeholder groups understand value? Without an explicit understanding of what value means, decisions are subject to ambiguity and vagueness, which are likely to bias them. This case study provides an in-depth analysis of key stakeholders' value propositions when selecting features for a large telecommunications company's software-intensive product. Stakeholders' value propositions were elicited via interviews, which were analyzed using Grounded Theory coding techniques (open and selective coding). Thirty-six value propositions were identified and classified into six dimensions: customer value, market competitiveness, economic value/profitability, cost efficiency, technology & architecture, and company strategy. Our results show that although propositions in the customer value dimension were mentioned the most, the concept of value for feature selection encompasses a wide range of value propositions. Moreover, stakeholder groups focused on different and complementary value dimensions, pointing to the importance of involving all key stakeholders in the decision-making process. Although our results are particularly relevant to companies similar to the one described herein, they aim to generate a learning process on value-based feature selection for practitioners and researchers in general. © IEEE.

  • 483.
    Ruan, Shaopeng
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Qi, Pengyang
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    The analysis of the different characteristics of commits between developers with different experience level: An archival study2019Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Background: With the rapid development of the software industry, innovative software products have become the mainstream of the software market. Technical ability is generally taken to underpin a developer's performance, and code quality has been proposed as being related to developer performance; thus code quality can serve as a measure of developer performance. Developer performance is often influenced by multiple factors, and different factors have different impacts in different types of projects. It is important to understand the positive and negative factors for developer performance in a given project: if these factors are taken into account, developers will perform better and projects will have higher quality.

     

    Objectives: The objective of our study is to identify how the factors developer experience, task size, and team size impact developer performance in each case. Through understanding how these factors impact developer performance, developers can perform better, which is a significant benefit to the quality of the project.

     

    Methods: We decided to use the characteristics of commits during Gerrit code review to measure committed code quality, and from committed code quality we measure developer performance. We selected two projects that use Gerrit code review as the cases for our archival study: one a legacy project, the other an open-source project. We then selected five common characteristics (the rate of abandoned application code, the rate of abandoned test code, abandoned lines of application code, abandoned lines of test code, and build success rate) to measure code quality. Box plots are drawn to visualize the relationship between the experience factor and each characteristic of the commits, and Spearman rank correlation is used to test the correlation between each factor and each characteristic of commits from a statistical perspective. Finally, we used multiple linear regression to test how a specific characteristic of commits is impacted by multiple factors.
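The rank correlation test named above can be illustrated with a small sketch. This is not the study's analysis code: the sample data and variable names are hypothetical, and Spearman's coefficient is computed from scratch here only to show the idea (rank both variables, then correlate the ranks).

```python
# Illustrative sketch of Spearman rank correlation, as used in the study
# to relate a factor (e.g. developer experience) to a commit characteristic
# (e.g. abandoned lines of code). All data below are hypothetical.

def ranks(values):
    """Assign 1-based average ranks, handling ties."""
    sorted_idx = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(sorted_idx):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(sorted_idx) and values[sorted_idx[j + 1]] == values[sorted_idx[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            r[sorted_idx[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

experience = [3, 12, 24, 36, 60]      # months of experience (hypothetical)
abandoned = [400, 250, 180, 120, 60]  # abandoned lines of code (hypothetical)
print(round(spearman(experience, abandoned), 2))  # perfectly monotone decreasing -> -1.0
```

In practice one would use a statistics library (e.g. `scipy.stats.spearmanr`), which also reports a p-value; the point here is only the monotone-association idea behind the test.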

     

    Results: The results show that in the legacy project, developers with high experience tend to abandon a smaller proportion of their code and fewer lines of code, and in the open-source project they tend to abandon a smaller proportion of their code. The factors task size and amount of code produced show a similar pattern in both cases: larger tasks or more code produced lead to a greater amount of abandoned code. A large team size also leads to a greater amount of abandoned code in the legacy project.

     

    Conclusions: Having examined how the factors (experience, task size, and team size) influence developer performance, we list two contributions of our research:

    1. Large task size and large team size have a negative impact on developer performance. 

    2. Experienced developers usually perform better than newly onboarded developers.

    Based on these two contributions, we give suggestions to these two kinds of projects on how to improve developer performance and how to assign tasks reasonably.

  • 484.
    Sablis, Aivars
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Šmite, Darja
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Moe, Nils Brede
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Exploring cross-site networking in large-scale distributed projects2018Inngår i: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Verlag , 2018, Vol. 11271, s. 318-333Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Context: Networking in a distributed large-scale project is complex for many reasons: time-zone differences can make it challenging to reach remote contacts, teams rarely meet face-to-face, which means that remote project members are often unfamiliar with each other, and running activities for growing the network across sites is also challenging. At the same time, networking is one of the primary ways to share and receive the knowledge and information needed for developing software tasks and coordinating project activities. Objective: The purpose of this paper is to explore the actual networks of teams working in large-scale distributed software development projects, and the project characteristics that might affect their need for networking. Method: We conducted a multi-case study of three project cases in two companies, with software development teams as the embedded units of analysis. We held 20 individual interviews to characterize the development projects and surveyed 96 members of the 14 teams to map the teams' actual networks. Results: Our results show that teams in large-scale projects network in order to acquire knowledge from experts and to coordinate tasks with other teams. We also learned that, regardless of project characteristics, networking between sites in distributed projects is relatively low. Conclusions: Our study emphasizes the importance of networking. We therefore suggest that similar companies pay extra attention to cultivating a networking culture at large, to strengthen their cross-site communication. © Springer Nature Switzerland AG 2018.

  • 485.
    Sadowska, Małgorzata
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Quality of business models expressed in BPMN2013Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    Context. The quality of business process models is important in the area of model-based software development. The overall goal of this study was to develop and evaluate a model for assessing the quality of models (Process Diagrams) in Business Process Model and Notation (BPMN). The model is an instantiation of a developed metamodel that adopts ISO/IEC 9126. Objectives. The objectives of the thesis were to propose, implement and evaluate a model for quality assessment of business process models in BPMN. The model is intended to help practitioners check the quality of their BPMN models and to provide meaningful feedback on whether the business process models are of good or bad quality. The first objective was to develop a metamodel of models for quality assessment of business process models in BPMN, and then the model that is an instantiation of that metamodel. Within the model, the objectives were to propose the relevant quality characteristics, quality metrics, quality criteria and quality functions. Finally, the usefulness of the model was to be evaluated. Methods. The methodology was driven by the essential elements of the model: quality characteristics, quality metrics, quality criteria and quality functions. First, the metamodel was developed based on the ISO/IEC 9126 standard. Then, to identify quality characteristics of models in the existing literature, a systematic literature review was conducted. Quality characteristics explicitly relevant to BPMN were compared against each other and selected, and overlapping ones were merged. Next, to obtain quality metrics that measure aspects of business process models, a literature review was carried out, restricted by a proposed set of selection criteria: questions that any literature describing quality metrics had to answer affirmatively, so that only metrics which could be assigned to the identified quality characteristics were retained. Where a chosen quality metric needed to be changed or adjusted for the sake of better results, the author made the adjustment and provided a rationale for it. Next, to obtain quality criteria, values of the quality metrics were gathered by measuring a repository of BPMN models. The repository was assembled as preparatory work for the thesis and consists of models of varying quality. Manual measurement of the quality metrics for every BPMN model in the repository could not be done in a reasonable amount of time; therefore, a tool that automatically calculates the metrics for BPMN models was implemented. The quality criteria were proposed based on a statistical analysis of the measured values. Then, quality functions that aggregate the values of the metrics were proposed, and the complete model was integrated into the tool so that it could assess the quality of real BPMN models. Finally, the model was evaluated for usefulness through a survey and a survey-based experiment. Results. A metamodel of models for quality assessment of business process models in BPMN was proposed. A model for quality assessment of BPMN models was proposed and evaluated for usefulness. Initial sets of quality characteristics were found in the literature and those relevant to BPMN were extracted. Quality metrics measuring aspects of models were found and adjusted to the BPMN notation. Quality criteria stating how values of quality metrics can be classified as good or bad were provided, as were quality functions stating whether the quality characteristics of a chosen BPMN model are good or bad. Finally, a tool implementing the model for quality assessment of models in BPMN was created. Conclusions. The results of the survey and survey-based experiment showed that the proposed model works in most cases and is needed in general. The elements of the model that should be corrected were also identified. The BPMN users contacted expressed a willingness to use the suggested tool associated with the model for quality assessment of business process models in BPMN.
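The chain of elements the thesis describes (quality metric, quality criterion, quality function) can be sketched roughly as follows. The metric, threshold value and characteristic name below are invented examples for illustration, not the thesis's actual definitions.

```python
# Hypothetical sketch of the model's structure: a quality metric, a quality
# criterion (threshold) on its value, and a quality function aggregating
# criteria into a good/bad verdict per quality characteristic.
# Metric, threshold and characteristic are invented for illustration.

def metric_size(model):
    """Quality metric: total number of flow elements in a BPMN model."""
    return len(model["tasks"]) + len(model["gateways"]) + len(model["events"])

def criterion_size(value, threshold=31):
    """Quality criterion: classify the metric value as good (True) or bad (False)."""
    return value <= threshold

def quality_function(model):
    """Quality function: aggregate criteria into a verdict per characteristic."""
    return {"understandability": criterion_size(metric_size(model))}

example = {
    "tasks": ["receive order", "check stock"],
    "gateways": ["in stock?"],
    "events": ["start", "end"],
}
print(quality_function(example))  # prints: {'understandability': True}
```

In the thesis, the thresholds were derived from statistical analysis of a model repository rather than fixed by hand as done here.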

  • 486.
    Said Tahirshah, Farid
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Comparison between Progressive Web App and Regular Web App2019Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    In 2015 the term Progressive Web Application (PWA) was coined to describe applications that take advantage of Progressive App features, some of the essential ones being offline support, an app-like interface and a secure connection. Since then, case studies of PWA implementations have shown optimistic promise for improving web page performance, time spent on site, user engagement, etc. The goal of this report is to analyze some of the effects of PWA. This work investigates the browser compatibility of PWA features and compares and analyzes the performance and memory-consumption effects of PWA features against a regular web app. The results showed that many PWA features are still not supported by some major browsers. The performance benchmark showed that the HTTPS connection required for PWA slows down all of the PWA's performance metrics on the first visit; on a repeat visit, some PWA features, such as speed index, outperform the regular web app. Memory consumption of the PWA increased to more than twice that of the regular web app. The conclusion is that even if some features are not directly supported by browsers, workaround solutions may exist. A PWA is slower than a regular web app if HTTPS on the web server is not optimized, and different browsers have different memory limitations for PWA caches. You should implement HTTPS and PWA features only if you have HTTP/2 support on your web server; otherwise, performance can decrease.

  • 487.
    Salleh, Norsaremah
    et al.
    International Islamic University Malaysia, MYS.
    Mendes, Fabiana
    Oulun Yliopisto, FIN.
    Mendes, Emilia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A Systematic Mapping Study of Value-Based Software Engineering2019Inngår i: Proceedings - 45th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2019, Institute of Electrical and Electronics Engineers Inc. , 2019, s. 404-411Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Integrating value-oriented perspectives into the principles and practices of software engineering is critical to ensure that software development and management activities address all key stakeholders' views and also balance short- and long-term goals. This is put forward in the discipline of Value-Based Software Engineering (VBSE). In this paper, a mapping study of VBSE is detailed. We classify evidence on VBSE principles and practices, research methods, and research types. The mapping study includes 134 studies located through online searches and backward snowballing of references. Our results show that VB Requirements Engineering (22%) and VB Planning and Control (19%) were the two principles and practices most investigated in the VBSE literature, whereas VB Risk Management, VB People Management and Value Creation (3% each) were the three least researched. In terms of research method, the most commonly employed was case-study research. In terms of research types, most of the studies (28%) proposed solution technique(s) without empirical validation. © 2019 IEEE.

  • 488.
    Sandberg, Emil
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Creative Coding on the Web in p5.js: A Library Where JavaScript Meets Processing2019Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    Creative coding is the practice of writing code primarily for an expressive rather than a functional purpose, and is mostly used in creative arts contexts. One of the most popular tools in creative coding is Processing, a desktop application; in recent years a web-based alternative named p5.js has been developed.

    This thesis investigates the p5.js JavaScript library. It looks at what can be accomplished with it and in which cases it might be used. The main focus is on the pros and cons of using p5.js for web graphics. Another point of focus is on how the web can be used as a creative platform with tools like p5.js. The goals are to provide an overview of p5.js and an evaluation of the p5.js library as a tool for creating interactive graphics and animations on the web.

    The research focuses on comparing p5.js with plain JavaScript from usability and performance perspectives and making general comparisons with other web-based frameworks for creative coding. The methods are a survey and interviews with members of creative coding communities, as well as performing coding experiments in p5.js and plain JavaScript and comparing the results and the process.

    The results from the coding experiments show that compared to plain JavaScript p5.js is easier to get started with, it is more intuitive, and code created in p5.js is easier to read. On the other hand, p5.js performs worse, especially when continuously drawing large amounts of elements to the screen. This is further supported by the survey and the interviews, which show that p5.js is liked for its usability, but that its performance issues and lack of advanced features mean that it is usually not considered for professional projects. The primary use case for p5.js is creating quick, visual prototypes. At the same time, the interviews show that p5.js has been used in a variety of contexts, both creative and practical.

    p5.js is a good library for getting started with coding creatively in the browser and is an excellent choice for experimenting and creating prototypes quickly. Should project requirements be much more advanced than that, there might be other options that will work better.

  • 489.
    Santos, Rodrigo
    et al.
    Fed Univ State Rio de Janeiro, BRA.
    Teixeira, Eldanae
    Univ Fed Rio de Janeiro, BRA.
    Mendes, Emilia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    McGregor, John
    Clemson Univ, USA.
    2nd Workshop on Social, Human, and Economic Aspects of Software (WASHES) Special Edition for Software Reuse2017Inngår i: MASTERING SCALE AND COMPLEXITY IN SOFTWARE REUSE (ICSR 2017) / [ed] Botterweck, G Werner, C, SPRINGER INTERNATIONAL PUBLISHING AG , 2017, s. 223-224Konferansepaper (Fagfellevurdert)
    Abstract [en]

    The Special Edition for Software Reuse of the Workshop on Social, Human, and Economic Aspects of Software (WASHES) aims at bringing together researchers and practitioners who are interested in social, human, and economic aspects of software. WASHES is a forum to discuss models, methods, techniques, and tools to achieve software quality, improve reuse and deal with the existing issues in this context. This special edition's main topic is "Challenges of Reuse and the Social, Human, and Economic Aspects of Software". We believe it is important to investigate software reuse beyond the technical perspective and understand how the non-technical barriers of reuse affect practices, processes and tools in practice.

  • 490.
    Santoso, Ario
    et al.
    University of Innsbruck, AUT.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Specification-driven predictive business process monitoring2019Inngår i: Software and Systems Modeling, ISSN 1619-1366, E-ISSN 1619-1374Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Predictive analysis in business process monitoring aims at forecasting future information about a running business process. The prediction is typically made based on a model extracted from historical process execution logs (event logs). In practice, different business domains might require different kinds of predictions; hence, it is important to have a means for properly specifying the desired prediction tasks, and a mechanism to deal with these various tasks. Although there have been many studies in this area, they mostly focus on a specific prediction task. This work introduces a language for specifying the desired prediction tasks that allows us to express various kinds of prediction tasks, and presents a mechanism for automatically creating the corresponding prediction model based on the given specification. Unlike previous studies, which focus on a particular prediction task, our approach deals with various prediction tasks based on the given specification. We also provide an implementation of the approach, which is used to conduct experiments on real-life event logs. © 2019, The Author(s).

  • 491.
    Sathi, Veer Reddy
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Ramanujapura, Jai Simha
    A Quality Criteria Based Evaluation of Topic Models2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context. Software testing is the process in which a particular software product or system is executed in order to find bugs or issues that may otherwise degrade its performance. Software testing is usually done based on pre-defined test cases. A test case can be defined as a set of terms or conditions used by software testers to determine whether a particular system under test operates as it is supposed to. However, in numerous situations there are so many test cases that executing each and every one is practically impossible, as there may be many constraints. This forces testers to prioritize the functions to be tested, which is where the ability of topic models can be exploited. Topic models are unsupervised machine learning algorithms that can explore large corpora of data and classify them by identifying the hidden thematic structure in those corpora. Using topic models for test case prioritization can save a lot of time and resources.

    Objectives. In our study, we provide an overview of the research that has been done on topic models. We want to uncover the various quality criteria, evaluation methods, and metrics that can be used to evaluate topic models. Furthermore, we compare the performance of two topic models optimized for different quality criteria on a particular interpretability task, and thereby determine which topic model produces the best results for that task.

    Methods. A systematic mapping study was performed to gain an overview of previous research on the evaluation of topic models, focusing on identifying the quality criteria, evaluation methods, and metrics that have been used to evaluate them. The results of the mapping study were then used to identify the most used quality criteria, and the evaluation methods related to those criteria were used to generate two optimized topic models. An experiment was conducted in which the topics generated by the two topic models were presented to a group of 20 subjects; the task was designed to evaluate the interpretability of the generated topics. The performance of the two topic models was then compared using Precision, Recall, and F-measure.
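The comparison metrics named above (Precision, Recall, F-measure) can be illustrated with a short sketch. The interpretability judgements below are hypothetical and are not the experiment's data; the sketch only shows how the three scores are derived from true/false positives and negatives.

```python
# Illustrative computation of Precision, Recall and F-measure for a binary
# interpretability judgement (1 = topic judged interpretable). Hypothetical data.

def precision_recall_f1(actual, predicted):
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# Subjects' judgements vs. a topic model's output (both hypothetical)
actual    = [1, 1, 1, 0, 0, 1, 0, 1]
predicted = [1, 1, 0, 0, 1, 1, 0, 1]
p, r, f = precision_recall_f1(actual, predicted)
print(round(p, 2), round(r, 2), round(f, 2))  # prints: 0.8 0.8 0.8
```

F-measure is the harmonic mean of precision and recall, which is why it is used in the study as the single score for comparing the two topic models.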

    Results. Based on the results obtained from the mapping study, Latent Dirichlet Allocation (LDA) was found to be the most widely used topic model. Two LDA topic models were created, one optimized for the quality criterion Generalizability (TG) and one for Interpretability (TI), using the Perplexity and Point-wise Mutual Information (PMI) measures respectively. For the selected metrics, TI showed better performance than TG in Precision and F-measure, while the performance of the two was comparable in Recall. The total run time of TI was also significantly higher than that of TG: 46 hours and 35 minutes for TI, versus 3 hours and 30 minutes for TG.
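As a rough illustration of the PMI measure used to optimize TI, the sketch below computes pairwise PMI over a tiny, invented document collection; it is not the thesis's implementation, and the documents and topic words are hypothetical.

```python
# Illustrative sketch of Point-wise Mutual Information (PMI) as a topic
# coherence signal: PMI(w1, w2) = log( P(w1, w2) / (P(w1) * P(w2)) ),
# with probabilities estimated from document co-occurrence.
# The tiny "corpus" below is invented for illustration.

import math
from itertools import combinations

docs = [
    {"test", "case", "priority"},
    {"test", "case", "bug"},
    {"topic", "model", "test"},
    {"topic", "model", "case"},
]

def pmi(w1, w2, docs):
    n = len(docs)
    p1 = sum(w1 in d for d in docs) / n
    p2 = sum(w2 in d for d in docs) / n
    p12 = sum(w1 in d and w2 in d for d in docs) / n
    return math.log(p12 / (p1 * p2)) if p12 > 0 else float("-inf")

def topic_coherence(words, docs):
    """Average pairwise PMI over the top words of a topic."""
    pairs = list(combinations(words, 2))
    return sum(pmi(a, b, docs) for a, b in pairs) / len(pairs)

print(round(topic_coherence(["test", "case"], docs), 3))
```

Topics whose top words frequently co-occur score higher; optimizing LDA for this coherence score is what tends to make topics more interpretable to human subjects, at higher computational cost.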

    Conclusions. Looking at the F-measure, it can be concluded that the interpretability topic model (TI) performs better than the generalizability topic model (TG). However, while TI performed better in precision, recall was comparable. Furthermore, the computational cost of creating TI is significantly higher than that of TG. Hence, we conclude that the choice of topic model optimization should be based on the aim of the task the model is used for. If the task requires high interpretability of the model and precision is important, such as for the prioritization of test cases based on content, then TI would be the right choice, provided time is not a limiting factor. However, if the task aims at generating topics that provide a basic understanding of the concepts (i.e., interpretability is not a high priority), then TG is the more suitable choice, which also makes it better for time-critical tasks.

  • 492.
    Sauerwein, Clemens
    et al.
    University of Innsbruck, AUT.
    Pekaric, Irdin
    University of Innsbruck, AUT.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Breu, Ruth
    University of Innsbruck, AUT.
    An Analysis and Classification of Public Information Security Data Sources used in Research and Practice2019Inngår i: Computers & security (Print), ISSN 0167-4048, E-ISSN 1872-6208, Vol. 82, s. 140-155Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    In order to counteract today's increasingly sophisticated and numerous cyber threats, the timely acquisition of information regarding vulnerabilities, attacks, threats, countermeasures and risks is crucial. Therefore, employees tasked with information security risk management processes rely on a variety of information security data sources, ranging from inter-organizational threat-intelligence sharing platforms to public information security data sources such as mailing lists or expert blogs. However, research and practice lack a comprehensive overview of these public information security data sources, their characteristics and dependencies. Moreover, comprehensive knowledge about these sources would make it possible to use and integrate them systematically into information security processes. In this paper, a triangulation study is conducted to identify and analyze public information security data sources. Furthermore, a taxonomy is introduced to classify and compare these data sources along six dimensions: (1) Type of information, (2) Integrability, (3) Timeliness, (4) Originality, (5) Type of source, and (6) Trustworthiness. In total, 68 public information security data sources were identified and classified. The investigation showed that research and practice rely on a large variety of heterogeneous information security data sources, which makes it difficult to integrate and use them in information security and risk management processes.

  • 493.
    Schlick, Rupert
    et al.
    Austrian Institute of Technology, AUT.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Majzik, Istvan
    Budapest University of Technology and Economics, HUN.
    Nardone, Roberto
    Universita degli Studi di Napoli Federico II, ITA.
    Raschke, Alexander
    Universitat Ulm, DEU.
    Snook, Colin
    University of Southampton, GBR.
    Vittorini, Valeria
    Universita degli Studi di Napoli Federico II, ITA.
    A proposal of an example and experiments repository to foster industrial adoption of formal methods (2018). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Verlag, 2018, Vol. 11247, p. 249-272. Conference paper (Refereed)
    Abstract [en]

    Formal methods (in a broad sense) have been around almost since the beginning of computer science. Nonetheless, there is a perception in the formal methods community that take-up by industry is low considering the potential benefits. We take a look at possible reasons and give candidate explanations for this effect. To address the issue, we propose a repository of industry-relevant example problems with an accompanying open data storage for experiment results in order to document, disseminate and compare exemplary solutions from formal model based methods. This would allow potential users from industry to better understand the available solutions and to more easily select and adopt a formal method that fits their needs. At the same time, it would foster the adoption of open data and good scientific practice in this research field. © Springer Nature Switzerland AG 2018.

  • 494.
    Seidi, Nahid
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Document-Based Databases In Platform SW Architecture For Safety Related Embedded System (2014). Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [en]

    The project investigates Document-based databases, their evaluation criteria, and their use cases regarding requirements management, SW architecture and test management, in order to set up an Embedded Systems Lifecycle Management (ESLM) tool. The current database used in the ESLM is a graph database called Neo4j, which meets the needs of the current system. The study of Document databases led to the decision not to use a Document database for the system. Instead, given the requirements, a combination of a Graph database and a Document database could be the practical solution in the future.

  • 495.
    Selander, Nizar
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik. Ericsson.
    Resource utilization comparison of Cassandra and Elasticsearch (2019). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Elasticsearch and Cassandra are two of the most widely used databases today, with Elasticsearch showing a more recent resurgence due to its unique full-text search feature, akin to that of a search engine, contrasting with the conventional query-language-based methods used to perform data searching and retrieval operations.

    The demand for more powerful and better performing, yet more feature-rich and flexible, databases has been ever growing. This project attempts to study how the two databases perform under a specific workload of 2,000,000 fixed-size logs, in an environment where the two can be compared while keeping the results of the experiment meaningful for the production environment for which they are intended.

    A total of three benchmarks were carried out: an Elasticsearch deployment using the default configuration, and two Cassandra deployments, one with the default configuration and one with a modified configuration that reflects a configuration currently running in production for the task at hand.

    The benchmarks showed very interesting performance differences in terms of CPU, memory and disk space usage. Elasticsearch showed the best performance overall, using significantly less memory and disk space, as well as less CPU to some degree.

    However, the benchmarks were done with a very specific set of configurations, a very specific data set, and a specific workload. These limitations should be considered when comparing the benchmark results.

  • 496.
    Selvi, Mehmet
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Büyükcan, Güral
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Influential factors affecting the undesired fault correction outcomes in large-scaled companies (2014). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context. The fault correction process is one of the two main activities in the software evolution model. As it is very important for software maintainability, the software industry, especially large-scale global companies, aims to have mature fault correction processes that detect faults and correct them in a continuous and efficient way. A considerable amount of effort is needed, and measures should be taken, in order to be successful. This master thesis is mainly concerned with fault correction and with finding possible solutions for a better process. Objectives. The main aim of this study is to investigate and identify influential factors affecting undesired fault correction outcomes. The study has three main stages: 1) identifying factors from company data that affect the target factors, 2) eliciting influential factors from interviews and a literature review, and 3) prioritizing the influential factors based on their significance. Based on the outcomes, giving recommendations to the company and to the software industry is the other aim of this master thesis. Methods. This study reflects empirical research on the software fault correction process and its undesired outcomes. Both quantitative and qualitative data analyses were performed. A case study was conducted with Ericsson AB, in which the archival data were analyzed using several methods, including Machine Learning and Apriori. Surveys and semi-structured interviews were also used as data collection instruments. In addition, a literature review was performed in order to collect influential factors for the fault correction process. Prioritization of the influential factors was done using hierarchical cumulative voting. Results. Through the case study, quantitative data analysis, interviews and the literature review, a total of 45 influential factors were identified. Using these factors, prioritization was performed with 26 practitioners (4 internal and 22 external) in order to find which factors are most a) significant and b) relevant to undesired fault correction outcomes. Based on the outcomes of the prioritization, a cause-effect diagram was drawn that includes all the important factors. Conclusions. This research showed that many factors influence the fault correction process. The practitioners mostly complained that fault corrections are not analyzed in depth: they do not result in new requirements and are not used for process improvement. Limited resources (such as work force, vacations and sickness), unbalanced fault correction task assignment and too many fault reports arriving at the same time also cause problems. Moreover, the priorities of faults and customers affect the lead time of the fault correction process, as the most critical faults are fixed first.
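The hierarchical cumulative voting used for the prioritization builds on cumulative voting (the "$100 method"), in which each practitioner distributes 100 points across the candidate factors and the per-factor sums give the ranking. A minimal sketch of that aggregation step, with invented factor names and vote counts:

```python
# Each ballot: one practitioner's distribution of 100 points over
# candidate factors. Factor names and point values are illustrative.
votes = [
    {"limited resources": 40, "unbalanced assignment": 35, "fault priorities": 25},
    {"limited resources": 20, "unbalanced assignment": 50, "fault priorities": 30},
    {"limited resources": 50, "unbalanced assignment": 10, "fault priorities": 40},
]

totals = {}
for ballot in votes:
    assert sum(ballot.values()) == 100  # every ballot spends exactly 100 points
    for factor, points in ballot.items():
        totals[factor] = totals.get(factor, 0) + points

# Factors sorted by total points, highest priority first.
ranking = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

The hierarchical variant applies the same aggregation at two levels (first to groups of factors, then to factors within each group); the flat version above shows the core computation.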

  • 497.
    Sentilles, Severine
    et al.
    Malardalen Univ, SWE.
    Papatheocharous, Efi
    Swedish Inst Comp Sci, SWE.
    Ciccozzi, Federico
    Malardalen Univ, SWE.
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A Property Model Ontology (2016). In: 2016 42nd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2016, p. 165-172. Conference paper (Refereed)
    Abstract [en]

    Efficient development of high quality software is tightly coupled to the ability to quickly take complex decisions based on trustworthy facts. In component-based software engineering, the decisions related to selecting the most suitable component among functionally-equivalent ones are of paramount importance. Despite sharing the same functionality, components differ in terms of their extra-functional properties. Therefore, to make informed selections, it is crucial to evaluate extra-functional properties in a systematic way. To date, many properties and evaluation methods exist that are not necessarily compatible with each other. The property model ontology presented in this paper represents the first step towards providing a systematic way to describe extra-functional properties and their evaluation methods, and thus making them comparable. This is beneficial from two perspectives. First, it aids researchers in identifying comparable property models as a guide for empirical evaluations. Second, practitioners are supported in choosing among alternative evaluation methods for the properties of their interest. The use of the ontology is illustrated by instantiating a subset of property models relevant in the automotive domain.
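A property model that pairs an extra-functional property with the methods that evaluate it can be sketched as a pair of record types. The class names, fields, and the comparability rule below are invented for illustration; they are not the ontology's actual terms.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationMethod:
    """How a property value is obtained (names illustrative)."""
    name: str
    kind: str  # e.g. "measurement", "estimation", "simulation"

@dataclass
class PropertyModel:
    """An extra-functional property plus the methods that evaluate it."""
    property_name: str
    unit: str
    methods: list = field(default_factory=list)

    def comparable_with(self, other: "PropertyModel") -> bool:
        # In this sketch, two models are comparable when they describe
        # the same property expressed in the same unit.
        return (self.property_name == other.property_name
                and self.unit == other.unit)

# Two models of the same property, evaluated by different methods:
wcet_a = PropertyModel("worst-case execution time", "ms",
                       [EvaluationMethod("static analysis", "estimation")])
wcet_b = PropertyModel("worst-case execution time", "ms",
                       [EvaluationMethod("hardware tracing", "measurement")])
```

Making comparability an explicit, checkable relation is the point of such an ontology: alternative evaluation methods for the same property become interchangeable candidates rather than incompatible silos.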

  • 498.
    Settenvini, Matteo
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Algorithmic Analysis of Name-Bounded Programs: From Java programs to Petri Nets via π-calculus (2014). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context. Name-bounded analysis is a type of static analysis that allows us to take a concurrent program, abstract away from it, and check for some interesting properties, such as deadlock-freedom, or watching the propagation of variables across different components or layers of the system. Objectives. In this study we investigate the difficulties of giving a representation of computer programs in a name-bounded variation of π-calculus. Methods. A preliminary literature review is conducted to assess the presence (or lack thereof) of other successful translations from real-world programming languages to π-calculus, as well as for the presence of relevant prior art in the modelling of concurrent systems. Results. This thesis gives a novel translation from a relevant subset of the Java programming language to its corresponding name-bounded π-calculus equivalent. In particular, the strengths of our translation are being able to dispose of names representing inactive objects when there are no circular references, and a transparent handling of polymorphism and dynamic method resolution. The resulting processes can then be further transformed into their Petri Net representation, enabling us to check for important properties, such as reachability and coverability of program states. Conclusions. We conclude that some important properties that are not, in general, easy to check for concurrent programs can in fact be feasibly determined by giving a more constrained model in π-calculus first, and as Petri Nets afterwards.
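Reachability over a Petri net, the final analysis step mentioned above, can be decided for finite state spaces by breadth-first exploration of markings. A minimal sketch over an invented two-place net (not the thesis's Java translation); a marking is a tuple of token counts per place:

```python
from collections import deque

# An invented two-place net: each transition consumes tokens from and
# produces tokens to the places, given as per-place count vectors.
transitions = [
    {"consume": (1, 0), "produce": (0, 1)},  # move a token p0 -> p1
    {"consume": (0, 1), "produce": (1, 0)},  # move it back p1 -> p0
]

def fire(marking, t):
    """Return the successor marking, or None if t is not enabled."""
    if any(m < c for m, c in zip(marking, t["consume"])):
        return None
    return tuple(m - c + p for m, c, p in
                 zip(marking, t["consume"], t["produce"]))

def reachable(initial):
    """Breadth-first exploration of the (finite) set of reachable markings."""
    seen, queue = {initial}, deque([initial])
    while queue:
        m = queue.popleft()
        for t in transitions:
            succ = fire(m, t)
            if succ is not None and succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

states = reachable((1, 0))
```

A target program state is reachable exactly when its marking appears in the explored set; coverability asks the weaker question of whether some reachable marking dominates the target component-wise.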

  • 499.
    Seyff, Norbert
    et al.
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Stade, Melanie
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Fotrousi, Farnaz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Glinz, Martin
    University of Zurich, CHE.
    Guzman, Emitza
    University of Zurich, CHE.
    Kolpondinos-Huber, Martina
    University of Zurich, CHE.
    Arzapalo, Denisse Muñante
    Fondazione Bruno Kessler, ITA.
    Oriol, Marc
    Universitat Politècnica de Catalunya, ESP.
    Schaniel, Ronnie
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    End-user driven feedback prioritization (2017). In: CEUR Workshop Proceedings / [ed] Ameller D., Dieste O., Knauss E., Susi A., Dalpiaz F., Kifetew F.M., Tenbergen B., Palomares C., Seffah A., Forbrig P., Berry D.M., Daneva M., Knauss A., Siena A., Daun M., Herrmann A., Kirikova M., Groen E.C., Horkoff J., Maeder P., Massacci F., Ralyte J., CEUR-WS, 2017, Vol. 1796. Conference paper (Refereed)
    Abstract [en]

    End-user feedback is becoming more important for the evolution of software systems. There exist various communication channels for end-users (app stores, social networks) which allow them to express their experiences and requirements regarding a software application. End-users communicate a large amount of feedback via these channels, which leads to open issues regarding the use of end-user feedback for software development, maintenance and evolution. This includes investigating how to identify relevant feedback scattered across different feedback channels and how to determine the priority of the feedback issues communicated. In this research preview paper, we discuss ideas for end-user driven feedback prioritization. © Copyright 2017 for this paper by its authors.

  • 500.
    Shafiq, Hafiz Adnan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Arshad, Zaki
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Automated Debugging and Bug Fixing Solutions: A Systematic Literature Review and Classification (2013). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context: Bug fixing is the process of ensuring correct source code and is done by developers. Automated debugging and bug fixing solutions minimize human intervention and hence minimize the chance of introducing new bugs into the corrected program. Scope and Objectives: In this study we performed a detailed systematic literature review. The scope of the work is to identify all those solutions that correct software automatically or semi-automatically. Solutions for automatic correction of software do not need human intervention, while semi-automatic solutions facilitate a developer in fixing a bug. We aim to gather all such solutions for fixing bugs in design, i.e., code, UML design, algorithms and software architecture. Automated detection, isolation and localization of bugs are not in our scope. Moreover, we are only concerned with software bugs, excluding the hardware and networking domains. Methods: A detailed systematic literature review (SLR) has been performed. A number of bibliographic sources were searched, including Inspec, IEEE Xplore, the ACM digital library, Scopus, Springer Link and Google Scholar. Inclusion/exclusion, study quality assessment, data extraction and synthesis were performed in depth according to the guidelines provided for performing an SLR. Grounded theory was used to analyze the literature data. To check the agreement level between the two researchers, Kappa analysis was used. Results: Through the SLR we identified 46 techniques. These techniques are classified into automated and semi-automated debugging and bug fixing. The strengths and weaknesses of each are identified, along with which types of bugs each can fix and in which languages they can be implemented. Finally, a classification is performed which generates a list of approaches, techniques, tools, frameworks, methods and systems. Alongside this classification and categorization, we separated bug fixing and debugging on the basis of search algorithms. Conclusion: The achieved results cover all automated and semi-automated debugging and bug fixing solutions that are available in the literature. The strengths/benefits and weaknesses/limitations of these solutions are identified. We also identify the types of bugs that can be fixed using these solutions, and the programming languages in which they can be implemented. In the end a detailed classification is performed.
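The Kappa analysis used to check the agreement level between the two researchers is typically Cohen's kappa, which corrects the observed agreement for the agreement expected by chance. A minimal sketch with invented inclusion/exclusion ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the chance agreement from the marginals."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((count_a[l] / n) * (count_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Invented example: two reviewers decide include/exclude for 10 studies.
a = ["in", "in", "out", "in", "out", "out", "in", "in", "out", "in"]
b = ["in", "out", "out", "in", "out", "out", "in", "in", "out", "in"]
kappa = cohens_kappa(a, b)
```

Here the reviewers agree on 9 of 10 studies (p_o = 0.9) against a chance agreement of p_e = 0.5, giving kappa = 0.8, conventionally read as substantial agreement.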
