Results 101 - 150 of 534
  • 101.
    Chodapaneedi, Mani Teja
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Manda, Samhith
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Engagement of Developers in Open Source Projects: A Multi-Case Study2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

     Companies increasingly use open source projects to improve innovation and productivity, which helps them sustain their competitiveness. These projects involve developers across the globe who may also contribute to several other projects and who continuously engage with a project to improve it. In every open source project, the intensity and motivation with which developers engage and contribute vary over time.

    The first aim of this research is to identify how the engagement and activity of developers in open source projects vary over time. The second aim is to assess the reasons for the variation in the engagement activities of developers involved in different open source projects.

    Firstly, a literature review was conducted to identify the available metrics for analysing developer engagement in open source projects. Secondly, we conducted a multi-case study investigating developer engagement in 10 different open source projects of the Apache foundation. The GitHub repositories were mined to gather data on the developers' engagement activities in the selected projects. To identify the reasons for the variation in developer engagement and activity, we analysed documentation about each project and also interviewed 10 developers and 5 instructors, who provided additional insights into the challenges faced when contributing to open source projects.

    The results of this research comprise a list of factors that affect developer engagement with open source projects, extracted from the case studies and strengthened through the interviews. Based on the data collected through repository mining, the selected projects were categorized according to increasing or decreasing developer activity. Using the archival data collected from the selected projects, corporate support, community involvement, the distribution of issues and contributions, and the specificity of guidelines were identified as key factors for the success of open source projects, as reflected in contributor engagement. In addition, insights on working with open source projects were collected from both the developers' and the instructors' perspectives and are presented.

     This research provided a deeper insight into how open source projects work and into the factors that drive the engagement and activity of contributors. It is evident from this research that the stated factors, corporate support, community involvement, distribution of issues and contributions, and specificity of guidelines, affect the engagement and activity of developers. Open source projects that satisfy at least these factors can therefore expect increased engagement and activity from their contributors. The study also identifies the existing challenges and benefits of contributing to open source projects from different perspectives.
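
    As a rough illustration of the kind of repository mining described above, the sketch below counts commits per author per month from a local Git clone. It is only a minimal example of measuring engagement over time under assumed inputs, not the mining pipeline used in the thesis; the repository path and the monthly bucketing are assumptions.

    # Sketch: commits per author per month from a local Git clone (hypothetical path).
    # Assumes the `git` command is available on PATH.
    import subprocess
    from collections import defaultdict

    def commits_per_author_per_month(repo_path):
        """Return {(author_email, 'YYYY-MM'): commit_count} for one repository."""
        log = subprocess.run(
            ["git", "-C", repo_path, "log", "--pretty=format:%ae|%ad", "--date=format:%Y-%m"],
            capture_output=True, text=True, check=True,
        ).stdout
        counts = defaultdict(int)
        for line in log.splitlines():
            author, month = line.split("|", 1)
            counts[(author, month)] += 1
        return counts

    if __name__ == "__main__":
        activity = commits_per_author_per_month("apache-project-clone")  # hypothetical clone
        for (author, month), n in sorted(activity.items(), key=lambda kv: kv[0][1]):
            print(month, author, n)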

  • 102.
    Chunduri, Annapurna
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An Effective Verification Strategy for Testing Distributed Automotive Embedded Software Functions: A Case Study2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. The share and importance of software within automotive vehicles is growing steadily. Most functionalities in modern vehicles, especially safety related functions like advanced emergency braking, are controlled by software. A complex and common phenomenon in today’s automotive vehicles is the distribution of such software functions across several Electronic Control Units (ECUs) and consequently across several ECU system software modules. As a result, integration testing of these distributed software functions has been found to be a challenge. The automotive industry neither has infinite resources, nor has the time to carry out exhaustive testing of these functions. On the other hand, the traditional approach of implementing an ad-hoc selection of test scenarios based on the tester’s experience, can lead to test gaps and test redundancies. Hence, there is a pressing need within the automotive industry for a feasible and effective verification strategy for testing distributed software functions.

    Objectives. Firstly, to identify the current approach used to test the distributed automotive embedded software functions in literature and in a case company. Secondly, to propose and validate a feasible and effective verification strategy for testing the distributed software functions that would help improve test coverage while reducing test redundancies and test gaps.

    Methods. To accomplish the objectives, a case study was conducted at Scania CV AB, Södertälje, Sweden. One of the data collection methods was through conducting interviews of different employees involved in the software testing activities. Based on the research objectives, an interview questionnaire with open-ended and close-ended questions has been used. Apart from interviews, data from relevant artifacts in databases and archived documents has been used to achieve data triangulation. Moreover, to further strengthen the validity of the results obtained, adequate literature support has been presented throughout. Towards the end, a verification strategy has been proposed and validated using existing historical data at Scania.

    Conclusions. The proposed verification strategy to test distributed automotive embedded software functions has given promising results by providing means to identify test gaps and test redundancies. It helps establish an effective and feasible approach to capture function test coverage information that helps enhance the effectiveness of integration testing of the distributed software functions. 
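
    As a minimal illustration of what identifying test gaps and redundancies can mean in practice (a sketch over invented data, not the verification strategy proposed in the thesis), the snippet below takes a hypothetical mapping from integration test cases to the requirements they exercise and reports requirements with no covering test as well as requirements covered by several tests.

    # Sketch: derive coverage gaps and redundancies from a test-to-requirement mapping.
    # Test case names and requirement identifiers are invented for illustration.
    def coverage_report(test_to_reqs, all_reqs):
        covered = {}
        for test, reqs in test_to_reqs.items():
            for req in reqs:
                covered.setdefault(req, []).append(test)
        gaps = sorted(set(all_reqs) - set(covered))           # requirements with no test
        redundancies = {r: t for r, t in covered.items() if len(t) > 1}
        return gaps, redundancies

    tests = {
        "TC-01": ["REQ-FUEL-1", "REQ-FUEL-2"],
        "TC-02": ["REQ-FUEL-2"],
        "TC-03": ["REQ-FUEL-2", "REQ-CAN-1"],
    }
    gaps, redundancies = coverage_report(tests, ["REQ-FUEL-1", "REQ-FUEL-2", "REQ-CAN-1", "REQ-CAN-2"])
    print("Gaps:", gaps)
    print("Redundancies:", redundancies)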

  • 103.
    Chunduri, Annapurna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Adenmark, Mikael
    Scania AB, SWE.
    An effective verification strategy for testing distributed automotive embedded software functions: A case study2016In: Lecture Notes in Computer Science / [ed] Amasaki S.,Mikkonen T.,Felderer M.,Abrahamsson P.,Duc A.N.,Jedlitschka A., 2016, p. 233-248Conference paper (Refereed)
    Abstract [en]

    Integration testing of automotive embedded software functions that are distributed across several Electronic Control Unit (ECU) system software modules is a complex and challenging task in today’s automotive industry. They neither have infinite resources, nor have the time to carry out exhaustive testing of these functions. On the other hand, the traditional approach of implementing an ad-hoc selection of test scenarios based on the testers’ experience typically leads to both test gaps and test redundancies. Here, we address this challenge by proposing a verification strategy that enhances the process in order to identify and mitigate such gaps and redundancies in automotive system software testing. This helps increase test coverage by taking more data-driven decisions for integration testing of the functions. The strategy was developed in a case study at a Swedish automotive company that involved multiple data collection steps. After static validation of the proposed strategy it was evaluated on one distributed automotive software function, the Fuel Level Display, and found to be both feasible and effective. © Springer International Publishing AG 2016.

  • 104.
    Cicchetti, Antonio
    et al.
    Malardalen Univ, Vasteras, Sweden..
    Borg, Markus
    SICS Swedish Inst Comp Sci, Kista, Sweden..
    Sentilles, Severine
    Malardalen Univ, Vasteras, Sweden..
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Carlson, Jan
    Malardalen Univ, Vasteras, Sweden..
    Papatheocharous, Efi
    SICS Swedish Inst Comp Sci, Kista, Sweden..
    Towards Software Assets Origin Selection Supported by a Knowledge Repository2016In: PROCEEDINGS 2016 1ST INTERNATIONAL WORKSHOP ON DECISION MAKING IN SOFTWARE ARCHITECTURE, IEEE Computer Society, 2016, p. 22-29Conference paper (Refereed)
    Abstract [en]

    Software architecture is no longer merely the system specification that results from the design phase; it also encompasses the process by which that specification was produced. In this respect, design decisions in component-based software engineering play an important role: they are used to enhance the quality of the system, maintain the current market position, preserve partnership relationships, reduce costs, and so forth. For non-trivial systems, a recurring situation is the selection of an asset origin, that is, deciding between in-house development, outsourcing, open source, or COTS when a certain functionality is missing. Usually, the decision-making process follows a case-by-case approach in which historical information is largely neglected: this avoids the overhead of keeping detailed documentation about past decisions, but it hampers consistency among multiple, possibly related, decisions. The ORION project aims at developing a decision support framework in which historical decision information plays a pivotal role: it is used to analyse current decision scenarios, take well-founded decisions, and store the collected data for future exploitation. In this paper, we outline the potential of such a knowledge repository, including the information intended to be stored in it, and when and how to retrieve it within a decision case.

  • 105.
    Clark, David
    et al.
    UCL, GBR.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Poulding, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Yoo, Shin
    UCL, GBR.
    Information Transformation: An Underpinning Theory for Software Engineering2015In: 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, Vol 2, IEEE , 2015, p. 599-602Conference paper (Refereed)
    Abstract [en]

    Software engineering lacks underpinning scientific theories both for the software it produces and the processes by which it does so. We propose that an approach based on information theory can provide such a theory, or rather many theories. We envision that such a benefit will be realised primarily through research based on the quantification of information involved and a mathematical study of the limiting laws that arise. However, we also argue that less formal but more qualitative uses for information theory will be useful. The main argument in support of our vision is based on the fact that both a program and an engineering process to develop such a program are fundamentally processes that transform information. To illustrate our argument we focus on software testing and develop an initial theory in which a test suite is input/output adequate if it achieves the channel capacity of the program as measured by the mutual information between its inputs and its outputs. We outline a number of problems, metrics and concrete strategies for improving software engineering, based on information theoretical analyses. We find it likely that similar analyses and subsequent future research to detail them would be generally fruitful for software engineering.
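
    To make the information-theoretic adequacy idea above a little more concrete, here is a small sketch that estimates the empirical mutual information between a test suite's inputs and the outputs a program produces for them. The program under test and the test inputs are invented placeholders, and the sketch illustrates only the general notion, not the theory developed in the paper.

    # Sketch: empirical mutual information I(X;Y) between test inputs and observed outputs.
    # The program under test and the test suite below are hypothetical.
    from collections import Counter
    from math import log2

    def mutual_information(pairs):
        """Estimate I(X;Y) in bits from observed (input, output) pairs."""
        n = len(pairs)
        px = Counter(x for x, _ in pairs)
        py = Counter(y for _, y in pairs)
        pxy = Counter(pairs)
        return sum(
            (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
            for (x, y), c in pxy.items()
        )

    def program_under_test(x):    # hypothetical program: collapses all inputs >= 3
        return min(x, 3)

    test_suite = [0, 1, 2, 3, 4, 5]
    observations = [(x, program_under_test(x)) for x in test_suite]
    print(f"I(inputs; outputs) = {mutual_information(observations):.3f} bits")

    In this reading, a suite whose observed mutual information approaches the channel capacity of the program would be considered input/output adequate.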

  • 106.
    Cosic Prica, Srdjan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Video Games and Software Engineers: Designing a study based on the benefits from Video Games and how they can improve Software Engineers2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context: This study investigates whether playing video games can improve skills and characteristics of a software engineer. Due to a lack of resources and time, the focus is on designing a study that others can use to measure the results and determine whether video games actually can improve software engineers.

    Objectives: The main objectives are to identify the benefits of playing video games and how those benefits arise, that is, what types of games, and for how long, someone needs to play in order to be affected and show improvement. Another objective is to find out what skills are requested and required of a software engineer. The study is then designed based on the information gathered.

    Methods: The work involves a substantial literature study. The method is parallel research: while reading about the benefits of playing video games, corresponding benefits are sought among the skills requested and required of software engineers.

    Results: Many cognitive benefits of video games are also beneficial for software engineers. No recorded limit was found for how long a study can let participants play video games before negative consequences appear. This means that the study designed from the gathered information is highly customizable, and many different results can be measured.

    Conclusions: There is a very high chance that playing video games can result in better software engineers, because the benefits that games provide are connected to skills requested and required by employers and by expert software engineers who have been in the business for a long time and have high levels of responsibility over teams of software engineers.

  • 107. Cousin, Philippe
    et al.
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Felmy, Dean
    Le Gall, Franck
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Validation and Quality in FI-PPP e-Health Use Case, FI-STAR Project2014Conference paper (Refereed)
  • 108.
    de Carvalho, Renata M.
    et al.
    Univ Quebec, LATECE Lab, Montreal, PQ, Canada..
    Mili, Hafedh
    Univ Quebec, LATECE Lab, Montreal, PQ, Canada..
    Boubaker, Anis
    Univ Quebec, LATECE Lab, Montreal, PQ, Canada..
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ringuette, Simon
    Trisotech Inc, Montreal, PQ, Canada..
    On the analysis of CMMN expressiveness: revisiting workflow patterns2016In: 2016 IEEE 20TH INTERNATIONAL ENTERPRISE DISTRIBUTED OBJECT COMPUTING WORKSHOP (EDOCW), 2016, p. 54-61Conference paper (Refereed)
    Abstract [en]

    Traditional business process modeling languages use an imperative style to specify all possible execution flows, leaving little flexibility to process operators. Such languages are appropriate for low-complexity, high-volume, mostly automated processes. However, they are inadequate for case management, which involves low-volume, high-complexity, knowledge-intensive work processes of today's knowledge workers. OMG's Case Management Model and Notation (CMMN), which uses a declarative style to specify constraints placed at a process execution, aims at addressing this need. To the extent that typical case management situations do include at least some measure of imperative control, it is legitimate to ask whether an analyst working exclusively in CMMN can comfortably model the range of behaviors s/he is likely to encounter. This paper aims at answering this question by trying to express the extensive collection of Workflow Patterns in CMMN. Unsurprisingly, our study shows that the workflow patterns fall into three categories: 1) the ones that are handled by CMMN basic constructs, 2) those that rely on CMMN's engine capabilities and 3) the ones that cannot be handled by current CMMN specification. A CMMN tool builder can propose patterns of the second category as companion modeling idioms, which can be translated behind the scenes into standard CMMN. The third category is problematic, however, since its support in CMMN tools will break model interoperability.

  • 109.
    de la Vara, Jose Luis
    et al.
    Carlos III University of Madrid, ESP.
    Borg, Markus
    SICS Swedish ICT AB, SWE.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Moonen, Leon
    Certus Centre for Software V&V, NOR.
    An Industrial Survey of Safety Evidence Change Impact Analysis Practice2016In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 42, no 12, p. 1095-1117Article in journal (Refereed)
    Abstract [en]

    Context. In many application domains, critical systems must comply with safety standards. This involves gathering safety evidence in the form of artefacts such as safety analyses, system specifications, and testing results. These artefacts can evolve during a system's lifecycle, creating a need for change impact analysis to guarantee that system safety and compliance are not jeopardised. Objective. We aim to provide new insights into how safety evidence change impact analysis is addressed in practice. The knowledge about this activity is limited despite the extensive research that has been conducted on change impact analysis and on safety evidence management. Method. We conducted an industrial survey on the circumstances under which safety evidence change impact analysis is addressed, the tool support used, and the challenges faced. Results. We obtained 97 valid responses representing 16 application domains, 28 countries, and 47 safety standards. The respondents had most often performed safety evidence change impact analysis during system development, from system specifications, and fully manually. No commercial change impact analysis tool was reported as used for all artefact types and insufficient tool support was the most frequent challenge. Conclusion. The results suggest that the different artefact types used as safety evidence co-evolve. In addition, the evolution of safety cases should probably be better managed, the level of automation in safety evidence change impact analysis is low, and the state of the practice can benefit from over 20 improvement areas.

  • 110.
    Deekonda, Rahul
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Sirigudi, Prithvi Raj
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Assessment of Agile Maturity Models: A Survey2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. In recent years, Agile has gained great importance in the field of software development. Many organizations and software practitioners have already adopted agile practices because of their flexibility, and agile development methodologies have replaced traditional development methods. Agile is a family of several methodologies, namely Scrum, eXtreme Programming (XP) and several others. These methods embed different sets of agile practices for organizations to adopt and implement in their development processes. However, there is still a need for empirical research to understand the benefits of implementing the agile practices that contribute to the overall success of a software project. Several agile maturity models have been published over the past decade, but not all of them have been empirically validated. Hence, additional research in the context of agile maturity is essential and needed.

    Objectives. This study focuses on providing comprehensive knowledge of agile maturity models, which help guide organizations in the implementation of agile practices. Several maturity models have been published, each recommending a different set of agile practices to industry. The primary aim is to compare the agile maturity models and to investigate how the agile practices are implemented in industry. The benefits and limitations faced by software practitioners when implementing agile practices are then identified.

    Methods. For this research, an industrial survey was conducted to identify the agile practices that are implemented in industry. In addition, the survey aimed at identifying the benefits and limitations of implementing agile practices. A literature review was conducted to identify the order of agile practices recommended in the agile maturity models in the literature.

    Results. From the available literature, nine maturity models with their sets of recommended agile practices were extracted. The results from the survey and the literature were then compared and analyzed to see whether there are commonalities or differences regarding the implementation of agile practices in a certain order. From the survey results, the benefits and limitations of implementing agile practices in a particular order were identified and reported.

    Conclusions. The findings from the literature review and the survey result in an evaluation of the agile maturity models with regard to the implementation of agile practices.

  • 111.
    Demirsoy, Ali
    et al.
    Borsa Istanbul, TUR.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Semantic Knowledge Management System to Support Software Engineers: Implementation and Static Evaluation through Interviews at Ericsson2018In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 12, no 1, p. 237-263Article in journal (Refereed)
    Abstract [en]

    Background: In large-scale corporations in the software engineering context information overload problems occur as stakeholders continuously produce useful information on process life-cycle issues, matters related to specific products under development, etc. Information overload makes finding relevant information (e.g., how did the company apply the requirements process for product X?) challenging, which is in the primary focus of this paper. Contribution: In this study the authors aimed at evaluating the ease of implementing a semantic knowledge management system at Ericsson, including the essential components of such systems (such as text processing, ontologies, semantic annotation and semantic search). Thereafter, feedback on the usefulness of the system was collected from practitioners. Method: A single case study was conducted at a development site of Ericsson AB in Sweden. Results: It was found that semantic knowledge management systems are challenging to implement, this refers in particular to the implementation and integration of ontologies. Specific ontologies for structuring and filtering are essential, such as domain ontologies and ontologies distinct to the organization. Conclusion: To be readily adopted and transferable to practice, desired ontologies need to be implemented and integrated into semantic knowledge management frameworks with ease, given that the desired ontologies are dependent on organizations and domains.
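
    To give a feel for the components listed above (text processing, ontologies, semantic annotation and semantic search), here is a toy sketch in which documents are annotated with ontology concepts and a query concept also matches documents tagged with any of its sub-concepts. The ontology and documents are invented; the system studied at Ericsson was considerably richer.

    # Toy sketch: annotate documents with ontology concepts and search by concept,
    # where a query concept also matches its sub-concepts. All data is hypothetical.
    ONTOLOGY = {  # concept -> parent concept
        "requirements process": "process",
        "testing process": "process",
        "product X": "product",
    }

    def is_a(concept, ancestor):
        while concept is not None:
            if concept == ancestor:
                return True
            concept = ONTOLOGY.get(concept)
        return False

    DOCUMENTS = {
        "doc-1": {"requirements process", "product X"},
        "doc-2": {"testing process"},
    }

    def semantic_search(query_concept):
        return [doc for doc, tags in DOCUMENTS.items()
                if any(is_a(tag, query_concept) for tag in tags)]

    print(semantic_search("process"))  # both documents match via their sub-concepts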

  • 112.
    Dennis, Rojas
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Värdeskapande i agil systemutveckling: En komparativ studie mellan mjukvaruverksamheter i Karlskronaregionen och om hur de ser på värdeskapande2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This thesis investigates how five companies interpret and deliver value in their software processes. The analysis uses the Software Value Map model, which can serve as a tool for decision making in value creation. The purpose is to understand how different decisions affect the value of each delivery and product. By studying economic and decision theories, we understand the importance and impact they have on value creation when products are developed. The results of this study show that local businesses prioritize customer-based value aspects to generate value. There are also similarities and differences among staff and in how companies value different aspects that generate value.

  • 113.
    Devulapally, Gopi Krishna
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Agile in the context of Software Maintenance: A Case Study2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Adopting agile practices has proven successful for many practitioners, both academically and practically, in development scenarios. However, the adoption of agile practices during software maintenance has only been partially studied, and the existing work focuses mostly on benefits. The success factors of agile practices during development cannot simply be transferred to maintenance, as maintenance differs from development in many aspects. The context of this research is the adoption of different agile practices during software maintenance.

    Objectives: In this study, an attempt has been made to accomplish the following objectives: Firstly, to identify different agile practices that are adopted in practice during software maintenance. Secondly, identifying advantages and disadvantages of adopting those agile practices during software maintenance.

    Methods: To accomplish the objectives a case study is conducted at Capgemini, Mumbai, India. Data is collected by conducting two rounds of interviews among five different projects which have adopted agile practices during software maintenance. Close-ended questionnaire and open-ended questionnaires have been used respectively for first and second round interviews. The motivation for selecting different questionnaire is because each round aimed to accomplish different research objectives. Apart from interviews, direct observation of agile practices in each project is done to achieve data triangulation. Finally, a validation survey is conducted among second round interview participants and other practitioners from outside the case study to validate the data collected during second round interviews.

    Results: A simple literature review identified 30 agile practices adopted during software maintenance. On analyzing first round of interviews 22 practices are identified to be mostly adopted and used during software maintenance. The result of adopting those agile practices are categorized as advantages and disadvantages. In total 12 advantages and 8 disadvantages are identified and validated through second round interviews and validation survey respectively. Finally, a cause-effect relationship is drawn among the identified agile practices and consequences.

    Conclusions: Adopting agile practices has both positive and negative results. Adopting agile practices during perfective and adaptive maintenance has more advantages, but adopting agile practices during corrective maintenance may not have as many advantages as for the other types of maintenance. Hence, the type of maintenance work has to be considered before adopting agile practices during software maintenance.

  • 114.
    Diebold, Philipp
    et al.
    Fraunhofer IESE, GER.
    Mendez, Daniel
    Technische Universitat Munchen, GER.
    Wagner, Stefan
    Universitat Stuttgart, GER.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Results of the 2nd international workshop on the impact of agile practices (ImpAct 2017)2017In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2017, Vol. F129907Conference paper (Refereed)
    Abstract [en]

    At present, agile development is a dominating development process in software engineering. Yet, due to different contexts, also agile methods require adaptations (e.g. Scrum-but). Since adaptation means adding, modifying or dropping some agile elements, it is important to know what the effects and importance of these elements are. Given the weak state of empirical evidence in this area, we initiated the workshop series on the Impact of Agile Practices (ImpAct). This paper provides a summary of the second workshop of this series, especially its lightning talks and discussions. The major outcomes include interesting observations such as negatively rated practices and contradicting experiences as well as follow-up activities ordered in a roadmap. © 2017 ACM.

  • 115. Dingsoyr, Torgeir
    et al.
    Moe, Nils Brede
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards Principles of Large-Scale Agile Development: A Summary of the Workshop at XP2014 and a Revised Research Agenda2014In: AGILE METHODS: LARGE-SCALE DEVELOPMENT, REFACTORING, TESTING, AND ESTIMATION, 2014, p. 1-8Conference paper (Refereed)
    Abstract [en]

    Large projects are increasingly adopting agile development practices, and this raises new challenges for research. The workshop on principles of large-scale agile development focused on central topics in large-scale: the role of architecture, inter-team coordination, portfolio management and scaling agile practices. We propose eight principles for large-scale agile development, and present a revised research agenda.

  • 116. Dingsoyr, Torgeir
    et al.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Managing Knowledge in Global Software Development Projects2014In: IT Professional Magazine, ISSN 1520-9202, E-ISSN 1941-045X, Vol. 16, no 1, p. 22-29Article in journal (Refereed)
    Abstract [en]

    How should knowledge be managed in global software development projects? To answer this question, the authors draw on established software engineering research and study three focus groups in two global companies, discussing which knowledge management approaches are appropriate.

  • 117.
    Doležel, Michal
    et al.
    Vysoká škola ekonomická v Praze, CZE.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Organizational patterns between developers and testers: Investigating testers' autonomy and role identity2018In: ICEIS 2018 - Proceedings of the 20th International Conference on Enterprise Information Systems / [ed] Filipe J.,Camp O.,Smialek M.,Filipe J.,Hammoudi S., SciTePress , 2018, Vol. 2, p. 336-344Conference paper (Refereed)
    Abstract [en]

    This paper deals with organizational patterns (configurations, set-ups) between developers/programmers and testers. We firstly discuss the key differences between these two Information Systems Development (ISD) occupations. Highlighting the origin of inevitable disagreements between them, we reflect on the nature of the software testing field that currently undergoes an essential change under the increasing influence of agile ISD approaches and methods. We also deal with the ongoing professionalization of software testing. More specifically, we propose that the concept of role identity anchored in (social) identity theory can be applied to the profession of software testers, and their activities studied accordingly. Furthermore, we conceptualize three organizational patterns (i.e. isolated testers, embedded testers, and eradicated testers) based on our selective literature review of research and practice sources in Information Systems (IS) and Software Engineering (SE) disciplines. After summarizing the key industrial challenges of these patterns, we conclude the paper by calling for more research evidence that would demonstrate the viability of the recently introduced novel organizational models. We also argue that especially the organizational model of "combined software engineering", where the roles of programmers and testers are reunited into a single role of "software engineer", deserves a closer attention of IS and SE researchers in the future. © 2018 by SCITEPRESS - Science and Technology Publications, Lda.

  • 118.
    Dommata, Sandeep Kumar Goud
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Konagala, Samara Chandra Hason
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Impact of Group Dynamics on Teams Working in Software Engineering2014Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context: Group dynamics play an important role in software projects. All of the existing software engineering methodologies (like Rational Unified Process, Microsoft Solutions Framework, Agile, etc.) use the concept of teamwork and emphasize the need to manage teams in order to organize the business processes in the best way. The application of group dynamics techniques is aimed at improving teamwork management to make it more efficient. The implementation of group dynamics techniques has an impact on teams working in software engineering, and it also faces challenges in industry, such as a lack of resources and preparation. Both need additional investigation with regard to the actual practice in industry. Objectives: This work is devoted to the identification of group dynamics techniques and their impact on teams in the context of industrial software development projects. The objectives of the research are to identify the group dynamics techniques that exist and are actually used in an industrial software engineering context, as well as their impact and the methods used to evaluate it. Since the application of group dynamics techniques is not a trivial task, we also identify the associated challenges and corresponding mitigation strategies. Methods: The basic methods applied during the research are a systematic literature review and a survey. The literature review was used to collect data on group dynamics techniques, their impact, and the challenges of implementing them. The survey and additional interviews with practitioners from software development companies were conducted to find out which of the techniques are applied in practice. Results: Based on the data from the systematic literature review, we identified group dynamics techniques such as equalizing participation, electronic communication, conflict resolution, summarizing, whole- and small-group discussions, brainstorming, etc. The discovered impacts include team performance and cohesiveness, staff satisfaction and communication quality, software quality, sound decision-making and knowledge sharing. The possible challenges of implementing group dynamics techniques are a company's limited resources, lack of leadership and preparation, over-dominance of some team members and cultural diversity. The survey provided additional information about the importance of the mentioned group dynamics techniques and their impact on team performance and cohesiveness, job satisfaction and software quality. Conclusions: We conclude that group dynamics techniques in software development projects influence the performance and cohesiveness of the teamwork as well as the quality of the software solutions and products. The possible challenges can be overcome by promoting open communication and trust among team members, and by additional psychological preparation and training of the facilitator. The research discovered a slight difference between the literature review and survey results. In particular, we found that some group dynamics techniques are overestimated in the literature, while others are undervalued. The survey results also helped to identify that techniques such as small-group discussion, conflict resolution and many more were used by teams of a certain size, which was not possible to discover in the SLR; for example, large teams pay much attention to feedback and electronic communication. The obtained results can be used by software engineering practitioners to organize and rearrange their teamwork, which can positively affect team performance and project success.

  • 119.
    dos Santos Neto, Pedro de Alcântara
    et al.
    Federal University of Piauí, BRA.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Rabêlo, Ricardo de Andrade Lira
    Federal University of Piauí, BRA.
    Cruz, Jonathas Jivago de Almeida
    Federal University of Piauí, BRA.
    Lira, Lira
    Federal University of Piauí, BRA.
    A hybrid approach to suggest software product line portfolios2016In: Applied Soft Computing, ISSN 1568-4946, E-ISSN 1872-9681, Vol. 49, p. 1243-1255Article in journal (Refereed)
    Abstract [en]

    Software product line (SPL) development is a new approach to software engineering which aims at developing a whole range of products. However, although SPL can be useful, there are many challenges regarding the use of the approach. One of the main problems that hinders the adoption of SPL is the complexity of product management. In that context, the scoping problem stands out. One of the existing ways to deal with scoping is product portfolio scoping (PPS). PPS aims to define the products that should be developed as well as their key features. In general, that approach is driven by marketing aspects, such as product cost and customer satisfaction. Defining a product portfolio using the many different available aspects is an NP-hard problem. This work presents an improved hybrid approach to solve the feature model selection problem, aiming at supporting product portfolio scoping. The proposal is based on a hybrid approach that does not depend on any particular algorithm or technology. We have evaluated the usefulness and scalability of our approach using one real SPL (ArgoUML-SPL) and synthetic SPLs. As per the evaluation results, our approach is both useful from a practitioner's perspective and scalable. © 2016 Elsevier B.V.
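
    As a loose illustration of the underlying selection problem (not the hybrid approach proposed in the article), the sketch below greedily picks candidate products under a cost budget while maximizing an assumed customer-satisfaction score. All names and numbers are invented, and a greedy heuristic is only one naive way to attack what the abstract notes is an NP-hard problem.

    # Sketch: greedy product portfolio selection under a cost budget.
    # Candidate products, satisfaction scores, and costs are hypothetical.
    candidates = {
        "basic":      {"cost": 40, "satisfaction": 30},
        "standard":   {"cost": 70, "satisfaction": 55},
        "premium":    {"cost": 120, "satisfaction": 80},
        "enterprise": {"cost": 160, "satisfaction": 95},
    }

    def select_portfolio(candidates, budget):
        """Pick products by satisfaction-per-cost ratio until the budget is exhausted."""
        portfolio, spent = [], 0
        ranked = sorted(candidates.items(),
                        key=lambda kv: kv[1]["satisfaction"] / kv[1]["cost"],
                        reverse=True)
        for name, attrs in ranked:
            if spent + attrs["cost"] <= budget:
                portfolio.append(name)
                spent += attrs["cost"]
        return portfolio, spent

    print(select_portfolio(candidates, budget=200))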

  • 120.
    Duarte, Carlos Henrique C.
    et al.
    BNDES, Brazilian Dev Bank, BRA.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology, School of Computing.
    Technology Transfer - Requirements Engineering Research to Industrial Practice An Open (Ended) Debate2015In: 2015 IEEE 23RD INTERNATIONAL REQUIREMENTS ENGINEERING CONFERENCE (RE), IEEE , 2015, p. 414-415Conference paper (Refereed)
    Abstract [en]

    Technology and knowledge have been recognized as main sources of competitive advantage of corporations, industries and nations, particularly in the software domain. They have led to the creation of local ecosystems devoted to development and transfer activities, which ensure not only personal and institutional motivation/recognition, but also social and economic gains. An open (ended) debate panel is proposed in order to develop greater awareness and seek deeper understanding of such activities from Requirements Engineering research to industrial practice. The panel involves researchers and practitioners with the perspective of eliciting: (i) experiences in knowledge and technology development and transfer; (ii) awareness and effectiveness of models and patterns; and (iii) factors for having successful collaboration between research institutions and industry. The organizers also plan to run a survey during and after the conference, summarizing their conclusions in specific post-conference reports.

  • 121.
    Duc, Anhnguyen
    et al.
    Norges Teknisk-Naturvitenskapelige Universitet, NOR.
    Jabangwe, Ronald
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Paul, Pangkaj
    Dundalk Institute of Technology, IRE.
    Abrahamsson, Pekka
    Norges Teknisk-Naturvitenskapelige Universitet, NOR.
    Security challenges in IoT development: A software engineering perspective2017In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2017, Vol. F129907Conference paper (Refereed)
    Abstract [en]

    The rapid growth of Internet-of-Things (IoT) software applications has drawn the attention of both practitioners and researchers to methodological approaches for secure IoT development. Security issues for IoT are special in that they include not only software but also hardware and network concerns. With the aim of proposing a methodological approach for secure IoT application development, we investigated the security challenges in the context of IoT development. We reviewed the literature and investigated two industry cases. The preliminary findings result in a list of 17 security challenges relating to technical, organizational and methodological perspectives. A cross-case comparison provides an initial explanation of the lower emphasis on methodological and organizational security concerns in our cases. © 2017 ACM.

  • 122.
    Dwivedula, Chaitanya
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Choday, Anusha
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Systematic Literature Review and Industrial Evaluation of Incorporating Lean Methodologies in Software Engineering2014Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context: Over the recent years, ‘Lean Software Development’ (LSD) has been emerging as a significant practice in the Software Industry. The inherent nature of ‘Lean’ to efficiently handle frequently changing customer needs by minimizing ‘Waste’ is a major success factor in practicing it in the context of ‘Software Engineering’. In simple words, Lean Software Development is the true translation of Lean Manufacturing and Lean IT principles to Software Engineering. This work presents an in-depth analysis on the implication of lean methodologies from both ‘State of Art’ and ‘State of Practice’ in the context of Software Engineering. Objectives: The prime objective of the study is to investigate what methodologies were considered & adopted under lean philosophy and to present relevant evidence on the implication of lean methodologies in reference to what defines ‘lean’ in Software Engineering. An extensive literature review was aimed to find the existing challenging factors that negatively influenced the success of software projects and the respective lean mitigation methodologies that were employed by various software organizations to appease their negative impact. Industrial interviews were conducted by interviewing lean experts, with a motive to find the current state of lean implementation in software industry. The outcomes from the systematic literature review (State of Art) and the industry (State of Practice) are comparatively analysed to explore the similarities and differences on the state of lean implication. Finally, a set of guidelines are recommended that would benefit an Industrial Practitioner/Stakeholder/Academic Researcher in practicing the appropriate lean methodology in the context of software engineering. Methods: We conducted a ‘Systematic literature review’ (SLR) by systematically analyzing relevant studies and then interviewed industrial experts to validate our findings. The systematic literature review was conducted according to the guidelines proposed by Dr. Barbara Kitchenham stated in ‘Guidelines for performing Systematic Literature Reviews’ article. The thorough review helped us in identifying various challenging factors that negatively influenced the success of software projects and the respective lean mitigation methodologies that were practiced in the context of software engineering. The associated benefits of practicing the lean methodologies are also presented. The extensive review included peer reviewed articles from electronic databases such as IEEE Explore, Inspec, Scopus and ISI. In addition to this, we conducted snowball sampling on the references of the selected articles to avoid the potential risk of losing relevant and valuable information. Also, other potential sources of information such as books, theses/dissertations, white papers and website/blog articles are included as a part of Grey Literature. In this study, the articles related to the implication of lean methodologies in the context of software engineering were considered. The review included 72 primary studies published between 1993 and 2012. The primary studies were selected based on the following criteria: If they presented the challenging factors that negatively influenced the success of software projects. If they depicted the implication of lean mitigation methodologies (Tool/ Technique/ Method/ Process/ Practice/ Principle) that appeased the negative impact of the identified challenging factors that hampered the success of software projects. 
If they depicted the implication of lean methodologies (Tool/ Technique/ Method/ Process/ Practice/ Principle) in general or for a specific development/ Management/ Maintenance improvement activities that lead to the success of software projects in the context of software engineering. If they presented the benefits of practicing lean methodologies in the context of software engineering. The study quality assessment was done based on the quality criteria defined in the ‘Quality assessment criteria checklist’. The data such as Article ID, Article Title, Literature type (Peer- reviewed, Non-peer reviewed), Context of validation of the lean methodology (Industry/Academia), Subjects considered for the study (Researchers/students, Industrial practitioners), Type of article publication (Conference/ Journal/ Books/ Thesis Reports/ Doctoral dissertations/ Other), Research method used in the study (Case Study/ Experiment/ Experience Report/ Not stated/ Secondary Data Analysis/ Literature Review), Context of conducting the research (Industry/ Academia/ Not stated/ Both), Context of validation of the study (Strong/ Medium/ Weak), Publication date & year, Source of the publication, are extracted as a part of Quantitative analysis. The secondary data analysis for both ‘State of Art’ (Systematic literature review) and ‘State of Practice’ (Industry) was carried by performing a generic data analysis designed to answer our research questions. The more specific data such as the challenging factors that negatively influenced the success of software projects, the type of lean contribution presented i.e., the methodology being a Tool, Technique, Practice, Principal, Process or a Method, along with the benefits associated on their implication that helped us to answer our research questions are extracted as a part of qualitative analysis from the selected studies. The industrial interviews were conducted by interviewing potential lean experts who had decent experience in lean software development, to find the current state of lean implication in the software industry. In the end, a comparative analysis was performed to clearly understand the state of convergence and divergence between the results from extensive literature review and the industry with respect to the implication of lean methodologies in the context of software engineering. Results: A total of 72 primary articles were selected for data extraction. 56 articles were selected from the electronic databases that clearly depicted lean implementation in the context of software engineering. 9 articles were selected by conducting snowball sampling i.e. by scrutinizing the references of the selected primary studies and finally the grey literature resulted in 7 articles. Most of the articles discussed about lean implication in the context of software engineering. The depicted lean methodologies were validated in either Industry or Academia. A few articles depicted regarding lean principles and their benefits in the context of software engineering. Most of the selected articles in our study were peer- reviewed. Peer reviewing is a process of evaluating one’s work or performance by an expert in the same field in order to maintain or enhance the quality of work or performance in the particular field. This indicates that the articles considered for data extraction have been reviewed by potential experts in the research domain. Conclusions: This study provided a deeper insight into lean implication in the context of software engineering. 
The aim of the thesis is to find the challenging factors that negatively influenced the success of software projects. A total of 54 challenges were identified from the literature review. The 72 primary articles selected from various resources yielded 53 lean methodologies. The lean methodologies were grouped into Principles, practices, tools and methods. Mapping between the identified challenges and the mitigation lean methodologies is presented. Industrial interviews were conducted to find the current state of lean implication in software engineering. A total of 30 challenges were identified from the industry. A total of 40 lean methodologies were identified from the interviews. Comparative analysis was done to find the common challenges and mitigation lean methodologies between the State of art and State of practice. Based on the analysis a set of guidelines are presented at the end of the document. The guidelines benefit an industrial practitioner in practicing the appropriate lean methodology. Keywords: Lean Methodology, Lean software development, lean software management, lean software engineering, Systematic literature review, literature review.

  • 123.
    Eada, Priyanudeep
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Experiment to evaluate an Innovative Test Framework: Automation of non-functional testing2015Independent thesis Advanced level (degree of Master (Two Years)), 80 credits / 120 HE creditsStudent thesis
    Abstract [en]

    Context. Performance testing, among other types of non-functional testing, is necessary to assess software quality. Most often, manual approach is employed to test a system for its performance. This approach has several setbacks. The existing body of knowledge lacks empirical evidence on automation of non-functional testing and is largely focused on functional testing.

    Objectives. The objective of the present study is to evaluate a test framework that automates performance testing. A large-scale distributed project is selected as the context to achieve this objective. The rationale for choosing such a project is that the proposed test framework was designed with an intention to adapt and tailor according to any project’s characteristics.

    Methods. An experiment was conducted with 15 participants at Ericsson R&D department, India to evaluate an automated test framework. Repeated measures design with counter balancing method was used to understand the accuracy and time taken while using the test framework. To assess the ease-of-use of the proposed framework, a questionnaire was distributed among the experiment participants. Statistical techniques were used to accept or reject the hypothesis. The data analysis was performed using Microsoft Excel.

    Results. It is observed that the automated test framework is superior to the traditional manual approach. There is a significant reduction in the average time taken to run a test case. Further, the number of errors resulting in a typical testing process is minimized. Also, the time spent by a tester during the actual test is phenomenally reduced while using the automated approach. Finally, as perceived by software testers, the automated approach is easier to use when compared to the manual test approach.

    Conclusions. It can be concluded that automation of non-functional testing will result in overall reduction in project costs and improves quality of software tested. This will address important performance aspects such as system availability, durability and uptime. It was observed that it is not sufficient if the software meets the functional requirements, but is also necessary to conform to the non-functional requirements.
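
    As a hedged illustration of how such a paired comparison of task times could be analysed (the thesis used a repeated measures design with counterbalancing and analysed the data in Microsoft Excel; the numbers below are invented), a paired t-test over per-tester timings might look as follows.

    # Sketch: paired comparison of per-tester task times (minutes), manual vs. automated.
    # The measurements are invented; 15 values correspond to 15 hypothetical participants.
    from scipy import stats

    manual    = [38, 42, 35, 40, 44, 39, 41, 37, 43, 36, 40, 38, 42, 39, 41]
    automated = [22, 25, 20, 24, 27, 23, 26, 21, 25, 22, 24, 23, 26, 22, 25]

    t_stat, p_value = stats.ttest_rel(manual, automated)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Reject H0: mean task times differ between the two approaches.")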

  • 124.
    Einarsson, Joel
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Winger-Lang, Johannes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    En jämförande prestandastudie mellan JSON och XML2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [sv]

    When developing a new product or service, one is often faced with the choice of data format. The most widely used today are JSON and XML. The formats look very different and offer different features, but they are frequently used for the same purpose. There are many opinions about which one is actually fastest, but not as many test results. This work is meant to fill that gap. The programming languages used are Python and JavaScript, both of which are popular on the web. Experiments test how quickly JSON and XML can be encoded and decoded. In the tests, XML and JSON are converted to a suitable internal data structure: in JavaScript an Object, in Python a dictionary. Converting from the data structure back to XML and JSON is also tested. Both large and small data volumes are tested. The results unanimously show that JSON is significantly faster than XML, by up to a factor of 100. For JavaScript, this holds in the three major browsers tested: Google Chrome, Mozilla Firefox and Internet Explorer. It also holds for Python. The result is the same for small data volumes as for large ones.
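
    A minimal Python sketch of the kind of decoding benchmark the thesis describes could look as follows, using only the standard library json and xml.etree modules; the payload shape, its size and the repetition count are arbitrary choices, not the original test setup.

    # Sketch: time JSON vs. XML decoding of an equivalent payload in Python.
    # The payload and repetition count are arbitrary illustrative choices.
    import json
    import timeit
    import xml.etree.ElementTree as ET

    records = [{"id": i, "name": f"item-{i}"} for i in range(1000)]
    json_doc = json.dumps(records)
    xml_doc = "<items>" + "".join(
        f'<item id="{r["id"]}"><name>{r["name"]}</name></item>' for r in records
    ) + "</items>"

    json_time = timeit.timeit(lambda: json.loads(json_doc), number=200)
    xml_time = timeit.timeit(lambda: ET.fromstring(xml_doc), number=200)

    print(f"json.loads    : {json_time:.3f} s for 200 runs")
    print(f"ET.fromstring : {xml_time:.3f} s for 200 runs")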

  • 125.
    Eivazzadeh, Shahryar
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Anderberg, Peter
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Larsson, Tobias
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mechanical Engineering.
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. University of Applied Sciences and Arts Northwestern Switzerland.
    Berglund, Johan
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Evaluating Health Information Systems Using Ontologies2016In: JMIR Medical Informatics, ISSN 2291-9694, Vol. 4, no 2, article id e20Article in journal (Refereed)
    Abstract [en]

    Background: There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems.

    Objectives: The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems—whether similar or heterogeneous—by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework.

    Methods: On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and deployed across European Union countries.

    Results: The relevance of the evaluation aspects created by the UVON method for the FI-STAR project was validated by the corresponding stakeholders of each case. These evaluation aspects were extracted from a UVON-generated ontology structure that reflects both the internally declared required quality attributes in the 7 eHealth applications of the FI-STAR project and the evaluation aspects recommended by the Model for ASsessment of Telemedicine applications (MAST) evaluation framework. The extracted evaluation aspects were used to create questionnaires (for the corresponding patients and health professionals) to evaluate each individual case and the whole of the FI-STAR project.

    Conclusions: The UVON method can provide a relevant set of evaluation aspects for a heterogeneous set of health information systems by organizing, unifying, and aggregating the quality attributes through ontological structures. Those quality attributes can be either suggested by evaluation models or elicited from the stakeholders of those systems in the form of system requirements. The method continues to be systematic, context sensitive, and relevant across a heterogeneous set of health information systems.
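
    To make the tree-style organization concrete, the following is a minimal sketch, not the UVON implementation: case-specific quality attributes (invented examples) are grouped under broader categories, and a bounded number of evaluation aspects is then extracted, mirroring the idea of limiting how many aspects are measured.

        # Illustrative sketch only; the attribute names, categories, and extraction
        # policy are invented and do not reproduce the UVON method's ontology.
        from collections import defaultdict

        attribute_to_category = {
            "login response time": "performance",
            "chart rendering speed": "performance",
            "data encryption at rest": "security",
            "audit logging": "security",
            "screen-reader support": "usability",
        }

        def build_tree(mapping):
            # Group the case-specific attributes under broader categories.
            tree = defaultdict(set)
            for attribute, category in mapping.items():
                tree[category].add(attribute)
            return tree

        def extract_aspects(tree, max_aspects):
            # Keep at most max_aspects broad categories to keep evaluation practical.
            return sorted(tree)[:max_aspects]

        tree = build_tree(attribute_to_category)
        print(extract_aspects(tree, max_aspects=2))  # ['performance', 'security']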

  • 126.
    Elmgren, Gustav
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Isaksson, Conny
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A ticket to blockchains2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
  • 127. Engström, Emelie
    et al.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mapping software testing practice with software testing research: SERP-test taxonomy2015In: 2015 IEEE 8th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2015 - Proceedings, IEEE Computer Society, 2015, article id 7107470Conference paper (Refereed)
    Abstract [en]

    There is a gap between software testing research and practice. One reason is the discrepancy between how testing research is reported and how testing challenges are perceived in industry. We propose the SERP-test taxonomy to structure information on testing interventions and practical testing challenges from a common perspective and thus bridge the communication gap. To develop the taxonomy we follow a systematic incremental approach. The SERP-test taxonomy may be used by both researchers and practitioners to classify and search for testing challenges or interventions. The SERP-test taxonomy also supports comparison of testing interventions by providing an instrument for assessing the distance between them and thus identifying relevant points of comparison. © 2015 IEEE.

  • 128.
    Engström, Emelie
    et al.
    Lund University, SWE.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Bjarnason, Elizabeth
    Lund University, SWE.
    SERP-test: a taxonomy for supporting industry-academia communication2017In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 25, no 4, p. 1269-1305Article in journal (Refereed)
    Abstract [en]

    This paper presents the construction and evaluation of SERP-test, a taxonomy aimed at improving communication between researchers and practitioners in the area of software testing. SERP-test can be utilized for direct communication in industry–academia collaborations. It may also facilitate indirect communication between practitioners adopting software engineering research and researchers who are striving for industry relevance. SERP-test was constructed through a systematic and goal-oriented approach which included literature reviews and interviews with practitioners and researchers. SERP-test was evaluated through an online survey and by utilizing it in an industry–academia collaboration project. SERP-test comprises four facets along which both research contributions and practical challenges may be classified: Intervention, Scope, Effect target and Context constraints. This paper explains the available categories for each of these facets (i.e., their definitions and rationales) and presents examples of categorized entities. Several tasks may benefit from SERP-test, such as formulating research goals from a problem perspective, describing practical challenges in a researchable fashion, analyzing primary studies in a literature review, or identifying relevant points of comparison and generalization of research.
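
    As a rough illustration of classifying an entity along the four facets named above, a record might look like the following sketch; the facet names come from the abstract, while the concrete category values are invented examples rather than entries from the paper.

        # Sketch: describing a testing challenge/intervention along the SERP-test facets.
        # The facet names are from the abstract; the values are invented examples.
        from dataclasses import dataclass

        @dataclass
        class SerpTestClassification:
            intervention: str         # e.g. a test technique or tool
            scope: str                # e.g. which testing activity or level
            effect_target: str        # e.g. the quality or cost aspect to improve
            context_constraints: str  # e.g. constraints of the industrial setting

        entry = SerpTestClassification(
            intervention="history-based test case prioritization",
            scope="system-level regression testing",
            effect_target="shorter feedback time",
            context_constraints="nightly builds of a large embedded code base",
        )
        print(entry)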

  • 129.
    Enoiu, Eduard Paul
    et al.
    Mälardalens högskola, Software Testing Laboratory, Västerås, Sweden.
    Sundmark, Daniel
    Swedish Institute of Computer Science, Kista, Sweden .
    Čaušević, Adnan
    Mälardalens högskola, Västerås, Sweden.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Pettersson, Paul
    Mälardalens högskola, Västerås, Sweden.
    Mutation-based test generation for PLC embedded software using model checking2016In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Wotawa F.,Kushik N.,Nica M., 2016, Vol. 9976, p. 155-171Conference paper (Refereed)
    Abstract [en]

    Testing is an important activity in engineering of industrial embedded software. In certain application domains (e.g., railway industry) engineering software is certified according to safety standards that require extensive software testing procedures to be applied for the development of reliable systems. Mutation analysis is a technique for creating faulty versions of a software for the purpose of examining the fault detection ability of a test suite. Mutation analysis has been used for evaluating existing test suites, but also for generating test suites that detect injected faults (i.e., mutation testing). To support developers in software testing, we propose a technique for producing test cases using an automated test generation approach that operates using mutation testing for software written in IEC 61131-3 language, a programming standard for safety-critical embedded software, commonly used for Programmable Logic Controllers (PLCs). This approach uses the Uppaal model checker and is based on a combined model that contains all the mutants and the original program. We applied this approach in a tool for testing industrial PLC programs and evaluated it in terms of cost and fault detection. For realistic validation we collected industrial experimental evidence on how mutation testing compares with manual testing as well as automated decision-coverage adequate test generation. In the evaluation, we used manually seeded faults provided by four industrial engineers. The results show that even if mutation-based test generation achieves better fault detection than automated decision coverage-based test generation, these mutation-adequate test suites are not better at detecting faults than manual test suites. However, the mutation-based test suites are significantly less costly to create, in terms of testing time, than manually created test suites. Our results suggest that the fault detection scores could be improved by considering some new and improved mutation operators (e.g., Feedback Loop Insertion Operator (FIO)) for PLC programs as well as higher-order mutations.

  • 130.
    Envall, Nicklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Is Gamification Useful for Increasing Customer Feedback?: A case study based on people’s perception of gamified elements.2018Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
  • 131.
    Fagerlund, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    GDPR och Framtidssäkrade Webbapplikationer2018Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
  • 132.
    Falessi, Davide
    et al.
    CalPoly, USA.
    Juristo, Natalia
    Universidad Politecnica de Madrid, ESP.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Turhan, Burak
    Oulun Yliopisto, FIN.
    Münch, Jürgen
    Helsingin Yliopisto, FIN.
    Jedlitschka, Andreas
    Fraunhofer-Institut fur Experimentelles Software Engineering, DEU.
    Oivo, Markku
    Oulun Yliopisto, FIN.
    Empirical software engineering experts on the use of students and professionals in experiments2018In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 23, no 1, p. 452-489Article in journal (Refereed)
    Abstract [en]

    [Context] Controlled experiments are an important empirical method to generate and validate theories. Many software engineering experiments are conducted with students. It is often claimed that the use of students as participants in experiments comes at the cost of low external validity while using professionals does not. [Objective] We believe a deeper understanding is needed of the external validity of software engineering experiments conducted with students or with professionals. We aim to gain insight about the pros and cons of using students and professionals in experiments. [Method] We performed an unconventional focus-group approach and a follow-up survey. First, during a session at ISERN 2014, 65 empirical researchers, including the seven authors, argued and discussed the use of students in experiments with an open mind. Afterwards, we revisited the topic and elicited experts’ opinions to foster discussions. Then we derived 14 statements and asked the ISERN attendees, excluding the authors, to provide their level of agreement with the statements. Finally, we analyzed the researchers’ opinions and used the findings to further discuss the statements. [Results] Our survey results showed that, in general, the respondents disagreed with us about the drawbacks of professionals. We, on the contrary, strongly believe that no population (students, professionals, or others) can be deemed better than another in absolute terms. [Conclusion] Using students as participants remains a valid simplification of reality needed in laboratory contexts. It is an effective way to advance software engineering theories and technologies but, like any other aspect of study settings, should be carefully considered during the design, execution, interpretation, and reporting of an experiment. The key is to understand which portion of the developer population is being represented by the participants in an experiment. Thus, a proposal for describing experimental participants is put forward.

  • 133.
    Felderer, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gurov, Dilian
    KTH, SWE.
    Huisman, Marieke
    University of Twente, NLD.
    Lisper, Björn
    Mälardalens högskola, SWE.
    Schlick, Rupert
    Austrian Institute of Technology, AUT.
    Formal methods in industrial practice: Bridging the gap (track summary)2018In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Margaria T.,Steffen B., Springer Verlag , 2018, Vol. 11247, p. 77-81Conference paper (Refereed)
    Abstract [en]

    For many decades, formal methods have been considered the way forward in helping the software industry build more reliable and trustworthy software. However, despite this strong belief and many individual success stories, no real change in industrial software development seems to have happened. In fact, the software industry is moving forward fast on its own, and the gap between what formal methods can achieve and daily software development practice does not seem to be getting smaller (and might even be growing). © Springer Nature Switzerland AG 2018.

  • 134.
    Felderer, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology.
    Herrmann, Andrea
    Herrmann & Ehrlich, DEU.
    Comprehensibility of system models during test design: A controlled experiment comparing UML activity diagrams and state machines2018In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, p. 1-23Article in journal (Refereed)
    Abstract [en]

    UML activity diagrams and state machines are both used for modeling system behavior from the user perspective and are frequently the basis for deriving system test cases. In practice, system test cases are often derived manually from UML activity diagrams or state machines. For this task, comprehensibility of the respective models is essential and a relevant question for practice, to support model selection and design as well as subsequent test derivation. Therefore, the objective of this paper is to compare the comprehensibility of UML activity diagrams and state machines during manual test case derivation. We investigate the comprehensibility of UML activity diagrams and state machines in a controlled student experiment. Three measures of comprehensibility have been investigated: (1) the self-assessed comprehensibility, (2) the actual comprehensibility measured by the correctness of answers to comprehensibility questions, and (3) the number of errors made during test case derivation. The experiment was performed and internally replicated with overall 84 participants divided into three groups at two institutions. Our experiment indicates that activity diagrams are more comprehensible but also more error-prone with regard to manual test case derivation, and we discuss how these results can improve system modeling and test case design.

  • 135.
    Felderer, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Holmström Olsson, Helena
    Malmö universtitet, SWE.
    Rabiser, Rick
    Johannes Kepler Universitat, AUT.
    Introduction to the special issue on quality engineering and management of software-intensive systems2019In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 149, p. 533-534Article in journal (Refereed)
  • 136.
    Felderer, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Marculescu, Bogdan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gomes De Oliveira Neto, Francisco
    Chalmers Univ Technol, SWE.
    Feldt, Robert
    Chalmers, SWE.
    Torkar, Richard
    Chalmers, SWE.
    A testability analysis framework for non-functional properties2018In: Proceedings - 2018 IEEE 11th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2018, Institute of Electrical and Electronics Engineers Inc. , 2018, p. 54-58Conference paper (Refereed)
    Abstract [en]

    This paper presents the background, the basic steps, and an example of a testability analysis framework for non-functional properties.

  • 137.
    Felderer, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Pfahl, Dietmar
    University of Tartu, EST.
    Special Section: Automation and Analytics for Greener Software Engineering2018In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 95, p. 106-107Article in journal (Other academic)
  • 138.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Do System Test Cases Grow Old?2014Conference paper (Refereed)
    Abstract [en]

    Companies increasingly use either manual or automated system testing to ensure the quality of their software products. As a system evolves and is extended with new features the test suite also typically grows as new test cases are added. To ensure software quality throughout this process the test suite is continuously executed, often on a daily basis. It seems likely that newly added tests would be more likely to fail than older tests but this has not been investigated in any detail on large-scale, industrial software systems. Also it is not clear which methods should be used to conduct such an analysis. This paper proposes three main concepts that can be used to investigate aging effects in the use and failure behavior of system test cases: test case activation curves, test case hazard curves, and test case half-life. To evaluate these concepts and the type of analysis they enable we apply them on an industrial software system containing more than one million lines of code. The data sets come from a total of 1,620 system test cases executed a total of more than half a million times over a time period of two and a half years. For the investigated system we find that system test cases stay active as they age but really do grow old; they go through an infant mortality phase with higher failure rates which then decline over time. The test case half-life is between 5 and 12 months for the two studied data sets.
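
    Under the simplifying assumption that each test execution is recorded as (age of the test case at execution, pass/fail), the flavor of such an aging analysis can be sketched as follows in Python; the paper's exact definitions of activation curves, hazard curves, and half-life are not reproduced here.

        # Sketch: failure rate of test cases per age bucket (months since the test was added).
        # The execution records are invented; a real analysis would use the test history.
        from collections import defaultdict

        executions = [(0, True), (0, False), (1, True), (3, False), (6, False), (12, False)]

        runs, failures = defaultdict(int), defaultdict(int)
        for age, failed in executions:
            runs[age] += 1
            failures[age] += int(failed)

        for age in sorted(runs):
            print(f"age {age:>2} months: failure rate {failures[age] / runs[age]:.2f}")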

  • 139.
    Feldt, Robert
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Poulding, Simon M.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Clark, David
    UCL, GBR.
    Yoo, Shin
    Korea Adv Inst Sci & Technol, KOR.
    Test Set Diameter: Quantifying the Diversity of Sets of Test Cases2016In: Proceedings - 2016 IEEE International Conference on Software Testing, Verification and Validation, ICST, IEEE Computer Society, 2016, p. 223-233Conference paper (Refereed)
    Abstract [en]

    A common and natural intuition among software testers is that test cases need to differ if a software system is to be tested properly and its quality ensured. Consequently, much research has gone into formulating distance measures for how test cases, their inputs and/or their outputs differ. However, common to these proposals is that they are data type specific and/or calculate the diversity only between pairs of test inputs, traces or outputs. We propose a new metric to measure the diversity of sets of tests: the test set diameter (TSDm). It extends our earlier, pairwise test diversity metrics based on recent advances in information theory regarding the calculation of the normalized compression distance (NCD) for multisets. A key advantage is that TSDm is a universal measure of diversity and so can be applied to any test set regardless of data type of the test inputs (and, moreover, to other test-related data such as execution traces). But this universality comes at the cost of greater computational effort compared to competing approaches. Our experiments on four different systems show that the test set diameter can help select test sets with higher structural and fault coverage than random selection even when only applied to test inputs. This can enable early test design and selection, prior to even having a software system to test, and complement other types of test automation and analysis. We argue that this quantification of test set diversity creates a number of opportunities to better understand software quality and provides practical ways to increase it.
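
    A rough sketch of a compression-based diversity score in this spirit is shown below, using zlib as the compressor; it follows one common formulation of multiset NCD and may differ in detail from the TSDm metric defined in the paper.

        # Sketch of a compression-based diversity measure over a set of test inputs.
        # Uses zlib; the exact TSDm formulation in the paper may differ.
        import zlib

        def c(data: bytes) -> int:
            return len(zlib.compress(data, 9))

        def diversity(inputs):
            whole = c(b"".join(inputs))
            singles = [c(x) for x in inputs]
            leave_one_out = [c(b"".join(inputs[:i] + inputs[i + 1:])) for i in range(len(inputs))]
            return (whole - min(singles)) / max(leave_one_out)

        similar = [b"abcabcabc", b"abcabcabd", b"abcabcabe"]
        diverse = [b"abcabcabc", b"1234567890", b"zzzzyyyyxx"]
        # A more diverse set should typically yield a higher score.
        print(diversity(similar), diversity(diverse))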

  • 140.
    Feldt, Robert
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Zimmermann, T.
    Microsoft Research, Redmond, WA, USA.
    Bergersen, G. R.
    University of Oslo, NOR.
    Falessi, D.
    California Polytechnic State University, USA.
    Jedlitschka, A.
    Fraunhofer Institute for Experimental Software Engineering, DEU.
    Juristo, N.
    Universidad Politécnica de Madrid, ESP.
    Münch, J.
    Reutlingen University, DEU.
    Oivo, M.
    University of Oulu, FIN.
    Runeson, P.
    Lunds universitet, SWE.
    Shepperd, M.
    Brunel University, GBR.
    Sjøberg, D. I. K.
    University of Oslo, NOR.
    Turhan, B.
    Monash University, AUS.
    Four commentaries on the use of students and professionals in empirical software engineering experiments2018In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 23, no 6, p. 3801-3820Article in journal (Other academic)
  • 141.
    Felizardo, Katia
    et al.
    Federal University of Technology, BRA.
    De Souza, Erica
    Federal University of Technology, BRA.
    Falbo, Ricardo
    Federal University of Espírito Santo, BRA.
    Vijaykumar, Nandamudi
    Instituto Nacional de Pesquisas Espaciais, BRA.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nakagawa, Elisa Yumi
    Universidade de Sao Paulo, BRA.
    Defining protocols of systematic literature reviews in software engineering: A survey2017In: Proceedings - 43rd Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2017 / [ed] Felderer, M; Olsson, HH; Skavhaug, A, Institute of Electrical and Electronics Engineers Inc. , 2017, p. 202-209, article id 8051349Conference paper (Refereed)
    Abstract [en]

    Context: Despite being defined during the first phase of the Systematic Literature Review (SLR) process, the protocol is usually refined when other phases are performed. Several researchers have reported their experiences in applying SLRs in Software Engineering (SE); however, there is still a lack of studies discussing the iterative nature of the protocol definition, especially how it should be perceived by researchers conducting SLRs. Objective: The main goal of this study is to perform a survey aiming to identify: (i) the perception of SE researchers related to protocol definition; (ii) the activities of the review process that typically lead to protocol refinements; and (iii) which protocol items are refined in those activities. Method: A survey was performed with 53 SE researchers. Results: Our results show that: (i) protocol definition and pilot test are the two activities that most often lead to further protocol refinements; (ii) the data extraction form is the most modified item. Besides that, this study confirmed the iterative nature of the protocol definition. Conclusions: An iterative pilot test can facilitate refinements in the protocol. © 2017 IEEE.

  • 142.
    Femmer, Henning
    et al.
    Technical University Munich, GER.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Which requirements artifact quality defects are automatically detectable?: A case study2017In: Proceedings - 2017 IEEE 25th International Requirements Engineering Conference Workshops, REW 2017, IEEE, 2017, p. 400-406, article id 8054884Conference paper (Refereed)
    Abstract [en]

    [Context:] The quality of requirements engineering artifacts, e.g. requirements specifications, is acknowledged to be an important success factor for projects. Therefore, many companies spend significant amounts of money to control the quality of their RE artifacts. To reduce spending and improve the RE artifact quality, methods were proposed that combine manual quality control, i.e. reviews, with automated approaches. [Problem:] So far, we have seen various approaches to automatically detect certain aspects in RE artifacts. However, we still lack an overview of what can and cannot be automatically detected. [Approach:] Starting from an industry guideline for RE artifacts, we classify 166 existing rules for RE artifacts along various categories to discuss the share and the characteristics of those rules that can be automated. For those rules that cannot be automated, we discuss the main reasons. [Contribution:] We estimate that 53% of the 166 rules can be checked automatically, either perfectly or with a good heuristic. Most rules need only simple techniques for checking. The main reason why some rules resist automation is imprecise definition. [Impact:] By giving first estimates and analyses of automatically detectable and not automatically detectable rule violations, we aim to provide an overview of the potential of automated methods in requirements quality control.
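
    Since most automatable rules need only simple techniques, one such check can be sketched with a few lines of Python; the rule ("avoid vague wording") and the word list below are invented for illustration and are not taken from the industry guideline analyzed in the paper.

        # Sketch: a simple lexical heuristic for one requirements-quality rule
        # ("avoid vague wording"). Rule and word list are illustrative only.
        import re

        VAGUE_TERMS = {"appropriate", "fast", "user-friendly", "flexible", "etc"}

        def check_vague_wording(requirement: str) -> list[str]:
            words = set(re.findall(r"[a-z-]+", requirement.lower()))
            return sorted(words & VAGUE_TERMS)

        req = "The system shall respond fast and provide an appropriate error message."
        print(check_vague_wording(req))  # ['appropriate', 'fast']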

  • 143. Feyh, Markus
    et al.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Lean Software Development Measures and Indicators: A Systematic Mapping Study2013Conference paper (Refereed)
    Abstract [en]

    Background: Lean Software Development (LSD) aims for improvement, yet this improvement requires measures to identify whether a difference has been achieved and to provide decision support for further improvement. Objective: This study identifies measures and indicators proposed in the literature on LSD and then structures them according to ISO/IEC 15939, allowing for comparability due to the use of a standard. Method: Systematic mapping is the research methodology. Result: The published literature on LSD measures has significantly increased since 2010. The two predominant study types are evaluation research and experience reports. 22 base measures, 13 derived measures, and 14 indicators were identified. Conclusion: Gaps exist with respect to LSD principles, in particular: deferring commitment, respecting people, and knowledge creation. The principle of delivering fast is well supported.
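
    The ISO/IEC 15939 layering of base measures, derived measures, and indicators can be illustrated roughly as follows; the example measures are invented and are not among the 22 base measures, 13 derived measures, and 14 indicators identified in the study.

        # Sketch of the ISO/IEC 15939 measurement layering with invented example measures.
        from datetime import date

        # Base measures: raw attributes captured per work item (start and finish dates).
        work_items = [
            {"started": date(2024, 1, 2), "finished": date(2024, 1, 9)},
            {"started": date(2024, 1, 3), "finished": date(2024, 1, 20)},
        ]

        # Derived measure: lead time in days per work item.
        lead_times = [(w["finished"] - w["started"]).days for w in work_items]

        # Indicator: average lead time compared against a decision criterion.
        average_lead_time = sum(lead_times) / len(lead_times)
        print("average lead time:", average_lead_time, "days;",
              "OK" if average_lead_time <= 10 else "investigate flow")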

  • 144.
    Filho, Juarez
    et al.
    Universidade Federal do Ceara, BRA.
    Rocha, Lincoln Souza
    Universidade Federal do Ceara, BRA.
    Andrade, Rossana
    Universidade Federal do Ceara, BRA.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Preventing erosion in exception handling design using static-architecture conformance checking2017In: Lecture Notes in Computer Science / [ed] Rogerio de Lemos R.,Lopes A., Springer Verlag , 2017, Vol. 10475, p. 67-83Conference paper (Refereed)
    Abstract [en]

    Exception handling is a common error recovery technique employed to improve software robustness. However, studies have reported that exception handling is commonly neglected by developers and is the least understood and documented part of a software project. The lack of documentation and the difficulty in understanding the exception handling design can lead developers to violate important design decisions, triggering an erosion process in the exception handling design. Architectural conformance checking provides means to control architectural erosion by periodically checking whether the actual architecture is consistent with the planned one. Nevertheless, available approaches do not provide proper support for exception handling conformance checking. To fill this gap, we propose ArCatch: an architectural conformance checking solution to deal with exception handling design erosion. ArCatch provides: (i) a declarative language for expressing design constraints regarding exception handling; and (ii) a design rule checker to automatically verify exception handling conformance. To evaluate the usefulness and effectiveness of our approach, we conducted a case study in which we evaluated an evolution scenario composed of 10 versions of an existing web-based Java system. Each version was checked against the same set of exception handling design rules. Based on the results and the feedback given by the system’s architect, ArCatch proved useful and effective in identifying existing exception handling erosion problems and locating their causes in the source code. © 2017, Springer International Publishing AG.
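
    ArCatch's declarative rule language is not reproduced here; as a generic illustration of what a static conformance check for exception handling can look like, the sketch below flags empty Java catch blocks (swallowed exceptions) in a given module. The rule, paths, and regular expression are invented for illustration.

        # Generic illustration of a static exception-handling conformance check.
        # This is NOT ArCatch; the rule ("no empty catch blocks in a module") is invented.
        import pathlib
        import re

        EMPTY_CATCH = re.compile(r"catch\s*\([^)]*\)\s*\{\s*\}")

        def find_swallowed_exceptions(module_dir: str) -> list[str]:
            violations = []
            for java_file in pathlib.Path(module_dir).rglob("*.java"):
                source = java_file.read_text(encoding="utf-8", errors="ignore")
                if EMPTY_CATCH.search(source):
                    violations.append(str(java_file))
            return violations

        # Hypothetical usage: print(find_swallowed_exceptions("src/payment"))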

  • 145.
    Flygare, Robin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Holmqvist, Anthon
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Performance characteristics between monolithic and microservice-based systems2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    A promising new technology for addressing scalability and availability problems is the microservice architecture. The problem with this architecture is that no substantial study clearly establishes how its performance differs from that of the monolithic architecture.

    Our thesis aims to provide a more conclusive answer to how the microservice architecture differs performance-wise from the monolithic architecture.

    In this study, we conducted several experiments on a self-developed microservice system and a monolithic system. We used JMeter to simulate users, measured latency and successful throughput for the tests, and monitored RAM and CPU usage with Datadog.

    The results show that the microservice architecture can be more beneficial than the monolithic architecture. Docker was also shown to have no negative impact on performance, and a computer cluster can improve performance.

    We present a conclusive answer that microservices can, in some cases, perform better than a monolithic architecture.

  • 146.
    Forsberg, Fredrik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Alvarez Gonzalez, Pierre
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unsupervised Machine Learning: An Investigation of Clustering Algorithms on a Small Dataset2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context: With the rising popularity of machine learning, examining its limitations is valuable for understanding how widely it is applicable. Is it possible to apply clustering to a small dataset?

    Objectives: This thesis consists of a literature study, a survey, and an experiment. It investigates how two different unsupervised machine learning algorithms, DBSCAN (Density-Based Spatial Clustering of Applications with Noise) and K-means, perform on a dataset gathered from a survey.

    Methods: A survey was conducted to establish statistically what most respondents chose, and clustering was then applied to the survey data to check whether the clusters show the same patterns as the respondents' choices.

    Results: It was possible to identify patterns with clustering algorithms using a small dataset. The literature study shows examples where both algorithms have been used successfully.

    Conclusions: It is possible to see patterns using DBSCAN and K-means on a small dataset. The size of the dataset is not necessarily the only aspect to take into consideration; feature and parameter selection are just as important, since the algorithms need to be tuned and customized to the data.
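
    For concreteness, clustering a small survey-like dataset with both algorithms might look roughly like the sketch below (scikit-learn is assumed to be available); the data points and parameter values are invented rather than taken from the thesis, which is exactly why eps, min_samples, and the number of clusters would need tuning in practice.

        # Sketch: DBSCAN and K-means on a small, survey-like dataset with scikit-learn.
        # Data and parameters are invented; tuning matters as much as dataset size.
        import numpy as np
        from sklearn.cluster import DBSCAN, KMeans

        X = np.array([[1, 1], [1, 2], [2, 1], [2, 2],
                      [8, 8], [8, 9], [9, 8], [9, 9],
                      [5, 1], [5, 2], [1, 8], [2, 9]], dtype=float)

        dbscan_labels = DBSCAN(eps=1.6, min_samples=2).fit_predict(X)
        kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
        print("DBSCAN:", dbscan_labels)
        print("K-means:", kmeans_labels)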

  • 147.
    Fotrousi, Farnaz
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Quality-Impact Assessment of Software Systems2016In: Proceedings - 2016 IEEE 24th International Requirements Engineering Conference, RE 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 427-431, article id 7765560Conference paper (Refereed)
    Abstract [en]

    Runtime monitoring and assessment of software products, features, and requirements allow product managers and requirements engineers to verify the implemented features or requirements and to validate user acceptance. Gaining insight into software quality and the impact of that quality on users facilitates interpreting quality against users' acceptance and vice versa. The insight also expedites root cause analysis and fast evolution in cases where the health and sustainability of the software are threatened. Several studies have proposed automated monitoring and assessment solutions; however, none of them introduces a solution for a joint assessment of software quality and the impact of quality on users. In this research, we study the relation between software quality and the impact of quality on the Quality of Experience (QoE) of users to support the assessment of software products, features, and requirements. We propose a Quality-Impact assessment method based on a joint analysis of software quality and user feedback. As an application of the proposed method in requirements engineering, the joint analysis guides verification and validation of functional and quality requirements as well as capturing new requirements. The study follows a design science approach to design the Quality-Impact method artifact. The method has already been designed and validated in a first design cycle; subsequent design cycles will clarify problems of the initial design and refine and validate the proposed method. This paper presents the results obtained so far and outlines future studies in the follow-up of the Ph.D. research.

  • 148.
    Fotrousi, Farnaz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    QoE probe: A requirement-monitoring tool2016In: CEUR Workshop Proceedings / [ed] Forbrig P.,Borg M.,Herrmann A.,Unterkalmsteiner M.,Bjarnason E.,Daun M.,Franch X.,Kirikova M.,Palomares C.,Espana S.,Paech B.,Opdahl A.L.,Tenbergen B.,Dieste O.,Felderer M.,Gay G.,Horkoff J.,Seffah A.,Morandini M.,Petersen K., CEUR-WS , 2016, Vol. 1564Conference paper (Refereed)
    Abstract [en]

    Runtime requirement monitoring is used for verification and validation of implemented requirements. To monitor requirements at runtime, we propose the "QoE probe" tool, a mobile application integrated through an API, to collect usage logs as well as users’ Quality of Experience (QoE) in the form of user feedback. The analysis of the collected data guides requirement monitoring of functional and non-functional requirements as well as capturing new requirements.
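
    A minimal sketch of the kind of record such a probe could emit, pairing a usage-log event with a QoE rating, is shown below; the field names and payload structure are hypothetical and are not the QoE probe's actual API.

        # Hypothetical sketch of a combined usage-log and QoE feedback record.
        # Field names and structure are invented, not the tool's actual API.
        import json
        from datetime import datetime, timezone

        def build_probe_record(feature_id: str, event: str, qoe_rating: int, comment: str = "") -> str:
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "feature_id": feature_id,   # which implemented requirement/feature was used
                "event": event,             # usage-log entry, e.g. a tapped UI action
                "qoe_rating": qoe_rating,   # user-reported Quality of Experience, 1-5
                "comment": comment,         # free-text user feedback
            }
            return json.dumps(record)

        print(build_probe_record("REQ-42", "export_report_tapped", qoe_rating=4, comment="a bit slow"))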

  • 149.
    Fotrousi, Farnaz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software analytics for planning product evolution2016In: Lecture Notes in Business Information Processing, Springer, 2016, Vol. 240, p. 16-31Conference paper (Refereed)
    Abstract [en]

    Evolution of a software product is inevitable as the product's context changes, and the product gradually becomes less useful if it is not adapted. Planning is the basis for evolving a software product. The product manager, who carries the responsibility for planning, requires, but does not always have access to, high-quality information for making the best possible planning decisions. The current study aims to understand whether and when analytics are valuable for product planning and how they can be translated into a software product plan. The study was designed as an interview-based survey comprising 17 in-depth semi-structured interviews with product managers. Based on the results of a qualitative analysis of the interviews, we defined an analytics-based model. The model shows that analytics have the potential to support the interpretation of product goals, while being constrained by both product characteristics and product goals. The model also indicates how analytics can be used to provide good support for planning product evolution.

  • 150.
    Fotrousi, Farnaz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Quality Requirements Elicitation based on Inquiry of Quality-Impact Relationships2014In: Proceedings of International Requirements Engineering, IEEE , 2014, p. 303-312Conference paper (Refereed)
    Abstract [en]

    Quality requirements, an important class of non-functional requirements, are inherently difficult to elicit. Particularly challenging is the definition of good-enough quality. The problem cannot be avoided, though, because hitting the right quality level is critical. Too little quality leads to churn for the software product. Excessive quality generates unnecessary cost and drains the resources of the operating platform. To address this problem, we propose to elicit the specific relationships between software quality levels and their impacts for given quality attributes and stakeholders. An understanding of each such relationship can then be used to specify the right level of quality by deciding about acceptable impacts. The quality-impact relationships can be used to design and dimension a software system appropriately and, in a second step, to develop service level agreements that allow re-use of the obtained knowledge of good-enough quality. This paper describes an approach to elicit such quality-impact relationships and to use them for specifying quality requirements. The approach has been applied with user representatives in requirements workshops and used for determining Quality of Service (QoS) requirements based on the involved users’ Quality of Experience (QoE). The paper describes the approach in detail and reports early experiences from applying it. Index Terms: Requirement elicitation, quality attributes, non-functional requirements, quality of experience (QoE), quality of service (QoS).
