Results 51–100 of 170
  • 51. Henningsson, Kennet & Wohlin, Claes. Assuring Fault Classification Agreement – An Empirical Evaluation (2003). Conference paper (Refereed).
  • 52. Henningsson, Kennet & Wohlin, Claes. Monitoring Fault Classification Agreement in an Industrial Context (2005). Conference paper (Refereed).
    Abstract [en]

    Based on prior investigations and the request from a collaborative research partner, UIQ Technology, an investigation to develop an improved and more informative fault classification scheme was launched. The study investigates the level of agreement between classifiers in an industrial setting, a prerequisite for using a fault classification. The method used is an experimental approach performed in an industrial setting, using, for example, Kappa statistics to determine the agreement among classifiers. From the study it is concluded that the agreement within the industrial setting is higher than that obtained in a previous study within an academic setting, but it is still in need of improvement. This leads to the conclusion that the experience within industry, as well as the improved information structure in relation to the previous study, aids agreement, but to reach a higher level of agreement, additional education is believed to be needed at the company.

  • 53. Henningsson, Kennet & Wohlin, Claes. Risk-based Trade-off between Verification and Validation: An Industry-motivated Study (2005). Conference paper (Refereed).
  • 54. Henningsson, Kennet & Wohlin, Claes. Understanding the Relations between Software Quality Attributes: A Survey Approach (2002). Conference paper (Refereed).
  • 55. Hu, Ganglan; Aurum, Aybüke; Wohlin, Claes. Adding Value to Software Requirements? An Empirical Study in the Chinese Software Industry (2006). Conference paper (Refereed).
  • 56. Höst, Martin; Wohlin, Claes; Thelin, Thomas. Experimental Context Classification: Incentives and Experience of Subjects (2005). Conference paper (Refereed).
    Abstract [en]

    There is a need to identify factors that affect the result of empirical studies in software engineering research. It is still the case that seemingly identical replications of controlled experiments result in different conclusions due to the fact that all factors describing the experiment context are not clearly defined and hence controlled. In this article, a scheme for describing the participants of controlled experiments is proposed and evaluated. It consists of two main factors, the incentives for participants in the experiment and the experience of the participants. The scheme has been evaluated by classifying a set of previously conducted experiments from literature. It can be concluded that the scheme was easy to use and understand. It is also found that experiments that are classified in the same way to a large extent point at the same results, which indicates that the scheme addresses relevant factors.

  • 57. Jabangwe, Ronald (Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation); Börstler, Jürgen (Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation); Šmite, Darja; Wohlin, Claes (Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation). Empirical Evidence on the Link between Object-Oriented Measures and External Quality Attributes: A Systematic Literature Review (2015). In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 20, no. 3, pp. 640-693. Journal article (Refereed).
    Abstract [en]

    There is a plethora of studies investigating object-oriented measures and their link with external quality attributes, but usefulness of the measures may differ across empirical studies. This study aims to aggregate and identify useful object-oriented measures, specifically those obtainable from the source code of object-oriented systems that have gone through such empirical evaluation. By conducting a systematic literature review, 99 primary studies were identified and traced to four external quality attributes: reliability, maintainability, effectiveness and functionality. A vote-counting approach was used to investigate the link between object-oriented measures and the attributes, and to also assess the consistency of the relation reported across empirical studies. Most of the studies investigate links between object-oriented measures and proxies for reliability attributes, followed by proxies for maintainability. The least investigated attributes were: effectiveness and functionality. Measures from the C&K measurement suite were the most popular across studies. Vote-counting results suggest that complexity, cohesion, size and coupling measures have a better link with reliability and maintainability than inheritance measures. However, inheritance measures should not be overlooked during quality assessment initiatives; their link with reliability and maintainability could be context dependent. There were too few studies traced to effectiveness and functionality attributes; thus a meaningful vote-counting analysis could not be conducted for these attributes. Hence, there is a need for diversification of quality attributes investigated in empirical studies. This would help with identifying useful measures during quality assessment initiatives, and not just for reliability and maintainability aspects.

  • 58. Jabangwe, Ronald; Wohlin, Claes; Petersen, Kai; Šmite, Darja; Börstler, Jürgen (all Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik). A method for investigating the quality of evolving object-oriented software using defects in global software development projects (2016). In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 28, no. 8, pp. 622-641. Journal article (Refereed).
    Abstract [en]

    Context: Global software development (GSD) projects can have distributed teams that work independently in different locations or team members that are dispersed. The various development settings in GSD can influence quality during product evolution. When evaluating quality using defects as a proxy, the development settings have to be taken into consideration. Objective: The aim is to provide a systematic method for supporting investigations of the implication of GSD contexts on defect data as a proxy for quality. Method: A method engineering approach was used to incrementally develop the proposed method. This was done through applying the method in multiple industrial contexts and then using lessons learned to refine and improve the method after application. Results: A measurement instrument and visualization was proposed incorporating an understanding of the release history and understanding of GSD contexts. Conclusion: The method can help with making accurate inferences about development settings because it includes details on collecting and aggregating data at a level that matches the development setting in a GSD context and involves practitioners at various phases of the investigation. Finally, the information that is produced from following the method can help practitioners make informed decisions when planning to develop software in comparable circumstances. Copyright © 2016 John Wiley & Sons, Ltd.

  • 59. Jalali, Samireh & Wohlin, Claes. Agile Practices in Global Software Engineering: A Systematic Map (2010). Conference paper (Refereed).
    Abstract [en]

    This paper presents the results of systematically reviewing the current research literature on the use of agile practices and lean software development in global software engineering (GSE). The primary purpose is to highlight under which circumstances they have been applied efficiently. Some common terms related to agile practices (e.g. scrum, extreme programming) were considered in formulating the search strings, along with a number of alternatives for GSE such as offshoring, outsourcing, and virtual teams. The results were limited to peer-reviewed conference papers/journal articles, published between 1999 and 2009. The synthesis was made through classifying the papers into different categories (e.g. research type, distribution). The analysis revealed that in most cases agile practices were modified with respect to the context and situational requirements. This indicates the need for future research on how to integrate all experiences and practices in a way to assist practitioners when setting up non-collocated agile projects.

    Full text (pdf)
  • 60. Jalali, Samireh & Wohlin, Claes (both Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation). Global software engineering and agile practices: a systematic review (2012). In: Journal of Software Maintenance and Evolution: Research and Practice, ISSN 1532-060X, E-ISSN 1532-0618, Vol. 24, no. 6. Journal article (Refereed).
    Abstract [en]

    Agile practices have received attention from industry as an alternative to plan-driven software development approaches. Agile encourages, for example, small self-organized collocated teams, whereas global software engineering (GSE) implies distribution across cultural, temporal, and geographical boundaries. Hence, combining them is a challenge. A systematic review was conducted to capture the status of combining agility with GSE. The results were limited to peer-reviewed conference papers or journal articles, published between 1999 and 2009. The synthesis was made through classifying the papers into different categories (e.g. publication year, contribution type, research method). At the end, 81 papers were judged as primary for further analysis. The distribution of papers over the years indicated that GSE and Agile in combination has received more attention in the last 5 years. However, the majority of the existing research is industrial experience reports in which Agile practices were modified with respect to the context and situational requirements. The emergent need in this research area is suggested to be developing a framework that considers various factors from different perspectives when incorporating Agile in GSE. Practitioners may use it as a decision-making basis in early phases of software development.

    Full text (pdf)
  • 61. Jalali, Samireh & Wohlin, Claes (both Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation). Systematic literature studies: Database searches vs. backward snowballing (2012). Conference paper (Refereed).
    Abstract [en]

    Systematic studies of the literature can be done in different ways. In particular, different guidelines propose different first steps in their recommendations, e.g. start with search strings in different databases or start with the reference lists of a starting set of papers. In software engineering, the main recommended first step is using search strings in a number of databases, while in information systems, snowballing has been recommended as the first step. This paper compares the two different search approaches for conducting literature review studies. The comparison is conducted by searching for articles addressing "Agile practices in global software engineering". The focus of the paper is on evaluating the two different search approaches. Despite the differences in the included papers, the conclusions and the patterns found in both studies are quite similar. The strengths and weaknesses of each first step are discussed separately and in comparison with each other. It is concluded that none of the first steps is outperforming the other, and the choice of guideline to follow, and hence the first step, may be context-specific, i.e. depending on the area of study.

    Full text (pdf)
  • 62. Jalali, Samireh (Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik); Wohlin, Claes (Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik); Angelis, Lefteris. Investigating the Applicability of Agility Assessment Surveys: A Case Study (2014). In: Journal of Systems and Software, ISSN 0164-1212, Vol. 98, pp. 172-190. Journal article (Refereed).
    Abstract [en]

    Context: Agile software development has become popular in the past decade despite that it is not a particularly well-defined concept. The general principles in the Agile Manifesto can be instantiated in many different ways, and hence the perception of Agility may differ quite a lot. This has resulted in several conceptual frameworks being presented in the research literature to evaluate the level of Agility. However, the evidence of actual use in practice of these frameworks is limited. Objective: The objective in this paper is to identify online surveys that can be used to evaluate the level of Agility in practice, and to evaluate the surveys in an industrial setting. Method: Surveys for evaluating Agility were identified by systematically searching the web. Based on an exploration of the surveys found, two surveys were identified as most promising for our objective. The two surveys selected were evaluated in a case study with three Agile teams in a software consultancy company. The case study included a self-assessment of the Agility level by using the two surveys, interviews with the Scrum master and a team representative, interviews with the customers of the teams and a focus group meeting for each team. Results: The perception of team Agility was judged by each of the teams and their respective customer, and the outcome was compared with the results from the two surveys. Agility profiles were created based on the surveys. Conclusions: It is concluded that different surveys may very well judge Agility differently, which support the viewpoint that it is not a well-defined concept. The researchers and practitioners agreed that one of the surveys, at least in this specific case, provided a better and more holistic assessment of the Agility of the teams in the case study.

  • 63. Jönsson, Per & Wohlin, Claes. A Study on Prioritisation of Impact Analysis Issues: A Comparison between Perspectives (2005). Conference paper (Refereed).
    Abstract [en]

    Impact analysis, which concerns the analysis of the impact of proposed changes to a system, is an important change management activity that previously has been studied mostly with respect to technical aspects. In this paper, we present results from a study where issues with impact analysis were prioritised with respect to criticality by professional software developers from an organisational perspective and a self-perspective. We visualise the prioritisation in a way that allows us to identify priority classes of issues and to discuss differences between the perspectives. Furthermore, we look at issue characteristics that relate to said differences, and identify a number of improvements that could help mitigate the issues. We conclude that looking at multiple perspectives is rewarding and entails certain benefits when dealing with software process improvement, but also that the prioritisation and visualisation approach seems to be good for optimising software process improvement efforts in general.

  • 64. Jönsson, Per & Wohlin, Claes. An Evaluation of k-nearest Neighbour Imputation Using Likert Data (2004). Conference paper (Refereed).
    Abstract [en]

    Studies in many different fields of research suffer from the problem of missing data. With missing data, statistical tests will lose power, results may be biased, or analysis may not be feasible at all. There are several ways to handle the problem, for example through imputation. With imputation, missing values are replaced with estimated values according to an imputation method or model. In the k-Nearest Neighbour (k-NN) method, a case is imputed using values from the k most similar cases. In this paper, we present an evaluation of the k-NN method using Likert data in a software engineering context. We simulate the method with different values of k and for different percentages of missing data. Our findings indicate that it is feasible to use the k-NN method with Likert data. We suggest that a suitable value of k is approximately the square root of the number of complete cases. We also show that by relaxing the method rules with respect to selecting neighbours, the ability of the method remains high for large amounts of missing data without affecting the quality of the imputation.

  • 65. Jönsson, Per & Wohlin, Claes. Benchmarking k-Nearest Neighbour Imputation with Homogeneous Likert Data (2006). In: Empirical Software Engineering, ISSN 1382-3256, Vol. 11, no. 3, pp. 463-489. Journal article (Refereed).
  • 66. Jönsson, Per & Wohlin, Claes. Understanding Impact Analysis: An Empirical Study to Capture Knowledge on Different Organisational Levels (2005). Conference paper (Refereed).
  • 67. Jönsson, Per & Wohlin, Claes. Understanding the Importance of Roles in Architecture Related Process Improvement: A Case Study (2004). Conference paper (Refereed).
  • 68. Jönsson, Per & Wohlin, Claes. Understanding the Importance of Roles in Architecture-Related Process Improvement: A Case Study (2005). Conference paper (Refereed).
    Abstract [en]

    In response to the increasingly challenging task of developing software, many companies turn to Software Process Improvement (SPI). One of many factors that SPI depends on is user (staff) involvement, which is complicated by the fact that process users may differ in viewpoints and priorities. In this paper, we present a case study in which we performed a pre-SPI examination of process users’ viewpoints and priorities with respect to their roles. The study was conducted by means of a questionnaire sent out to the process users. The analysis reveals differences among roles regarding priorities, in particular for product managers and designers, but not regarding viewpoints. This indicates that further research should investigate in which situations roles are likely to differ and in which they are likely to be similar. Moreover, since we initially expected both viewpoints and priorities to differ, it indicates that it is important to cover these aspects in SPI, and not only rely on expectations.

  • 69. Jönsson, Per & Wohlin, Claes. Using Checklists to Support the Change Control Process: A Case Study (2006). Conference paper (Refereed).
  • 70. Karlsson, Lena; Berander, Patrik; Regnell, Björn; Wohlin, Claes. Requirements Prioritisation: An Experiment on Exhaustive Pair-Wise Comparisons versus Planning Game Partitioning (2004). Conference paper (Refereed).
    Abstract [en]

    The process of selecting the right set of requirements for a product release is highly dependent on how well we succeed in prioritising the requirements candidates. There are different techniques available for requirements prioritisation, some more elaborate than others. In order to compare different techniques, a controlled experiment was conducted with the objective of understanding differences regarding time consumption, ease of use, and accuracy. The requirements prioritisation techniques compared in the experiment are the Analytical Hierarchy Process (AHP) and a variation of the Planning Game (PG), isolated from Extreme Programming. The subjects were 15 Ph.D. students and one professor, who prioritised mobile phone features using both methods. It was found that the straightforward and intuitive PG was less time consuming, and considered by the subjects as easier to use, and more accurate than AHP.

  • 71. Karlsson, Lena; Berander, Patrik; Regnell, Björn; Wohlin, Claes. Simple Is Better?: An Experiment on Requirements Prioritisation (2003). Conference paper (Refereed).
    Abstract [en]

    The process of selecting the right set of requirements for a product release is highly dependent on how well we succeed in prioritising the requirements candidates. There are different techniques available for requirements prioritisation, some more elaborate than others. In order to compare different techniques, a controlled experiment was conducted with the objective of understanding differences regarding time consumption, ease of use, and accuracy. The requirements prioritisation techniques compared in the experiment are the Analytical Hierarchy Process (AHP) and a variation of the Planning Game (PG), isolated from Extreme Programming. The subjects were 15 Ph.D. students and one professor, who prioritised mobile phone features using both methods. It was found that the straightforward and intuitive PG was less time consuming, and considered by the subjects as easier to use, and more accurate than AHP.

  • 72. Karlsson, Lena; Thelin, Thomas; Regnell, Björn; Berander, Patrik; Wohlin, Claes. Pair-Wise Comparisons versus Planning Game Partitioning – Experiments on Requirements Prioritisation Techniques (2007). In: Empirical Software Engineering, ISSN 1382-3256, Vol. 12, no. 1, pp. 3-33. Journal article (Refereed).
    Abstract [en]

    The process of selecting the right set of requirements for a product release is dependent on how well the organisation succeeds in prioritising the requirements candidates. This paper describes two consecutive controlled experiments comparing different requirements prioritisation techniques with the objective of understanding differences in time-consumption, ease of use and accuracy. The first experiment evaluates Pair-wise comparisons and a variation of the Planning game. As the Planning game turned out as superior, the second experiment was designed to compare the Planning game to Tool-supported pair-wise comparisons. The results indicate that the manual pair-wise comparisons is the most time-consuming of the techniques, and also the least easy to use. Tool-supported pair-wise comparisons is the fastest technique and it is as easy to use as the Planning game. The techniques do not differ significantly regarding accuracy.

  • 73. Karlström, Daniel; Runeson, Per; Wohlin, Claes. Aggregating Viewpoints for Strategic Software Process Improvement: A Method and a Case Study (2002). In: IEE Proceedings - Software, ISSN 1462-5970, E-ISSN 1463-9831, Vol. 149, no. 5, pp. 143-152. Journal article (Refereed).
    Abstract [en]

    Decisions regarding strategic software process improvement (SPI) are generally based on the management's viewpoint of the situation, and in some cases also the viewpoints of some kind of an SPI group. This may result in strategies which are not accepted throughout the organisation, as the views of how the process is functioning are different throughout the company. A method for identifying the major factors affecting a process-improvement goal and how the perception of the importance of the factors varies throughout the organisation is described. The method lets individuals from the whole development organisation rate the expected effect of these factors from their own viewpoint. In this way the strategic SPI decision can be taken using input from the entire organisation, and any discrepancies in the ratings can also give important SPI-decision information. The method is applied to a case study performed at Fuji Xerox, Tokyo. In the case study, significantly different profiles of the factor ratings came from management compared with those from the engineering staff. This result can be used to support the strategy decision as such, but also to anchor the decision in the organisation.

  • 74. Karlström, Daniel; Runeson, Per; Wohlin, Claes. Aggregating Viewpoints for Strategic Software Process Improvement: A Method and a Case Study (2002). Conference paper (Refereed).
  • 75. Kuzniarz, Ludwik; Staron, Miroslaw; Wohlin, Claes. An Empirical Study on Using Stereotypes to Improve Understanding of UML Models (2004). Conference paper (Refereed).
    Abstract [en]

    Stereotypes were introduced into the Unified Modeling Language (UML) to provide means of customizing this visual, general purpose, object-oriented modeling language, for its usage in specific application domains. The primary purpose of stereotypes is to brand an existing model element with a specific semantics. In addition, stereotypes can also be used as notational shorthand. The paper elaborates on this role of stereotypes from the perspective of UML, clarifies the role and describes a controlled experiment aimed at evaluation of the role – in the context of model understanding. The results of the experiment support the claim that stereotypes with graphical icons for their representation play a significant role in comprehension of models and show the size of the improvement.

  • 76. Kuzniarz, Ludwik; Staron, Miroslaw; Wohlin, Claes. Students as Study Subjects in Software Engineering experimentation (2003). Conference paper (Refereed).
  • 77. Mendes, Emilia (Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap); Felizardo, Katia (Universidade Tecnologica Federal do Parana, BRA); Wohlin, Claes (Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik); Kalinowski, M. (Pontificia Universidade Catolica do Rio de Janeiro, BRA). Search Strategy to Update Systematic Literature Reviews in Software Engineering (2019). In: Proceedings - 45th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2019, Institute of Electrical and Electronics Engineers Inc., 2019, pp. 355-362. Conference paper (Refereed).
    Abstract [en]

    [Context] Systematic Literature Reviews (SLRs) have been adopted within the Software Engineering (SE) domain for more than a decade to provide meaningful summaries of evidence on several topics. Many of these SLRs are now outdated, and there are no standard proposals on how to update SLRs in SE. [Objective] The goal of this paper is to provide recommendations on how best to search for evidence when updating SLRs in SE. [Method] To achieve our goal, we compare and discuss outcomes from applying different search strategies to identifying primary studies in a previously published SLR update on effort estimation. [Results] The use of a single iteration forward snowballing with Google Scholar, and employing the original SLR and its primary studies as a seed set seems to be the most cost-effective way to search for new evidence when updating SLRs. [Conclusions] The recommendations can be used to support decisions on how to update SLRs in SE. © 2019 IEEE.

  • 78. Milicic, Drazen & Wohlin, Claes. Distribution Patterns of Effort Estimations (2004). Conference paper (Refereed).
    Abstract [en]

    Effort estimations within software development projects and the ability to work within these estimations are perhaps the single most important, and at the same time inadequately mastered, discipline for overall project success. This study examines some characteristics of accuracies in software development efforts and identifies patterns that can be used to increase the understanding of the effort estimation discipline as well as to improve the accuracy of effort estimations. The study complements current research by taking a more simplistic approach than usually found within mainstream research concerning effort estimations. It shows that there are useful patterns to be found as well as interesting causalities, usable to increase the understanding and effort estimation capability.

  • 79. Moe, Nils Brede; Barney, Sebastian (Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation); Aurum, Aybüke; Khurum, Mahvish (Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation); Wohlin, Claes (Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation); Barney, Hamish; Gorschek, Tony (Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation); Winata, Martha. Fostering and sustaining innovation in a Fast Growing Agile Company (2012). In: Lecture Notes in Computer Science, Vol. 7343, Madrid: Springer, 2012, pp. 160-174. Conference paper (Refereed).
    Abstract [en]

    Sustaining innovation in a fast growing software development company is difficult. As organisations grow, peoples' focus often changes from the big picture of the product being developed to the specific role they fill. This paper presents two complementary approaches that were successfully used to support continued developer-driven innovation in a rapidly growing Australian agile software development company. The method "FedEx TM Day" gives developers one day to showcase a proof of concept they believe should be part of the product, while the method "20% Time" allows more ambitious projects to be undertaken. Given the right setting and management support, the two approaches can support and improve bottom-up innovation in organizations.

    Full text (pdf)
  • 80.
    Mourao, Erica
    et al.
    Fluminense Fed Univ, BRA.
    Kalinowski, Marcos
    Pontifical Catholic Univ Rio de Janeiro PUC Rio, BRA.
    Murta, Leonardo
    Fluminense Fed Univ, BRA.
    Mendes, Emilia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wohlin, Claes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik. Blekinge Inst Technol, Karlskrona, Sweden.
    Investigating the Use of a Hybrid Search Strategy for Systematic Reviews (2017). In: 11th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM 2017), IEEE, 2017, pp. 193-198. Conference paper (Refereed)
    Abstract [en]

    [Background] Systematic Literature Reviews (SLRs) are one of the important pillars when employing an evidence-based paradigm in Software Engineering. To date most SLRs have been conducted using a search strategy involving several digital libraries. However, significant issues have been reported for digital libraries, and applying such a search strategy requires substantial effort. On the other hand, snowballing has recently arisen as a potentially more efficient alternative or complementary solution. Nevertheless, it requires a relevant seed set of papers. [Aims] This paper proposes and evaluates a hybrid search strategy combining searching in a specific digital library (Scopus) with backward and forward snowballing. [Method] The proposed hybrid strategy was applied to two previously published SLRs that adopted database searches. We investigate whether it is able to retrieve the same included papers with lower effort in terms of the number of analysed papers. The two selected SLRs relate respectively to elicitation techniques (not confined to Software Engineering (SE)) and to a specific SE topic on cost estimation. [Results] Our results provide preliminary support for the proposed hybrid search strategy as being suitable for SLRs investigating a specific research topic within the SE domain. Furthermore, it helps to overcome existing issues with using digital libraries in SE. [Conclusions] The hybrid search strategy provides competitive results, similar to using several digital libraries. However, further investigation is needed to evaluate the hybrid search strategy.

  • 81.
    Mourão, Erica
    et al.
    Fluminense Federal University, BRA.
    Pimentel, João Felipe N.
    Fluminense Federal University, BRA.
    Murta, Leonardo Gresta Paulino
    Fluminense Federal University, BRA.
    Kalinowski, Marcos
    Pontifical Catholic University of Rio de Janeiro (PUC-Rio), BRA.
    Mendes, Emilia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Wohlin, Claes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    On the performance of hybrid search strategies for systematic literature reviews in software engineering (2020). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 123, article id 106294. Journal article (Refereed)
    Abstract [en]

    Context: When conducting a Systematic Literature Review (SLR), researchers usually face the challenge of designing a search strategy that appropriately balances result quality and review effort. Using digital library (or database) searches or snowballing alone may not be enough to achieve high-quality results. On the other hand, using both digital library searches and snowballing together may increase the overall review effort. Objective: The goal of this research is to propose and evaluate hybrid search strategies that selectively combine database searches with snowballing. Method: We propose four hybrid search strategies combining database searches in digital libraries with iterative, parallel, or sequential backward and forward snowballing. We simulated the strategies over three existing SLRs in SE that adopted both database searches and snowballing. We compared the outcome of digital library searches, snowballing, and hybrid strategies using precision, recall, and F-measure to investigate the performance of each strategy. Results: Our results show that, for the analyzed SLRs, combining database searches from the Scopus digital library with parallel or sequential snowballing achieved the most appropriate balance of precision and recall. Conclusion: We put forward that, depending on the goals of the SLR and the available resources, using a hybrid search strategy involving a representative digital library and parallel or sequential snowballing tends to represent an appropriate alternative to be used when searching for evidence in SLRs. © 2020 Elsevier B.V.
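For readers less familiar with the performance measures used, the sketch below shows how precision, recall and F-measure compare a search strategy's retrieved set against an SLR's gold-standard set of included papers. The paper IDs and counts are hypothetical, not taken from the study:

```python
def precision_recall_f(retrieved, relevant):
    """Precision, recall and F-measure for a search strategy, given the
    set of papers it retrieved and the set of papers the SLR includes."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)                      # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)                # harmonic mean
    return precision, recall, f

# Hypothetical data: a database search returning 200 papers,
# of which the SLR ultimately includes 30.
db_hits = {f"p{i}" for i in range(200)}
included = {f"p{i}" for i in range(150, 180)}
p, r, f = precision_recall_f(db_hits, included)
```

In this setting, high recall (few missed papers) is usually mandatory for an SLR, while higher precision directly reduces the screening effort; the F-measure summarises the balance between the two.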

  • 82. Ohlsson, MC
    et al.
    Andrews, Anneliese Amschler
    Wohlin, Claes
    Modelling fault-proneness statistically over a sequence of releases: a case study (2001). In: Journal of Software Maintenance and Evolution: Research and Practice, ISSN 1532-060X, E-ISSN 1532-0618, pp. 167-199. Journal article (Refereed)
    Abstract [en]

    Many of today's software systems evolve through a series of releases that add new functionality and features, in addition to the results of corrective maintenance. As the systems evolve over time it is necessary to keep track of and manage their problematic components. Our focus is to track system evolution and to react before the systems become difficult to maintain. To do the tracking, we use a method based on a selection of statistical techniques. In the case study we report here, which had historical data available primarily on corrective maintenance, we apply the method to four releases of a system consisting of 130 components. In each release, components are classified as fault-prone if the number of defect reports written against them is above a certain threshold. The outcome from the case study shows stabilizing principal components over the releases, and classification trees with lower thresholds in their decision nodes. Also, the variables used in the classification trees' decision nodes are related to changes in the same files. The discriminant functions use more variables than the classification trees and are more difficult to interpret. Box plots highlight the findings from the other analyses. The results show that for a context of corrective maintenance, principal components analysis together with classification trees are good descriptors for tracking software evolution. Copyright (C) 2001 John Wiley & Sons, Ltd.

  • 83. Petersen, Kai
    et al.
    Rönkkö, Kari
    Wohlin, Claes
    The impact of time controlled reading on software inspection effectiveness and efficiency: a controlled experiment (2008). Conference paper (Refereed)
    Abstract [en]

    Reading techniques help to guide reviewers during individual software inspections. In this experiment, we completely transfer the principle of statistical usage testing to inspection reading techniques for the first time. Statistical usage testing relies on a usage profile to determine how intensively certain parts of the system shall be tested from the users' perspective. Usage-based reading applies statistical usage testing principles by utilizing prioritized use cases as a driver for inspecting software artifacts (e.g., design). In order to reflect how intensively certain use cases should be inspected, time budgets are introduced to usage-based reading where a maximum inspection time is assigned to each use case. High priority use cases receive more time than low priority use cases. A controlled experiment is conducted with 23 Software Engineering M.Sc. students inspecting a design document. In this experiment, usage-based reading without time budgets is compared with time controlled usage-based reading. The result of the experiment is that time budgets do not significantly improve inspection performance. In conclusion, it is sufficient to only use prioritized use cases to successfully transfer statistical usage testing to inspections.

  • 84. Petersen, Kai
    et al.
    Wohlin, Claes
    A comparison of issues and advantages in agile and incremental development between state of the art and an industrial case (2009). In: Journal of Systems and Software, ISSN 0164-1212, Vol. 82, no. 9, pp. 1479-1490. Journal article (Refereed)
    Abstract [en]

    Recent empirical studies have been conducted identifying a number of issues and advantages of incremental and agile methods. However, the majority of studies focused on one model (Extreme Programming) and small projects. To draw more general conclusions we conduct a case study in large-scale development identifying issues and advantages, and compare the results with previous empirical studies on the topic. The principal results are that (1) the case study and literature agree on the benefits while new issues arise when using agile in large-scale development and (2) an empirical research framework is needed to make agile studies comparable.

  • 85. Petersen, Kai
    et al.
    Wohlin, Claes
    Context in industrial software engineering research (2009). Conference paper (Refereed)
    Abstract [en]

    In order to draw valid conclusions when aggregating evidence it is important to describe the context in which industrial studies were conducted. This paper structures the context for empirical industrial studies and provides a checklist. The aim is to aid researchers in making informed decisions concerning which parts of the context to include in the descriptions. Furthermore, descriptions of industrial studies were surveyed.

  • 86. Petersen, Kai
    et al.
    Wohlin, Claes
    Issues and advantages of using agile and incremental practices (2008). Conference paper (Refereed)
    Abstract [en]

    The importance of agile methods has increased in recent years due to the need to be more flexible in the face of unstable requirements and high competitive pressure. Recent empirical studies have therefore been conducted identifying a number of issues and advantages of incremental and agile methods. However, the majority of studies focused on one model (Extreme Programming) and small projects. Thus, in order to draw more general conclusions there is a need to also study large-scale implementations of agile and incremental practices. Therefore, this paper (1) investigates a large-scale implementation of agile and incremental practices and identifies issues and advantages, and (2) compares them with the findings of previous studies, which mainly focus on small-scale agile implementations.

  • 87. Petersen, Kai
    et al.
    Wohlin, Claes
    Measuring the flow in lean software development (2011). In: Software, Practice & Experience, ISSN 0038-0644, E-ISSN 1097-024X, Vol. 41, no. 9, pp. 975-996. Journal article (Refereed)
    Abstract [en]

    Responsiveness to customer needs is an important goal in agile and lean software development. One major aspect is to have a continuous and smooth flow that quickly delivers value to the customer. In this paper we apply cumulative flow diagrams to visualize the flow of lean software development. The main contribution is the definition of novel measures connected to the diagrams to achieve the following goals: (1) increase throughput and reduce lead-time to achieve high responsiveness to customers' needs and (2) provide a tracking system that shows the progress/status of software product development. An evaluation of the measures in an industrial case study showed that practitioners found them useful and identified improvements based on the measurements that were in line with lean and agile principles. Furthermore, the practitioners found the measures useful in seeing the progress of development for complex products where many tasks are executed in parallel. The measures are now an integral part of the improvement work at the studied company.
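The paper defines its own measures; as a generic illustration of what can be derived from a cumulative flow diagram, the sketch below computes work in progress, throughput, and an approximate average lead time (via Little's law) from hypothetical weekly cumulative counts. The numbers are invented for illustration and the paper's measure definitions may differ:

```python
# Weekly cumulative counts of work items that have entered ("arrived")
# and left ("departed") a development phase; illustrative numbers only.
arrived  = [5, 12, 20, 26, 33, 40]
departed = [0,  4, 11, 18, 26, 34]

# Work in progress per week: vertical distance between the two curves.
wip = [a - d for a, d in zip(arrived, departed)]

# Throughput per week: new departures in each interval.
throughput = [departed[i] - departed[i - 1] for i in range(1, len(departed))]

# Approximate average lead time (weeks) via Little's law: WIP / throughput.
avg_wip = sum(wip) / len(wip)
avg_throughput = sum(throughput) / len(throughput)
approx_lead_time = avg_wip / avg_throughput
```

A widening gap between the two curves (growing WIP) signals a bottleneck in the flow, which is exactly the kind of pattern such diagrams make visible.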

  • 88. Petersen, Kai
    et al.
    Wohlin, Claes
    Software Process Improvement through the Lean Measurement (SPI-LEAM) Method (2010). In: Journal of Systems and Software, ISSN 0164-1212, Vol. 83, no. 7, pp. 1275-1287. Journal article (Refereed)
    Abstract [en]

    Software process improvement methods help to continuously refine and adjust the software process to improve its performance (e.g., in terms of lead-time, quality of the software product, reduction of change requests, and so forth). Lean software development propagates two important principles that help process improvement, namely identification of waste in the process and considering interactions between the individual parts of the software process from an end-to-end perspective. Adopting lean often requires a large shift in thinking about one's own way of working. One of the main potential sources of failure is attempting too large a shift in the ways of working at once. Therefore, the change to lean has to be made in a continuous and incremental way. In response to this we propose a novel approach that brings together the quality improvement paradigm and lean software development practices, called the Software Process Improvement through the Lean Measurement (SPI-LEAM) Method. The method makes it possible to assess the performance of the development process and take continuous actions to arrive at a leaner software process over time. The method is under implementation in industry and an initial evaluation of the method has been performed.

  • 89. Petersen, Kai
    et al.
    Wohlin, Claes
    The Effect of Moving from a Plan-Driven to an Incremental and Agile Development Approach: An Industrial Case Study (2010). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 15, no. 6, pp. 654-693. Journal article (Refereed)
    Abstract [en]

    So far, only a few in-depth studies have focused on the direct comparison of process models in general, and between plan-driven and incremental/agile approaches in particular. That is, it is not made explicit what the effect is of moving from one model to another. Furthermore, there is limited evidence on advantages and issues encountered in agile software development; this is especially true in the context of large-scale development. The objective of the paper is to investigate how the perception of bottlenecks, unnecessary work, and rework (from here on referred to as issues) changes when migrating from a plan-driven to an incremental software development approach with agile practices (flexible product backlog, face-to-face interaction, and frequent integration), and how commonly perceived these practices are across different systems and development roles. The context in which the objective should be achieved is large-scale development with a market-driven focus. The selection of the context was based on the observation in related work that mostly small software development projects were investigated and that the investigation was focused on one agile model (eXtreme programming). A case study was conducted at a development site of Ericsson AB, located in Sweden, at the end of 2007. In total 33 interviews were conducted in order to investigate the perceived change when migrating from plan-driven to incremental and agile software development, the interviews being the primary source of evidence. For triangulation purposes measurements collected by Ericsson were considered, the measurements relating to unnecessary work (amount of discarded requirements) and rework (data on testing efficiency and maintenance effort). Triangulation in this context means that the measurements were used to confirm the perceived changes with an additional data source.
In total 64 issues were identified, 24 being of a general nature and the remaining 40 being local and therefore unique to individuals' opinions or a specific system. The most common ones were documented and analyzed in detail. The commonality refers to how many persons in different roles and across the systems studied have mentioned the issues for each of the process models. The majority of the most common issues relate to plan-driven development. We also identified common issues remaining for agile after the migration, which were related to testing lead-time, test coverage, software release, and coordination overhead. Improvements were identified as many issues commonly raised for the plan-driven approach were no longer raised for the incremental and agile approach. It is concluded that the recent introduction (starting in 2005, with the study conducted at the end of 2007) of incremental and agile practices brings added value in comparison to the plan-driven approach, which is evident from the absence of critical issues that are encountered in plan-driven development.

  • 90. Petersen, Kai
    et al.
    Wohlin, Claes
    Baca, Dejan
    The waterfall model in large-scale development (2009). Conference paper (Refereed)
    Abstract [en]

    Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.

  • 91. Petersson, Håkan
    et al.
    Thelin, Thomas
    Wohlin, Claes
    Capture-Recapture in Software Inspections after 10 Years Research: Theory, Evaluation and Application (2004). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 72, no. 2, pp. 249-264. Journal article (Refereed)
    Abstract [en]

    Software inspection is a method to detect faults in the early phases of the software life cycle. Capture-recapture was introduced for software inspections in 1992 to estimate the number of faults remaining after an inspection. Since then, several papers have been written in the area, concerning the basic theory, evaluation of models, and application of the method. This paper summarizes the work on capture-recapture for software inspections during these years. Furthermore, and more importantly, the contributions of the papers are classified as theory, evaluation or application, in order to analyse the performed research as well as to highlight the areas of research that need further work. It is concluded that (1) most of the basic theory is investigated within biostatistics, (2) most software engineering research is performed on evaluation, a majority ending up recommending the Mh-JK model, and (3) there is a need for application experiences. In order to support application, an inspection process is presented with decision points based on capture-recapture estimates.
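As a minimal illustration of the underlying idea, and assuming only the simplest two-sample estimator (Lincoln-Petersen with Chapman's correction, not the Mh-JK model the survey discusses), the overlap between two reviewers' fault lists can be used to estimate the total and remaining fault counts. The inspection data below are hypothetical:

```python
def lincoln_petersen(found_a, found_b):
    """Two-sample capture-recapture estimate of the total fault
    population from the fault sets two independent reviewers reported.
    Chapman's correction (+1 terms) avoids division by zero when the
    reviewers found no faults in common."""
    a, b = set(found_a), set(found_b)
    overlap = len(a & b)
    total_est = (len(a) + 1) * (len(b) + 1) / (overlap + 1) - 1
    remaining_est = total_est - len(a | b)   # estimated faults not yet found
    return total_est, remaining_est

# Hypothetical data: reviewer A found 8 faults, B found 6, 4 in common.
a = {1, 2, 3, 4, 5, 6, 7, 8}
b = {1, 2, 3, 4, 9, 10}
total, remaining = lincoln_petersen(a, b)
```

Intuitively, a large overlap suggests the reviewers have nearly exhausted the fault population, while a small overlap suggests many faults remain, which is what makes such estimates usable as decision points in an inspection process.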

  • 92. Rombach, Caroline D.
    et al.
    Kude, Oliver
    Aurum, Aybüke
    Jeffery, Ross
    Wohlin, Claes
    An Empirical Study of an ER-Model Inspection Meeting (2003). Conference paper (Refereed)
    Abstract [en]

    A great benefit of software inspections is that they can be applied at almost any stage of the software development life cycle. This paper documents a large-scale experiment conducted during an Entity Relationship (ER) Model inspection meeting. The experiment was aimed at finding empirically validated answers to the question of which reading technique has a more efficient detection rate when searching for defects in an ER Model. Secondly, the effect of the usage of Roles in a team meeting was also explored. Finally, this research investigated the reviewers' ability to find defects belonging to certain defect categories. The findings showed that the participants using a checklist had a significantly higher detection rate than the Ad Hoc groups. Overall, the groups using Roles had a lower performance than those without Roles. Furthermore, the findings showed that when comparing the groups using Roles to those without Roles, the proportion of syntactic and semantic defects found in the number of overall defects identified did not significantly differ.

  • 93. Rovegård, Per
    et al.
    Angelis, Lefteris
    Wohlin, Claes
    An Empirical Study on Views of Importance of Change Impact Analysis Issues (2008). In: IEEE Transactions on Software Engineering, ISSN 0098-5589, Vol. 34, no. 4, pp. 513-530. Journal article (Refereed)
    Abstract [en]

    Change impact analysis (IA) is a change management activity that previously has been much studied from a technical perspective. For example, much work focuses on methods for determining the impact of a change. In this paper, we present results from a study on the role of IA in the change management process. In the study, IA issues were prioritized with respect to criticality by software professionals from an organizational perspective and a self-perspective. The software professionals belonged to three organizational levels: operative, tactical, and strategic. Qualitative and statistical analyses with respect to differences between perspectives and levels are presented. The results show that important issues for a particular level are tightly related to how the level is defined. Similarly, issues important from an organizational perspective are more holistic than those important from a self-perspective. However, our data indicate that the self-perspective colors the organizational perspective, meaning that personal opinions and attitudes cannot be easily disregarded. In comparing the perspectives and the levels, we visualize the differences in a way that allows us to discuss two classes of issues: high priority and medium priority. The most important issues from this point of view concern fundamental aspects of IA and its execution.

  • 94. Ruhe, Günther
    et al.
    Wohlin, ClaesBlekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Software Project Management in a Changing World (2014). Collection/Anthology (Other academic)
    Abstract [en]

    By bringing together various current directions, Software Project Management in a Changing World focuses on how people and organizations can make their processes more change-adaptive. The selected chapters closely correspond to the project management knowledge areas introduced by the Project Management Body of Knowledge, including its extension for managing software projects. The contributions are grouped into four parts, preceded by a general introduction. Part I “Fundamentals” provides in-depth insights into fundamental topics including resource allocation, cost estimation, and risk management. Part II “Supporting Areas” presents recent experiences and results related to the management of quality systems, knowledge, product portfolios, and global and virtual software teams. Part III “New Paradigms” details new and evolving software-development practices including agile, distributed, and open and inner-source development. Finally, Part IV “Emerging Techniques” introduces search-based techniques, social media, software process simulation and the efficient use of empirical data, and their effects on software-management practices. This book will attract readers from both academia and practice with its excellent balance between new findings and experience of their usage in new contexts. Whenever appropriate, the presentation is based on evidence from empirical evaluation of the proposed approaches. For researchers and graduate students, it presents some of the latest methods and techniques to accommodate new challenges facing the discipline. For professionals, it serves as a source of inspiration for refining their project-management skills in new areas.

  • 95. Scott, Hanna
    et al.
    Wohlin, Claes
    Capture-recapture in Software Unit Testing: A Case Study (2008). Conference paper (Refereed)
    Abstract [en]

    Quantitative failure estimates for software systems are traditionally made at the end of testing using software reliability growth modeling. A persistent problem with most kinds of failure estimation methods and models is the dependency on historical data. This paper presents a method for estimating the total number of failures possible to provoke from a unit, without any dependency on historical data. The method combines the results from having several developers test the same unit with capture-recapture models to create an estimate of the “remaining” number of failures. The evaluation of the approach consists of two steps: first a pre-study where the tools and methods are tested in a large open source project, followed by an add-on to a project at a medium sized software company. The evaluation was a success. An estimate was created, and it can be used both as a quality gatekeeper for units and as input to functional and system testing.

  • 96. Staron, Miroslaw
    et al.
    Kuzniarz, Ludwik
    Wohlin, Claes
    An Industrial Replication of Empirical Evaluation of Using Stereotypes to Improve Comprehension of UML Models (2004). Conference paper (Refereed)
  • 97. Staron, Miroslaw
    et al.
    Kuzniarz, Ludwik
    Wohlin, Claes
    Empirical Assessment of Using Stereotypes to Improve Comprehension of UML Models: A Set of Experiments (2006). In: Journal of Systems and Software, ISSN 0164-1212, Vol. 79, no. 5, pp. 727-742. Journal article (Refereed)
  • 98. Staron, Miroslaw
    et al.
    Wohlin, Claes
    An Industrial Case Study on the Choice between Language Customization Mechanisms (2006). Conference paper (Refereed)
  • 99. Stringfellow, Catherine
    et al.
    Andrews, Anneliese Amschler
    Wohlin, Claes
    Petersson, Håkan
    Estimating the Number of Components with Defects Post-Release that Showed No Defects in Testing (2002). In: Software Testing, Verification & Reliability, ISSN 0960-0833, E-ISSN 1099-1689, Vol. 12, no. 2, pp. 93-122. Journal article (Refereed)
    Abstract [en]

    Components that have defects after release, but not during testing, are very undesirable as they point to 'holes' in the testing process. Either new components were not tested enough, or old ones were broken during enhancements and defects slipped through testing undetected. The latter is particularly pernicious, since customers are less forgiving when existing functionality is no longer working than when a new feature is not working quite properly. Rather than using capture-recapture models and curve-fitting methods to estimate the number of remaining defects after inspection, these methods are adapted to estimate the number of components with post-release defects that have no defects in testing. A simple experience-based method is used as a basis for comparison. The estimates can then be used to make decisions on whether or not to stop testing and release the software. While most investigations so far have been experimental or have used virtual inspections to do a statistical validation, the investigation presented in this paper is a case study. This case study evaluates how well the capture-recapture, curve-fitting and experience-based methods work in practice. The results show that the methods work quite well. A further benefit of these techniques is that they can be applied to new systems for which no historical data are available and to releases that are very different from each other.

  • 100. Svahnberg, Mikael
    et al.
    Aurum, Aybüke
    Wohlin, Claes
    Using Students as Subjects: an Empirical Evaluation (2008). Conference paper (Refereed)
    Abstract [en]

    An important task in Requirements Engineering is to select which requirements should go into a specific release of a system. This is a complex decision that requires balancing multiple perspectives against each other. In this article we investigate what students imagine is important to professionals in requirements selection. The reason is to understand whether students are able to picture what industry professionals value, and whether the courses provided to them convey the state of industry practice. The results indicate that students have a good understanding of the way industry acts in the context of requirements selection, and that students may work well as subjects in empirical studies in this area.
