1 - 50 of 69
  • 1. Azhar, Damir
    et al.
    Riddle, Patricia
    Mendes, Emilia
    Blekinge Institute of Technology, School of Computing.
    Mittas, Nikolaos
    Angelis, Lefteris
    Using ensembles for web effort estimation (2013). Conference paper (Refereed)
    Abstract [en]

    Background: Despite the number of Web effort estimation techniques investigated, there is no consensus as to which technique produces the most accurate estimates, an issue shared by effort estimation in the general software estimation domain. A previous study in this domain has shown that ensembles of estimation techniques can address this issue. Aim: The aim of this paper is to investigate whether ensembles of effort estimation techniques will be similarly successful when used on Web project data. Method: The previous study built ensembles using solo effort estimation techniques that were deemed superior. In order to identify these superior techniques, two approaches were investigated: the first replicated the methodology used in the previous study, while the second used the Scott-Knott algorithm. Both approaches were applied to the same 90 solo estimation techniques on Web project data from the Tukutuku dataset. The replication identified 16 solo techniques that were deemed superior and were used to build 15 ensembles, while the Scott-Knott algorithm identified 19 superior solo techniques that were used to build two ensembles. Results: The ensembles produced by both approaches performed very well against solo effort estimation techniques. With the replication, the top 12 techniques were all ensembles, with the remaining 3 ensembles falling within the top 17 techniques. These 15 effort estimation ensembles, along with the 2 built by the second approach, were grouped into the best cluster of effort estimation techniques by the Scott-Knott algorithm. Conclusion: While it may not be possible to identify a single best technique, the results suggest that ensembles of estimation techniques consistently perform well even when using Web project data.
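    As a hedged illustration of the ensemble idea summarized above (not the paper's exact setup: the solo techniques, project data, and averaging rule here are invented for the sketch), an ensemble can simply combine the predictions of several solo estimators:

    ```python
    # Hypothetical project data: (size in web pages, actual effort in person-hours).
    projects = [(10, 80), (20, 150), (40, 310), (60, 470), (80, 610)]

    def mean_effort(train, size):
        # Solo technique 1: mean of past efforts (ignores size).
        return sum(e for _, e in train) / len(train)

    def nearest_neighbour(train, size):
        # Solo technique 2: effort of the most similar past project (by size).
        return min(train, key=lambda p: abs(p[0] - size))[1]

    def linear_fit(train, size):
        # Solo technique 3: ordinary least squares on size vs. effort.
        n = len(train)
        xs = [s for s, _ in train]
        mx = sum(xs) / n
        my = sum(e for _, e in train) / n
        slope = sum((s - mx) * (e - my) for s, e in train) / sum((s - mx) ** 2 for s in xs)
        return my + slope * (size - mx)

    def ensemble(train, size):
        # Combine the solo estimates by simple averaging.
        solos = (mean_effort, nearest_neighbour, linear_fit)
        return sum(technique(train, size) for technique in solos) / len(solos)

    estimate = ensemble(projects, 50)
    ```

    In the studies above, which solo techniques enter the ensemble is itself decided empirically (e.g. via Scott-Knott clustering of their accuracy), rather than fixed up front as in this sketch.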

  • 2.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Freitas, Vitor
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Effort Estimation in Global Software Development: A Systematic Literature Review (2014). In: Proceedings of the 2014 9th IEEE International Conference on Global Software Engineering, 2014, p. 135-144. Conference paper (Refereed)
    Abstract [en]

    Nowadays, software systems are a key factor in the success of many organizations, as in most cases they play a central role in helping them attain a competitive advantage. However, despite their importance, software systems may be quite costly to develop, substantially decreasing companies' profits. In order to tackle this challenge, many organizations look for ways to decrease costs and increase profits by applying new software development approaches, like Global Software Development (GSD). Some aspects of a software project, like communication, cooperation and coordination, are more challenging in globally distributed than in co-located projects, since language, cultural and time zone differences are factors that can increase the effort required to carry out a software project globally. Communication, coordination and cooperation directly affect the effort estimation of a project, which is one of the critical tasks in the management of a software development project. There are many studies on effort estimation methods/techniques for co-located projects. However, there is evidence that the co-located approaches do not fit GSD. This paper therefore presents the results of a systematic literature review of effort estimation in the context of GSD, which aimed to help both researchers and practitioners obtain a holistic view of the current state of the art regarding effort estimation in the context of GSD. The results suggest that there is room to improve the current state of the art on effort estimation in GSD.

  • 3.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An Empirical Investigation on Effort Estimation in Agile Global Software Development (2015). In: Proceedings of the 2015 IEEE 10th International Conference on Global Software Engineering, 2015, p. 38-45. Conference paper (Refereed)
    Abstract [en]

    Effort estimation is a project management activity that is mandatory for the execution of software projects. Despite its importance, only a few studies have been published on such activities within the Agile Global Software Development (AGSD) context. Their aggregated results were recently published as part of a secondary study that reported the state of the art on effort estimation in AGSD. This study aims to complement the above-mentioned secondary study by means of an empirical investigation on the state of the practice on effort estimation in AGSD. To do so, a survey was carried out using as instrument an on-line questionnaire and a sample comprising software practitioners experienced in effort estimation within the AGSD context. Results show that the effort estimation techniques used within the AGSD and collocated contexts remained unchanged, with planning poker being the one employed the most. Sourcing strategies were found to have no or a small influence upon the choice of estimation techniques. With regard to effort predictors, global challenges such as cultural and time zone differences were reported, in addition to factors that are commonly considered in the collocated context, such as team experience. Finally, respondents reported many challenges that impact the accuracy of the effort estimates, such as problems with the software requirements and the fact that the communication effort between sites is not properly accounted for.

  • 4.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Specialized Global Software Engineering Taxonomy for Effort Estimation (2016). In: International Conference on Global Software Engineering, IEEE Computer Society, 2016, p. 154-163. Conference paper (Refereed)
    Abstract [en]

    To facilitate the sharing and combination of knowledge by Global Software Engineering (GSE) researchers and practitioners, the need for a common terminology and knowledge classification scheme has been identified, and as a consequence a taxonomy and an extension were proposed. In addition, a systematic literature review and a survey on, respectively, the state of the art and the state of practice of effort estimation in GSE were conducted, showing that despite its importance in practice, the GSE effort estimation literature is scarce and reported in an ad-hoc way. Therefore, this paper proposes a specialized GSE taxonomy for effort estimation, which was built on the recently proposed general GSE taxonomy (including the extension) and was also based on the findings from two empirical studies and expert knowledge. The specialized taxonomy was validated using data from eight finished GSE projects. Our effort estimation taxonomy for GSE can help both researchers and practitioners by supporting the reporting of new GSE effort estimation studies, i.e., making new studies easier to identify, compare, aggregate and synthesize. Further, it can also help practitioners by providing them with an initial set of factors that can be considered when estimating effort for GSE projects.

  • 5.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Effort Estimation in Agile Global Software Development Context (2014). In: Agile Methods. Large-Scale Development, Refactoring, Testing, and Estimation: XP 2014 International Workshops, Rome, Italy, May 26-30, 2014, Revised Selected Papers, Springer, 2014, Vol. 199, p. 182-192. Conference paper (Refereed)
    Abstract [en]

    Both Agile Software Development (ASD) and Global Software Development (GSD) are 21st century trends in the software industry. Many studies are reported in the literature wherein software companies have applied an agile method or practiced GSD. Given that effort estimation plays a remarkable role in software project management, how do companies perform effort estimation when they use an agile method in a GSD context? Based on two effort estimation Systematic Literature Reviews (SLRs) - one within the ASD context and the other in a GSD context - this paper reports a study in which we combined the results of these SLRs to report the state of the art of effort estimation in the agile global software development (AGSD) context.

  • 6.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    An Extended Global Software Engineering Taxonomy (2016). In: Journal of Software Engineering Research and Development, ISSN 2195-1721, Vol. 4, no 3. Article in journal (Refereed)
    Abstract [en]

    In Global Software Engineering (GSE), the need for a common terminology and knowledge classification has been identified to facilitate the sharing and combination of knowledge by GSE researchers and practitioners. A GSE taxonomy was recently proposed to address such a need, focusing on a core set of dimensions; however, its dimensions do not represent an exhaustive list of relevant GSE factors. Therefore, this study extends the existing taxonomy, incorporating new GSE dimensions that were identified by means of two recently conducted empirical studies.

  • 7.
    Dallora Moraes, Ana Luiza
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Anderberg, Peter
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Kvist, Ola
    KI, SWE.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ruiz, Sandra
    KI, SWE.
    Sanmartin Berglund, Johan
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Bone age assessment with various machine learning techniques: A systematic literature review and meta-analysis (2019). In: PLOS ONE, E-ISSN 1932-6203, Vol. 14, no 7, article id e0220242. Article, review/survey (Refereed)
    Abstract [en]

    Background: The assessment of bone age and skeletal maturity and its comparison to chronological age is an important task in the medical environment, for the diagnosis of pediatric endocrinology, orthodontic and orthopedic disorders, and in the legal environment, to determine whether an individual is a minor when documents are lacking. Being a time-consuming activity that can be prone to inter- and intra-rater variability, methods that can automate it, like Machine Learning techniques, are of value. Objective: The goal of this paper is to present the state-of-the-art evidence, trends and gaps in research related to bone age assessment studies that make use of Machine Learning techniques. Method: A systematic literature review was carried out, starting with the writing of the protocol, followed by searches on three databases - Pubmed, Scopus and Web of Science - to identify the relevant evidence related to bone age assessment using Machine Learning techniques. One round of backward snowballing was performed to find additional studies. A quality assessment was performed on the selected studies to check for bias, and low-quality studies were removed. Data was extracted from the included studies to build summary tables. Lastly, a meta-analysis was performed on the performances of the selected studies. Results: 26 studies constituted the final set of included studies. Most of them proposed automatic systems for bone age assessment and investigated methods based on hand and wrist radiographs. The samples used in the studies were mostly comprehensive or bordered the age of 18, and the data originated in most cases from the United States and Western Europe. Few studies explored ethnic differences. Conclusions: There is a clear focus of the research on bone age assessment methods based on radiographs, whilst other types of medical imaging without radiation exposure (e.g. magnetic resonance imaging) are not much explored in the literature. Also, socioeconomic and other aspects that could influence bone age were not addressed in the literature. Finally, studies that make use of more than one region of interest for bone age assessment are scarce.

  • 8.
    Felizardo, Katia
    et al.
    Federal University of Technology, BRA.
    De Souza, Erica
    Federal University of Technology, BRA.
    Falbo, Ricardo
    Federal University of Espírito Santo, BRA.
    Vijaykumar, Nandamudi
    Instituto Nacional de Pesquisas Espaciais, BRA.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nakagawa, Elisa Yumi
    Universidade de Sao Paulo, BRA.
    Defining protocols of systematic literature reviews in software engineering: A survey (2017). In: Proceedings - 43rd Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2017 / [ed] Felderer, M; Olsson, HH; Skavhaug, A, Institute of Electrical and Electronics Engineers Inc., 2017, p. 202-209, article id 8051349. Conference paper (Refereed)
    Abstract [en]

    Context: Despite being defined during the first phase of the Systematic Literature Review (SLR) process, the protocol is usually refined when other phases are performed. Several researchers have reported their experiences in applying SLRs in Software Engineering (SE); however, there is still a lack of studies discussing the iterative nature of the protocol definition, especially how it should be perceived by researchers conducting SLRs. Objective: The main goal of this study is to perform a survey aiming to identify: (i) the perception of SE researchers related to protocol definition; (ii) the activities of the review process that typically lead to protocol refinements; and (iii) which protocol items are refined in those activities. Method: A survey was performed with 53 SE researchers. Results: Our results show that: (i) protocol definition and pilot test are the two activities that most lead to further protocol refinements; (ii) the data extraction form is the most modified item. Besides that, this study confirmed the iterative nature of the protocol definition. Conclusions: An iterative pilot test can facilitate refinements in the protocol.

  • 9.
    Felizardo, Katia Romero
    et al.
    Fed Technol Univ Parana UTFPR CP, BRA.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Kalinowski, Marcos
    Fluminense Fed Univ, BRA.
    Souza, Erica Ferreira
    Fed Technol Univ Parana UTFPR CP, BRA.
    Vijaykumar, Nandamudi L.
    Natl Inst Space Res INPE, BRA.
    Using Forward Snowballing to update Systematic Reviews in Software Engineering (2016). In: ESEM'16: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, Association for Computing Machinery, 2016. Conference paper (Refereed)
    Abstract [en]

    Background: A Systematic Literature Review (SLR) is a methodology used to aggregate relevant evidence related to one or more research questions. Whenever new evidence is published after the completion of an SLR, the SLR should be updated in order to preserve its value. However, updating SLRs involves significant effort. Objective: The goal of this paper is to investigate the application of forward snowballing to support the update of SLRs. Method: We compare the outcomes of an update achieved using forward snowballing versus a published update using the search-based approach, i.e., searching for studies in electronic databases using a search string. Results: Forward snowballing showed higher precision and slightly lower recall. It reduced the number of primary studies to filter by more than five times, but missed one relevant study. Conclusions: Due to its high precision, we believe that the use of forward snowballing considerably reduces the effort involved in updating SLRs in Software Engineering; however, the risk of missing relevant papers should not be underrated.
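    The forward-snowballing idea summarized above can be sketched in a few lines (paper IDs and the ground truth are invented for the example; in practice each candidate is screened manually rather than checked against a known answer set):

    ```python
    # Forward snowballing for an SLR update: the candidate studies are the
    # papers that cite the studies already included in the original review.
    citing_papers = {
        "included-A": ["new-1", "new-2", "unrelated-1"],
        "included-B": ["new-2", "new-3", "unrelated-2"],
    }
    relevant = {"new-1", "new-2", "new-3", "new-4"}  # ground truth for the update

    candidates = set()
    for citing in citing_papers.values():
        candidates.update(citing)

    # Screening keeps the candidates that are actually relevant.
    selected = candidates & relevant
    precision = len(selected) / len(candidates)
    recall = len(selected) / len(relevant)
    ```

    Here "new-4" cites neither included study, so snowballing misses it and recall drops below 1.0, mirroring the paper's finding of high precision but a slightly lower recall than database search.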

  • 10.
    Guimarães, Gleyser
    et al.
    Federal University of Campina Grande, Brazil.
    Costa, Icaro
    VIRTUS Research, Development, and Innovation Center, Brazil.
    Perkusich, Mirko
    VIRTUS Research, Development, and Innovation Center, Brazil.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Santos, Danilo
    Federal University of Campina Grande, Brazil.
    Almeida, Hyggo
    Federal University of Campina Grande, Brazil.
    Perkusich, Angelo
    Federal University of Campina Grande, Brazil.
    Investigating the relationship between personalities and agile team climate: A replicated study (2024). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 169, article id 107407. Article in journal (Refereed)
    Abstract [en]

    Context: A study in 2020 (S1) explored the relationship between personality traits and team climate perceptions of software professionals working in agile teams. S1 surveyed 43 software professionals from a large telecom company in Sweden and found that a person's ability to get along with team members (Agreeableness) significantly and positively influences the perceived level of team climate. Further, they observed that personality traits accounted for less than 15% of the variance in team climate. Objective: The study described herein replicates S1 using data gathered from 148 software professionals from an industrial partner in Brazil. Method: We used the same research methods as S1. We employed a survey to gather the personality and climate data, which was later analyzed using correlation and regression analyses. The former aimed to measure the level of association between personality traits and climate, and the latter to estimate team climate factors using personality traits as predictors. Results: The correlation analyses showed statistically significant and positive associations between two personality traits - Agreeableness and Conscientiousness - and all five team climate factors. There was also a significant and positive association between Openness and Team Vision. Our results corroborate those from S1 with respect to two personality traits - Openness and Agreeableness; however, in S1, Openness was significantly and positively associated with Support for Innovation (not Team Vision). In regard to Agreeableness, in S1 it was also significantly and positively associated with perceived team climate. Furthermore, our regression models also support S1's findings - personality traits accounted for less than 15% of the variance in team climate. Conclusion: Despite differences in location, sample size, and operational domain, our study confirmed S1's results on the limited influence of personality traits. Agreeableness and Openness were significant predictors for team climate, although the predictive factors differed. These discrepancies highlight the necessity for further research, incorporating larger samples and additional predictor variables, to better comprehend the intricate relationship between personality traits and team climate across diverse cultural and professional settings.

  • 11. Kalinowski, Marcos
    et al.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Travassos, G.H.
    An industry ready defect causal analysis approach exploring Bayesian networks (2014). In: Lecture Notes in Business Information Processing, Vienna: Springer, 2014, Vol. 166, p. 12-33. Conference paper (Refereed)
    Abstract [en]

    Defect causal analysis (DCA) has shown itself to be an efficient means to improve the quality of software processes and products. A DCA approach exploring Bayesian networks, called DPPI (Defect Prevention-Based Process Improvement), resulted from research following an experimental strategy. Its conceptual phase considered evidence-based guidelines acquired through systematic reviews and feedback from experts in the field. Afterwards, in order to move towards industry readiness, the approach evolved based on the results of an initial proof of concept and a set of primary studies. This paper describes the experimental strategy followed and provides an overview of the resulting DPPI approach. Moreover, it presents results from applying DPPI in industry in the context of a real software development lifecycle, which allowed further comprehension of and insights into using the approach from an industrial perspective.

  • 12. Kocaguneli, Ekrem
    et al.
    Menzies, Tim
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Transfer learning in effort estimation (2015). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 20, no 3, p. 813-843. Article in journal (Refereed)
    Abstract [en]

    When projects lack sufficient local data to make predictions, they try to transfer information from other projects. How can we best support this process? In the field of software engineering, transfer learning has been shown to be effective for defect prediction. This paper checks whether it is possible to build transfer learners for software effort estimation. We use data on 154 projects from 2 sources to investigate transfer learning between different time intervals, and 195 projects from 51 sources to provide evidence on the value of transfer learning for traditional cross-company learning problems. We find that the same transfer learning method can usefully transfer effort estimation results for both the cross-company learning problem and the cross-time learning problem. It is misguided to think that: (1) an organization's old data is irrelevant to the current context, or (2) another organization's data cannot be used for local solutions. Transfer learning is a promising research direction that transfers relevant cross data between time intervals and domains.
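    As an illustration only (the paper's actual transfer learner and datasets are more sophisticated than this; the data and single size feature here are invented), a minimal cross-company transfer can estimate local effort by analogy to the k most similar cross-company projects:

    ```python
    # Hypothetical cross-company training data: (project size, effort) pairs.
    cross_company = [(12, 100), (25, 190), (33, 260), (48, 400), (70, 590)]

    def transfer_estimate(source, target_size, k=2):
        # Keep only the k source projects most relevant (closest in size)
        # to the target project, then average their efforts.
        nearest = sorted(source, key=lambda p: abs(p[0] - target_size))[:k]
        return sum(effort for _, effort in nearest) / k

    estimate = transfer_estimate(cross_company, 30)  # analogues: (33, 260), (25, 190)
    ```

    The key design choice the paper investigates is precisely this relevancy filtering: transferring only cross data similar to the local context, rather than all of it.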

  • 13. Lokan, Chris
    et al.
    Mendes, Emilia
    Blekinge Institute of Technology, School of Computing.
    Investigating the use of duration-based moving windows to improve software effort prediction (2012). Conference paper (Refereed)
    Abstract [en]

    To date most research in software effort estimation has not taken into account any form of chronological split when selecting projects for training and testing sets. A chronological split represents the use of a project's starting and completion dates, such that any model that estimates effort for a new project p only uses as its training set projects that were completed prior to p's starting date. Three recent studies investigated a type of chronological split called a moving window, which represents a subset of the most recent projects completed prior to a project p's starting date. They found some evidence in favour of using windows whenever projects were recent. These studies all defined window sizes as fixed numbers of recent projects. In practice, we suggest that estimators are more likely to think in terms of elapsed time than the size of the data set when deciding which projects to include in a training set. Therefore, this paper investigates the effect on accuracy of using moving windows of various durations to form training sets on which to base effort estimates. Our results show that the use of windows based on duration can affect the accuracy of estimates (in this data set, a window of about three years' duration appears best), but to a lesser extent than windows based on a fixed number of projects.
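    A duration-based moving window as studied above can be sketched as follows (the dates, efforts, and three-year window are invented for the example; the studies use real industrial datasets and regression models on top of the selected training set):

    ```python
    from datetime import date

    # Hypothetical completed projects: (completion date, effort in person-hours).
    completed = [
        (date(2008, 5, 1), 300),
        (date(2010, 2, 1), 420),
        (date(2011, 7, 1), 380),
        (date(2012, 3, 1), 510),
    ]

    def training_window(projects, new_start, window_days):
        # Keep only projects finished before the new project's start date
        # and no longer ago than the window's duration.
        return [p for p in projects
                if 0 < (new_start - p[0]).days <= window_days]

    # For a project starting 2012-06-01 with a ~3-year window,
    # the 2008 project falls outside the window and is excluded.
    window = training_window(completed, date(2012, 6, 1), 3 * 365)
    ```

    A fixed-size window would instead sort by completion date and take the n most recent projects, regardless of how long ago they finished; the duration-based variant reflects how estimators reason about "recent enough" data.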

  • 14. Lokan, Chris
    et al.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Investigating the use of duration-based moving windows to improve software effort prediction: A replicated study (2014). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 56, no 9, p. 1063-1075. Article in journal (Refereed)
    Abstract [en]

    Context: Most research in software effort estimation has not considered chronology when selecting projects for training and testing sets. A chronological split represents the use of a project's starting and completion dates, such that any model that estimates effort for a new project p only uses as training data projects that were completed prior to p's start. Four recent studies investigated the use of chronological splits, using moving windows wherein only the most recent projects completed prior to a project's starting date were used as training data. The first three studies (S1-S3) found some evidence in favor of using windows; they all defined window sizes as fixed numbers of recent projects. In practice, we suggest that estimators think in terms of elapsed time rather than the size of the data set when deciding which projects to include in a training set. In the fourth study (S4) we showed that the use of windows based on duration can also improve estimation accuracy. Objective: This paper's contribution is to extend S4 using an additional dataset, and to also investigate the effect on accuracy when using moving windows of various durations. Method: Stepwise multivariate regression was used to build prediction models, using all available training data, and also using windows of various durations to select training data. Accuracy was compared based on absolute residuals and MREs; the Wilcoxon test was used to check the statistical significance of differences between results. Accuracy was also compared against estimates derived from windows containing fixed numbers of projects. Results: Neither fixed-size nor fixed-duration windows provided superior estimation accuracy in the new data set. Conclusions: Contrary to intuition, our results suggest that it is not always beneficial to exclude old data when estimating effort for new projects. When windows are helpful, windows based on duration are effective.

  • 15.
    Lokan, Chris
    et al.
    UNSW Canberra, Australia.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Investigating the use of moving windows to improve software effort prediction: a replicated study (2017). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 22, no 2, p. 716-767. Article in journal (Refereed)
    Abstract [en]

    To date most research in software effort estimation has not taken chronology into account when selecting projects for training and validation sets. A chronological split represents the use of a project’s starting and completion dates, such that any model that estimates effort for a new project p only uses as its training set projects that have been completed prior to p’s starting date. A study in 2009 (“S3”) investigated the use of chronological split taking into account a project’s age. The research question investigated was whether the use of a training set containing only the most recent past projects (a “moving window” of recent projects) would lead to more accurate estimates when compared to using the entire history of past projects completed prior to the starting date of a new project. S3 found that moving windows could improve the accuracy of estimates. The study described herein replicates S3 using three different and independent data sets. Estimation models were built using regression, and accuracy was measured using absolute residuals. The results contradict S3, as they do not show any gain in estimation accuracy when using windows for effort estimation. This is a surprising result: the intuition that recent data should be more helpful than old data for effort estimation is not supported. Several factors, which are discussed in this paper, might have contributed to such contradicting results. Some of our future work entails replicating this work using other datasets, to understand better when using windows is a suitable choice for software companies.

  • 16.
    Manzano, Martí
    et al.
    Universitat Politècnica de Catalunya, ESP.
    Ayala, Claudia P.
    Universitat Politècnica de Catalunya, ESP.
    Gómez, Cristina
    Universitat Politècnica de Catalunya, ESP.
    Abherve, Antonin
    Softeam Group, FRA.
    Franch, Xavier
    Universitat Politècnica de Catalunya, ESP.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A Method to Estimate Software Strategic Indicators in Software Development: An Industrial Application2021In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 129, article id 106433Article in journal (Refereed)
    Abstract [en]

    Context: Exploiting software development related data from software-development intensive organizations to support tactical and strategic decision making is a challenge. Combining data-driven approaches with expert knowledge has been highlighted as a sensible way to lead software-development intensive organizations to sound decision-making improvements. However, most of the existing proposals lack important aspects that hinder their industrial uptake, such as customization guidelines to fit the proposals to other contexts and/or automatic or semi-automatic data collection support for putting them forward in a real organization. As a result, existing proposals are rarely used in the industrial context. Objective: Support software-development intensive organizations with guidance and tools for exploiting software development related data and expert knowledge to improve their decision making. Method: We developed a novel method called SESSI (Specification and Estimation of Software Strategic Indicators), articulated from industrial experiences with Nokia, Bittium, Softeam and iTTi in the context of the Q-Rapids European project, following a design science approach. As part of the industrial summative evaluation, we performed the first case study focused on the application of the method. Results: We detail the phases and steps of the SESSI method and illustrate its application in the development of ModelioNG, a software product of the development firm Modeliosoft. Conclusion: The application of the SESSI method in the context of the ModelioNG case study provided us with useful feedback to improve the method and evidenced that applying the method was feasible in this context. © 2020 Elsevier B.V.

  • 17.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Applying a knowledge management technique to improve risk assessment and effort estimation of healthcare software projects2014In: Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937, Vol. 457, p. 40-56Article in journal (Refereed)
    Abstract [en]

    One of the pillars of sound Software Project Management is reliable effort estimation. It is therefore important to fully identify the fundamental factors that affect an effort estimate for a new project and how these factors are inter-related. This paper describes a case study where a Knowledge Management technique was employed to build an expert-based effort estimation model to estimate effort for healthcare software projects. This model was built with the participation of seven project managers, and was validated using data from 22 past finished projects. The model led to numerous changes in process and also in business. The company adapted their existing effort estimation process to be in line with the model that was created, and the use of a mathematically-based model also led to an increase in the number of projects being delegated to this company by other company branches worldwide.

  • 18.
    Mendes, Emilia
    Blekinge Institute of Technology, School of Computing.
    Improving Software Effort Estimation Using an Expert-centred Approach2012In: Lecture Notes in Computer Science, Springer , 2012, Vol. 7623, p. 18-33Conference paper (Refereed)
    Abstract [en]

    A cornerstone of software project management is effort estimation, the process by which effort is forecast and used as a basis to predict costs and allocate resources effectively, so enabling projects to be delivered on time and within budget. Effort estimation is a very complex domain where the relationships between factors are non-deterministic and inherently uncertain, and where the corresponding decisions and predictions require reasoning with uncertainty. Most studies in this field, however, have to date investigated ways to improve software effort estimation by proposing and comparing techniques to build effort prediction models where such models are built solely from data on past software projects - data-driven models. The drawback of such an approach is threefold: first, it ignores the explicit inclusion of uncertainty, which is inherent to the effort estimation domain, into such models; second, it ignores the explicit representation of causal relationships between factors; third, it relies solely on the variables that are part of the dataset used for model building, under the assumption that those variables represent the fundamental factors within the context of software effort prediction. Recently, as part of projects funded first by the New Zealand and later the Brazilian government, we investigated the use of an expert-centred approach in combination with a technique that enables the explicit inclusion of uncertainty and causal relationships as a means to improve software effort estimation. This paper first provides an overview of the effort estimation process, followed by a discussion of how an expert-centred approach to improving such a process can be advantageous to software companies. In addition, we detail our experience building and validating six different expert-based effort estimation models for ICT companies in New Zealand and Brazil. Post-mortem interviews with the participating companies showed that they found the entire process extremely beneficial and worthwhile, and that all the models created remained in use by those companies. Finally, the methodology that is the focus of this paper, centred on expert knowledge elicitation and participation, can be employed not only to improve a software effort estimation process, but also to improve other project management-related activities.

  • 19.
    Mendes, Emilia
    Blekinge Institute of Technology, School of Computing.
    Using expert-based bayesian networks as decision support systems to improve project management of healthcare software projects2013Conference paper (Refereed)
    Abstract [en]

    One of the pillars of sound Software Project Management is reliable effort estimation. It is therefore important to fully identify the fundamental factors that affect an effort estimate for a new project and how these factors are inter-related. This paper describes a case study where a Bayesian Network model to estimate effort for healthcare software projects was built. This model was elicited solely from expert knowledge, with the participation of seven project managers, and was validated using data from 22 past finished projects. The model led to numerous changes in process and also in business. The company adapted their existing effort estimation process to be in line with the model that was created, and the use of a mathematically-based model also led to an increase in the number of projects being delegated to this company by other company branches worldwide.
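    As a toy illustration of how an expert-elicited Bayesian Network produces an estimate, the two-node model below (all probabilities invented, far simpler than the paper's model) computes a marginal probability by summing over a parent factor:

    ```python
    # Invented CPTs for a minimal network: complexity -> effort.
    p_complexity = {"low": 0.6, "high": 0.4}          # prior, from experts
    p_high_effort_given = {"low": 0.2, "high": 0.7}   # P(effort=high | complexity)

    def p_effort_high():
        """Marginalise the parent out: P(effort=high) =
        sum over c of P(effort=high | c) * P(c)."""
        return sum(p_complexity[c] * p_high_effort_given[c] for c in p_complexity)

    print(round(p_effort_high(), 2))  # 0.4
    ```

    Real models of this kind have dozens of factors and hand-elicited conditional probability tables, but inference still reduces to this kind of weighted summation over parent states.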

  • 20.
    Mendes, Emilia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Counsell, Steve
    Brunel University London, GBR.
    Baldassare, Maria Teresa
    Università degli Studi di Bari, ITA.
    Special issue on evaluation and assessment in software engineering2019In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 151, p. 224-225Article in journal (Refereed)
  • 21.
    Mendes, Emilia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Felizardo, Katia
    Universidade Tecnologica Federal do Parana, BRA.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kalinowski, M.
    Pontificia Universidade Catolica do Rio de Janeiro, BRA.
    Search Strategy to Update Systematic Literature Reviews in Software Engineering2019In: EUROMICRO Conference Proceedings, Institute of Electrical and Electronics Engineers Inc. , 2019, p. 355-362Conference paper (Refereed)
    Abstract [en]

    [Context] Systematic Literature Reviews (SLRs) have been adopted within the Software Engineering (SE) domain for more than a decade to provide meaningful summaries of evidence on several topics. Many of these SLRs are now outdated, and there are no standard proposals on how to update SLRs in SE. [Objective] The goal of this paper is to provide recommendations on how best to search for evidence when updating SLRs in SE. [Method] To achieve our goal, we compare and discuss outcomes from applying different search strategies to identify primary studies in a previously published SLR update on effort estimation. [Results] The use of a single iteration of forward snowballing with Google Scholar, employing the original SLR and its primary studies as a seed set, seems to be the most cost-effective way to search for new evidence when updating SLRs. [Conclusions] The recommendations can be used to support decisions on how to update SLRs in SE. © 2019 IEEE.
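    The recommended single-iteration forward snowballing can be sketched as a set operation over a citation index; the citation graph below is a made-up stand-in for the "cited by" data an index such as Google Scholar would return:

    ```python
    # Toy citation index: paper -> papers that cite it (names hypothetical).
    cited_by = {
        "SLR": ["A", "B"],
        "P1": ["B", "C"],
        "P2": [],
    }

    def forward_snowball(seed_set):
        """One iteration of forward snowballing: collect every paper that
        cites a member of the seed set (the original SLR plus its primary
        studies), deduplicated and excluding the seeds themselves."""
        found = set()
        for paper in seed_set:
            found.update(cited_by.get(paper, []))
        return sorted(found - set(seed_set))

    print(forward_snowball({"SLR", "P1", "P2"}))  # ['A', 'B', 'C']
    ```

    A full snowballing procedure would iterate until no new papers appear; the paper's finding is that for SLR updates a single iteration over this seed set already tends to be the most cost-effective choice.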

  • 22.
    Mendes, Emilia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Freitas, Vitor
    Oulu University, FIN.
    Perkusich, Mirko
    Universidade Federal de Campina Grande, BRA.
    Nunes, João
    Universidade Federal de Campina Grande, BRA.
    Ramos, Felipe
    Universidade Federal de Campina Grande, BRA.
    Costa, Alexandre
    Universidade Federal de Campina Grande, BRA.
    Saraiva, Renata
    Universidade Federal de Campina Grande, BRA.
    Freire, Arthur
    Universidade Federal de Campina Grande, BRA.
    Using Bayesian Network to Estimate the Value of Decisions within the Context of Value-Based Software Engineering: A Multiple Case Study2019In: International journal of software engineering and knowledge engineering, ISSN 0218-1940, Vol. 29, no 11-12, p. 1629-1671Article in journal (Refereed)
    Abstract [en]

    Companies must make a paradigm shift in which both short- and long-term value aspects are employed to guide their decision-making. Such a need is pressing in innovative industries, such as ICT, and is the core of Value-based Software Engineering (VBSE). Objective: This paper details three case studies where value estimation models using Bayesian Networks (BNs) were built and validated. These estimation models were based upon value-based decisions made by key stakeholders in the contexts of feature selection, test case execution prioritization, and user interface design selection. Methods: All three case studies were carried out according to a framework called VALUE - improVing decision-mAking reLating to software-intensive prodUcts and sErvices development. This framework includes a mixed-methods approach, comprising several steps to build and validate company-specific value estimation models. The building process uses as input data key stakeholders' decisions (gathered using the Value tool), plus additional input from key stakeholders. Results: Three value estimation BN models were built and validated, and the feedback received from the participating stakeholders was very positive. Conclusions: We detail the building and validation of three value estimation BN models, using a combination of data from past decision-making meetings and input from key stakeholders. © 2019 World Scientific Publishing Company.

  • 23.
    Mendes, Emilia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kalinowski, M.
    Martins, D.
    Ferrucci, F.
    Sarro, F.
    Cross- vs. Within-company cost estimation studies revisited: An extended systematic review2014Conference paper (Refereed)
    Abstract [en]

    The objective of this paper is to extend a previously conducted systematic literature review (SLR) that investigated under what circumstances individual organizations would be able to rely on cross-company based estimation models. [Method] We applied the same methodology used in the SLR we are extending herein (covering the period 2006-2013), based on primary studies that compared predictions from cross-company models with predictions from within-company models constructed from analysis of project data. [Results] We identified 11 additional papers; however, two of these did not present independent results and one had inconclusive findings. Two of the remaining eight papers presented both trials where cross-company predictions were not significantly different from within-company predictions and trials where they were significantly different. Four found that cross-company models gave prediction accuracy significantly different from within-company models (one of them in favor of cross-company models), while two found no significant difference. The main pattern when examining the study-related factors was that studies where cross-company predictions were significantly different from within-company predictions employed larger within-company data sets. [Conclusions] Overall, half of the analyzed evidence indicated that cross-company estimation models are not significantly worse than within-company estimation models. Moreover, there is some evidence that a larger sample size does not imply higher estimation accuracy, and that samples for building estimation models should be carefully selected/filtered based on quality control and project similarity aspects. These results need to be combined with the findings from the SLR we are extending to allow further investigation of this topic.

  • 24.
    Mendes, Emilia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Rodriguez, Pilar
    University of Oulu, FIN.
    Freitas, Vitor
    University of Oulu, FIN.
    Baker, Simon
    University of Cambridge, GBR.
    Atoui, Mohamed Amine
    University of Oulu, FIN.
    Correction to: Towards improving decision making and estimating the value of decisions in value-based software engineering: the VALUE framework (Software Quality Journal, (2018), 26, 2, (607-656), 10.1007/s11219-017-9360-z)2018In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 26, no 4, p. 1595-1596Article in journal (Other academic)
    Abstract [en]

    The original version of this article unfortunately contained a mistake in Figs. 1 and 21. © 2018, Springer Science+Business Media, LLC, part of Springer Nature.

  • 25.
    Mendes, Emilia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Rodriguez, Pilar
    University of Oulu, FIN.
    Freitas, Vitor
    University of Oulu, FIN.
    Baker, Simon
    University of Cambridge, GBR.
    Atoui, Mohamed Amine
    University of Oulu, FIN.
    Towards improving decision making and estimating the value of decisions in value-based software engineering: the VALUE framework2018In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 26, no 2, p. 607-656Article in journal (Refereed)
    Abstract [en]

    To sustain growth, maintain competitive advantage, and innovate, companies must make a paradigm shift in which both short- and long-term value aspects are employed to guide their decision-making. Such a need is clearly pressing in innovative industries, such as ICT, and is also the core of Value-based Software Engineering (VBSE). The goal of this paper is to detail a framework called VALUE—improving decision-making relating to software-intensive products and services development—and to show its application in practice in a large ICT company in Finland. The VALUE framework includes a mixed-methods approach, as follows: to elicit key stakeholders’ tacit knowledge regarding factors used during a decision-making process, either transcripts from interviews with key stakeholders are analysed and validated in focus group meetings, or focus-group meeting(s) are applied directly. These value factors are later used as input to a Web-based tool (Value tool) employed to support decision making. This tool was co-created with four industrial partners in this research via a design science approach that included several case studies and focus-group meetings. Later, data on key stakeholders’ decisions gathered using the Value tool, plus additional input from key stakeholders, are used, in combination with the Expert-based Knowledge Engineering of Bayesian Network (EKEBN) process coupled with the weighted sum algorithm (WSA) method, to build and validate a company-specific value estimation model. The application of our proposed framework to a real case, as part of an ongoing collaboration with a large software company (company A), is presented herein. Further, we also provide a detailed example, partially using real data on decisions, of a value estimation Bayesian network (BN) model for company A. This paper presents some empirical results from applying the VALUE framework to a large ICT company; those relate to eliciting key stakeholders’ tacit knowledge, which is later used as input to a pilot study where these stakeholders employ the Value tool to select features for one of their company’s chief products. The data on decisions obtained from this pilot study are later applied to a detailed example of building a value estimation BN model for company A. In summary, we detail the VALUE framework, which helps companies improve their value-based decisions and goes a step further to also estimate the overall value of each decision. © 2017 The Author(s)
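    A minimal sketch of the weighted-sum aggregation underlying the WSA method, with invented value factors, weights, and scores (the actual EKEBN/WSA procedure builds full conditional probability tables for a Bayesian network and is considerably more involved):

    ```python
    def weighted_sum(scores, weights):
        """Aggregate per-factor scores (0..1) into a single value estimate
        via a normalised weighted sum; the weights express each factor's
        relative importance as elicited from key stakeholders."""
        total_w = sum(weights.values())
        return sum(scores[f] * w for f, w in weights.items()) / total_w

    # Hypothetical value factors for one candidate feature
    weights = {"customer_value": 3.0, "cost": 1.0, "risk": 2.0}
    scores = {"customer_value": 0.9, "cost": 0.5, "risk": 0.4}
    print(round(weighted_sum(scores, weights), 3))  # 0.667
    ```

    Ranking candidate decisions by such aggregated scores is what lets the framework go beyond supporting a decision to estimating its overall value.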

  • 26.
    Mendes, Emilia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Vaz, Veronica Taquete
    UFRJ Fed Univ Rio De Janeiro, POB 68511, Rio De Janeiro, Brazil.
    Muradas, Fernando
    Naval Syst Anal Ctr, BR-20091000 Rio De Janeiro, Brazil.
    An Expert-Based Requirements Effort Estimation Model Using Bayesian Networks2016In: SOFTWARE QUALITY: THE FUTURE OF SYSTEMS- AND SOFTWARE DEVELOPMENT, 2016, p. 79-93Conference paper (Refereed)
    Abstract [en]

    [Motivation]: There are numerous software companies worldwide that split the software development life cycle into at least two separate projects: an initial project where a requirements specification document is prepared, and a follow-up project where the previously prepared requirements document is used as input to developing a software application. These follow-up projects can also be delegated to a third party, as occurs in numerous global software development scenarios. Effort estimation is one of the cornerstones of any type of project management; however, a systematic literature review on requirements effort estimation found hardly any empirical study investigating this topic. [Objective]: The goal of this paper is to describe an industrial case study where an expert-based requirements effort estimation model was built and validated for the Brazilian Navy. [Method]: A knowledge engineering of Bayesian networks process was employed to build the requirements effort estimation model. [Results]: The expert-based requirements effort estimation model was built with the participation of seven software requirements analysts and project managers, leading to 28 prediction factors and 30+ relationships. The model was validated based on real data from 11 large requirements specification projects, and was incorporated into the Brazilian Navy's quality assurance process to be used by their software requirements analysts and managers. [Conclusion]: This paper details a case study where an expert-based requirements effort estimation model, based solely on knowledge from requirements analysts and project managers, was successfully built to help the Brazilian Navy estimate the requirements effort for their projects.

  • 27.
    Mendes, Emilia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Viana, Davi
    Univ Fed Maranhao, BRA.
    Vishnubhotla, Sai Datta
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Realising Individual and Team Capability in Agile Software Development: A Qualitative Investigation2018In: Proceedings - 44th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2018 / [ed] Bures, T Angelis, L, IEEE , 2018, p. 183-190Conference paper (Refereed)
    Abstract [en]

    Several studies have shown that both individual and team capability can affect software development performance and project success; a deeper understanding of such phenomena is crucial within the context of Agile Software Development (ASD), given that its workforce is a key source of agility. This paper contributes towards such understanding by means of a case study that uses data from 14 interviews carried out at a large telecommunications company, within the context of a mobile money transfer system developed in Sweden and India, to identify individual and team capability measures used to form productive teams. Our results identified 10 individual and five team capability measures, of which, respectively, five and four had not been previously characterised by a systematic literature review (SLR) on this same topic. That review aggregated evidence for a total of 133 individual and 28 team capability measures. Further work entails extending our findings by interviewing practitioners in other software/software-intensive industries practicing ASD.

  • 28.
    Mendes, Emilia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Winkler, Dietmar
    Technische Universitat Wien, AUT.
    Special issue on “software quality in software-intensive systems”2018In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 26, no 2, p. 657-660Article in journal (Refereed)
  • 29.
    Mendes, Emilia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Felizardo, Katia
    Federal Technological University of Paraná, BRA.
    Kalinowski, Marcos
    Pontifical Catholic University of Rio de Janeiro (PUC-Rio), BRA.
    When to update systematic literature reviews in software engineering2020In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 167, article id 110607Article in journal (Refereed)
    Abstract [en]

    [Context] Systematic Literature Reviews (SLRs) have been adopted by the Software Engineering (SE) community for approximately 15 years to provide meaningful summaries of evidence on several topics. Many of these SLRs are now potentially outdated, and there are no systematic proposals on when to update SLRs in SE. [Objective] The goal of this paper is to provide recommendations on when to update SLRs in SE. [Method] We evaluated, using a three-step approach, a third-party decision framework (3PDF) employed in other fields, to decide whether SLRs need updating. First, we conducted a literature review of SLR updates in SE and contacted the authors to obtain their feedback relating to the usefulness of the 3PDF within the context of SLR updates in SE. Second, we used these authors’ feedback to see whether the framework needed any adaptation; none was suggested. Third, we applied the 3PDF to the SLR updates identified in our literature review. [Results] The 3PDF showed that 14 of the 20 SLRs did not need updating. This supports the use of a decision support mechanism (such as the 3PDF) to help the SE community decide when to update SLRs. [Conclusions] We put forward that the 3PDF should be adopted by the SE community to keep relevant evidence up to date and to avoid wasting effort with unnecessary updates. © 2020

  • 30.
    Mendes, Fabiana
    et al.
    University of Oulu, FIN.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Salleh, Norsaremah
    IIUM, P.O., MYS.
    Oivo, Markku
    University of Oulu, FIN.
    Insights on the relationship between decision-making style and personality in software engineering2021In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 136, article id 106586Article in journal (Refereed)
    Abstract [en]

    Context: Software development involves many activities, and decision making is an essential one. Various factors can impact a decision-making process, and by understanding such factors, one can improve the process. Since people are the ones making decisions, some human-related aspects are amongst those influencing factors. One such aspect is the decision maker's personality. Objective: This research investigates the relationship between decision-making style and personality within the context of software project development. Method: We conducted a survey in a population of Brazilian software engineers to gather data on their personality and decision-making style. Results: Data from 63 participants was gathered and resulted in the identification of seven statistically significant correlations between decision-making style and personality (personality factors and facets). Furthermore, we built a regression model in which decision-making style (DMS) was the response variable and the personality factors were the independent variables. The backward elimination procedure selected only agreeableness, which explained 4.2% of the variation in DMS. The model's accuracy was evaluated and deemed acceptable. Regarding the moderation effect of demographic variables (age, educational level, experience, and role) on the relationship between DMS and agreeableness, the analysis showed that only the software engineers' role has such an effect. Conclusion: This paper contributes toward understanding the relationship between DMS and personality. Results show that the personality variable agreeableness can explain the variation in decision-making style. Furthermore, someone's role in a software development project can impact the strength of the relationship between DMS and agreeableness. © 2021 Elsevier B.V.
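    The backward elimination procedure mentioned above can be sketched generically; the p-values below are illustrative only (not the study's data), chosen so that agreeableness is the lone surviving predictor, mirroring the reported result:

    ```python
    def backward_eliminate(predictors, p_value, alpha=0.05):
        """Backward elimination: repeatedly drop the predictor with the
        highest p-value until every remaining predictor is significant
        at the alpha level."""
        kept = list(predictors)
        while kept:
            worst = max(kept, key=lambda v: p_value(v, kept))
            if p_value(worst, kept) <= alpha:
                break  # all remaining predictors are significant
            kept.remove(worst)
        return kept

    # Illustrative (made-up) p-values for the Big Five personality factors
    fake_p = {"agreeableness": 0.01, "openness": 0.40, "neuroticism": 0.22,
              "extraversion": 0.61, "conscientiousness": 0.35}
    print(backward_eliminate(fake_p, lambda v, kept: fake_p[v]))
    # ['agreeableness']
    ```

    In a real analysis the `p_value` callback would refit the regression on the current predictor set at each step rather than look up fixed values.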

  • 31. Minku, Leandro
    et al.
    Sarro, Federica
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ferrucci, Filomena
    How to Make Best Use of Cross-Company Data for Web Effort Estimation?2015In: 2015 ACM/IEEE INTERNATIONAL SYMPOSIUM ON EMPIRICAL SOFTWARE ENGINEERING AND MEASUREMENT (ESEM), 2015, p. 172-181Conference paper (Refereed)
    Abstract [en]

    [Context]: The numerous challenges that can hinder software companies from gathering their own data have motivated, over the past 15 years, research on the use of cross-company (CC) datasets for software effort prediction. Part of this research has focused on Web effort prediction, given the large increase worldwide in the development of Web applications. Some of these studies indicate that it may be possible to achieve better performance using CC models if some strategy is adopted to make the CC data more similar to the within-company (WC) data. [Goal]: This study investigates the use of a recently proposed approach called Dycom to assess to what extent Web effort predictions obtained using CC datasets are effective in relation to the predictions obtained using WC data, when explicitly mapping the CC models to the WC context. [Method]: Data on 125 Web projects from eight different companies that are part of the Tukutuku database were used to build prediction models. We benchmarked these models against baseline models (mean and median effort) and a WC base learner that does not benefit from the mapping. We also compared Dycom against a competitive CC approach from the literature (NN-filtering), and report a company-by-company analysis. [Results]: Dycom usually achieved performance similar to or better than a WC model while using only half of the WC training data. These results also improve upon previous studies that investigated different strategies to adapt CC models to WC data for Web effort estimation. [Conclusions]: We conclude that the use of Dycom for Web effort prediction is quite promising and, in general, supports previous results obtained when applying Dycom to conventional software datasets.
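    Setting Dycom's specifics aside, the general idea of mapping a CC model onto a WC context can be illustrated by learning a simple multiplicative adjustment from a few completed WC projects; this is a generic sketch with invented numbers, not Dycom's actual algorithm:

    ```python
    def fit_adjustment(cc_predict, wc_projects):
        """Learn a single multiplicative factor that maps CC-model
        predictions onto the WC context, using a handful of completed
        WC projects (generic illustration only)."""
        ratios = [actual / cc_predict(x) for x, actual in wc_projects]
        return sum(ratios) / len(ratios)

    # Invented CC model and WC calibration data: (size, actual WC effort)
    cc_model = lambda size: 10.0 * size
    wc_history = [(5, 60.0), (8, 96.0), (10, 120.0)]

    factor = fit_adjustment(cc_model, wc_history)
    adjusted = lambda size: factor * cc_model(size)
    print(adjusted(7))  # 84.0 under these invented numbers
    ```

    The appeal of this family of approaches is that only a small amount of WC data is needed for calibration, which matches the abstract's observation that Dycom performed well with half of the WC training data.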

  • 32.
    Molleri, Jefferson
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Survey Guidelines in Software Engineering: An Annotated Review2016In: ESEM'16: PROCEEDINGS OF THE 10TH ACM/IEEE INTERNATIONAL SYMPOSIUM ON EMPIRICAL SOFTWARE ENGINEERING AND MEASUREMENT, ASSOC COMPUTING MACHINERY , 2016Conference paper (Refereed)
    Abstract [en]

    Background: The survey is a research method aiming to gather data from a large population of interest. Despite being extensively used in software engineering, survey-based research faces several challenges, such as selecting a representative population sample and designing the data collection instruments. Objective: This article aims to summarize the existing guidelines, supporting instruments, and recommendations on how to conduct and evaluate survey-based research. Methods: A systematic search using manual search and snowballing techniques was used to identify primary studies supporting survey research in software engineering. We used an annotated review to present the findings, describing the references of interest in the research topic. Results: The summary provides a description of 15 available articles addressing the survey methodology, based upon which we derived a set of recommendations on how to conduct survey research and discuss their impact on the community. Conclusion: Survey-based research in software engineering has its particular challenges, as illustrated by several articles in this review. The annotated review can contribute by raising awareness of such challenges and presenting recommendations to overcome them.

  • 33.
    Molléri, Jefferson Seide
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Reasoning about Research Quality Alignment in Software EngineeringManuscript (preprint) (Other academic)
    Abstract [en]

    Context: Research quality is intended to assess the design and reporting of studies. It comprises a series of concepts such as methodological rigor, practical relevance, and conformance to ethical standards. Depending on the perspective, different degrees of importance are assigned to the conceptual dimensions of research quality.

    Objective: We aim to better understand what constitutes research quality from the perspective of the empirical software engineering community. In particular, we intend to assess the level of alignment between researchers with regard to a conceptual model of research quality.

    Method: We conducted a mixed-methods study comprising an internal case study and a complementary focus group. We carried out a hierarchical voting prioritization based on the conceptual model to collect relative values for importance. In the focus group, we also moderated discussions with experts to address potential misalignment.

    Results: We provide levels of alignment with regard to the importance of quality dimensions in the view of the participants. Moreover, the conceptual model fairly expresses the quality of research but has limitations with regard to the structure and description of its components.

    Conclusion: Based on the results, we revised the conceptual model and provided an updated version adjusted to the context of empirical software engineering research. We also discussed how to assess quality alignment in research using our approach, and how to use the revised model of quality to characterize an assessment instrument.

  • 34.
    Molléri, Jefferson Seide
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Aligning the Views of Research Quality in Empirical Software Engineering. Manuscript (preprint) (Other academic)
    Abstract [en]

    Context: Research quality is intended to assess the design and reporting of studies. It comprises a series of concepts such as methodological rigor, practical relevance, and conformance to ethical standards. Depending on the perspective, different views of importance are given to the conceptual dimensions of research quality.

    Objective: We intend to assess the level of alignment between researchers with regard to a conceptual model of research quality. This includes aligning the definition of research quality and reasoning on the relative importance of quality characteristics.

    Method: We conducted a mixed-methods study comprising an internal case study and a complementary focus group. We carried out a hierarchical voting prioritization based on the conceptual model to collect relative values for importance. In the focus group, we also moderated discussions with experts to address potential misalignment.

    Results: The alignment at the research group level was higher compared to that at the community level. Moreover, the interdisciplinary conceptual quality model was seen to express the quality of research fairly, but presented limitations regarding its structure and the description of its components, which resulted in an updated model.

    Conclusion: The interdisciplinary model used was suitable for the software engineering context. The process used for reflecting on the alignment of quality with respect to definitions and priorities worked well.

  • 35.
    Molléri, Jefferson Seide
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Determining a core view of research quality in empirical software engineering (2023). In: Computer Standards & Interfaces, ISSN 0920-5489, E-ISSN 1872-7018, Vol. 84, article id 103688. Article in journal (Refereed)
    Abstract [en]

    Context: Research quality is intended to appraise the design and reporting of studies. It comprises a set of standards such as methodological rigor, practical relevance, and conformance to ethical standards. Depending on the perspective, different views of importance are given to the standards for research quality. Objective: To investigate the suitability of a conceptual model of research quality to Software Engineering (SE), from the perspective of researchers engaged in Empirical Software Engineering (ESE) research, in order to understand the core value of research quality. Method: We conducted a mixed-methods approach with two distinct group perspectives: (i) a research group; and (ii) the empirical SE research community. Our data collection approach comprised a questionnaire survey and a complementary focus group. We carried out a hierarchical voting prioritization to collect relative values for importance of standards for research quality. Results: In the context of this research, ‘internally valid’, ‘relevant research idea’, and ‘applicable results’ are perceived as the core standards for research quality in empirical SE. The alignment at the research group level was higher compared to that at the community level. Conclusion: The conceptual model was seen to express fairly the standards for research quality in the SE context. It presented limitations regarding its structure and components’ description, which resulted in an updated model. © 2022

  • 36.
    Molléri, Jefferson Seide
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An Empirically Evaluated Checklist for Surveys in Software Engineering (2020). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, article id 106240. Article in journal (Refereed)
    Abstract [en]

    Context: Over the past decade Software Engineering research has seen a steady increase in survey-based studies, and there are several guidelines providing support for those willing to carry out surveys. The need for auditing survey research has been raised in the literature. Checklists have been used to assess different types of empirical studies, such as experiments and case studies.

    Objective: This paper proposes a checklist to support the design and assessment of survey-based research in software engineering grounded in existing guidelines for survey research. We further evaluated the checklist in the research practice context.

    Method: To construct the checklist, we systematically aggregated knowledge from 12 methodological studies supporting survey-based research in software engineering. We identified the key stages of the survey process and its recommended practices through thematic analysis and vote counting. To improve our initially designed checklist we evaluated it using a mixed evaluation approach involving experienced researchers.

    Results: The evaluation provided insights regarding the limitations of the checklist in relation to its understandability and objectivity. In particular, 19 of the 38 checklist items were improved according to the feedback received from the evaluation. Finally, a discussion on how to use the checklist and its implications for research practice is also provided.

    Conclusion: The proposed checklist is an instrument suitable for auditing survey reports as well as a support tool to guide ongoing research with regard to the survey design process.

  • 37.
    Molléri, Jefferson Seide
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    CERSE - Catalog for empirical research in software engineering: A systematic mapping study (2019). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 105, p. 117-149. Article in journal (Refereed)
    Abstract [en]

    Context: Empirical research in software engineering contributes towards developing scientific knowledge in this field, which in turn is relevant to inform decision-making in industry. A number of empirical studies have been carried out to date in software engineering, and the need for guidelines for conducting and evaluating such research has been stressed. Objective: The main goal of this mapping study is to identify and summarize the body of knowledge on research guidelines, assessment instruments and knowledge organization systems on how to conduct and evaluate empirical research in software engineering. Method: A systematic mapping study employing manual search and snowballing techniques was carried out to identify the suitable papers. To build up the catalog, we extracted and categorized information provided by the identified papers. Results: The mapping study comprises a list of 341 methodological papers, classified according to research methods, research phases covered, and type of instrument provided. From this, we derived a brief explanatory review of the instruments provided for each of the research methods. Conclusion: We provide an aggregated body of knowledge on the state of the art relating to guidelines, assessment instruments and knowledge organization systems for carrying out empirical software engineering research; an exemplary usage scenario that can be used to guide those carrying out such studies is also provided. Finally, we discuss the catalog's implications for research practice and the need for further research. © 2018 Elsevier B.V.

  • 38.
    Molléri, Jefferson Seide
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards understanding the relation between citations and research quality in software engineering studies (2018). In: Scientometrics, ISSN 0138-9130, E-ISSN 1588-2861, Vol. 117, no 3, p. 1453-1487. Article in journal (Refereed)
    Abstract [en]

    The importance of achieving high quality in research practice has been highlighted in different disciplines. At the same time, citations are utilized to measure the impact of academic researchers and institutions. One open question is whether the quality in the reporting of research is related to scientific impact, which would be desired. In this exploratory study we aim to: (1) investigate how consistently a scoring rubric for rigor and relevance has been used to assess the research quality of software engineering studies; (2) explore the relationship between rigor, relevance and citation count. Through backward snowball sampling we identified 718 primary studies assessed through the scoring rubric. We utilized cluster analysis and conditional inference trees to explore the relationship between quality in the reporting of research (represented by rigor and relevance) and scientometrics (represented by normalized citations). The results show that only rigor is related to studies' normalized citations. Besides that, confounding factors are likely to influence the number of citations. The results also suggest that the scoring rubric is not applied the same way by all studies, and one of the likely reasons is that it was found to be too abstract and in need of further refinement. Our findings could be used as a basis to further understand the relation between the quality in the reporting of research and scientific impact, and to foster new discussions on how to fairly acknowledge studies for performing well with respect to the emphasized research quality. Furthermore, we highlight the need to further improve the scoring rubric. © 2018, The Author(s).
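    The study above relates rigor and relevance to normalized citation counts. As a hedged illustration of citation normalization (the paper's exact scheme is not stated in the abstract; citations per year since publication is one common choice, assumed here):

    ```python
    def normalized_citations(citation_count, publication_year, current_year):
        """Citations per year since publication (an assumed normalization;
        the study's exact scheme is not given in the abstract)."""
        years_since_publication = max(current_year - publication_year, 1)
        return citation_count / years_since_publication

    # Hypothetical paper: 42 citations, published 2010, counted in 2017
    n = normalized_citations(42, 2010, 2017)  # 6.0 citations per year
    ```

    Normalizing by age removes the head start older papers have when comparing raw citation counts.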

  • 39.
    Moraes, Ana Louiza Dallora
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Eivazzadeh, Shahryar
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Sanmartin Berglund, Johan
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Anderberg, Peter
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Prognosis of Dementia Employing Machine Learning and Microsimulation Techniques: A Systematic Literature Review (2016). In: Procedia Computer Science / [ed] Martinho R., Rijo R., Cruz-Cunha M.M., Bjorn-Andersen N., Quintela Varajao J.E., Elsevier, 2016, Vol. 100, p. 480-488. Conference paper (Refereed)
    Abstract [en]

    OBJECTIVE: The objective of this paper is to investigate the goals and variables employed in machine learning and microsimulation studies for the prognosis of dementia. METHOD: According to preset protocols, the PubMed, Scopus and Web of Science databases were searched to find studies that matched the defined inclusion/exclusion criteria, and their references were then checked for new studies. A quality checklist was used to assess the selected studies, and low-quality ones were removed. The remaining (included) studies had their data extracted and summarized. RESULTS: The summary of the data of the 37 included studies showed that the most common goal was the prediction of the conversion from mild cognitive impairment to Alzheimer's Disease for the studies that used machine learning, and cost estimation for the microsimulation ones. Regarding the variables, neuroimaging was the most frequently used. CONCLUSIONS: The systematic literature review showed clear trends in prognosis of dementia research with respect to machine learning techniques and microsimulation.

  • 40.
    Moraes, Ana Luiza Dallora
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Eivazzadeh, Shahryar
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Berglund, Johan
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Anderberg, Peter
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Machine learning and microsimulation techniques on the prognosis of dementia: A systematic literature review (2017). In: PLOS ONE, E-ISSN 1932-6203, Vol. 12, no 6, article id e0179804. Article in journal (Refereed)
    Abstract [en]

    Background: Dementia is a complex disorder characterized by poor outcomes for the patients and high costs of care. After decades of research little is known about its mechanisms. Having prognostic estimates about dementia can help researchers, patients and public entities in dealing with this disorder. Thus, health data, machine learning and microsimulation techniques could be employed in developing prognostic estimates for dementia. Objective: The goal of this paper is to present evidence on the state of the art of studies investigating the prognosis of dementia using machine learning and microsimulation techniques. Method: To achieve our goal we carried out a systematic literature review, in which three large databases, PubMed, Scopus and Web of Science, were searched to select studies that employed machine learning or microsimulation techniques for the prognosis of dementia. A single round of backward snowballing was done to identify further studies. A quality checklist was also employed to assess the quality of the evidence presented by the selected studies, and low-quality studies were removed. Finally, data from the final set of studies were extracted into summary tables. Results: In total 37 papers were included. The data summary showed that current research is focused on investigating patients with mild cognitive impairment who will evolve to Alzheimer's disease, using machine learning techniques. Microsimulation studies were concerned with cost estimation and had a population-level focus. Neuroimaging was the most commonly used variable. Conclusions: Prediction of conversion from MCI to AD is the dominant theme in the selected studies. Most studies used ML techniques on neuroimaging data. Only a few data sources have been used by most studies, the ADNI database being the most common. Only two studies have investigated the prediction of epidemiological aspects of dementia using either ML or MS techniques. Finally, care should be taken when interpreting the reported accuracy of ML techniques, given the studies' different contexts. © 2017 Dallora et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

  • 41.
    Moraes, Ana Luiza Dallora
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Minku, Leandro
    University of Birmingham, GBR .
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Rennemark, Mikael
    Linnaeus University, SWE.
    Anderberg, Peter
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Sanmartin Berglund, Johan
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Multifactorial 10-year prior diagnosis prediction model of dementia (2020). In: International Journal of Environmental Research and Public Health, ISSN 1661-7827, E-ISSN 1660-4601, Vol. 17, no 18, p. 1-18, article id 6674. Article in journal (Refereed)
    Abstract [en]

    Dementia is a neurodegenerative disorder that affects the older adult population. To date, no cure or treatment to change its course is available. Since changes in the brains of affected individuals can be evidenced as early as 10 years before the onset of symptoms, prognosis research should consider this time frame. This study investigates a broad decision-tree multifactorial approach for the prediction of dementia, considering 75 variables covering demographics, social factors, lifestyle, medical history, biochemical tests, physical examination, psychological assessment and health instruments. Previous work on dementia prognosis with machine learning did not consider a broad range of factors over such a large time frame. The proposed approach investigated predictive factors for dementia and possible prognostic subgroups. This study used data from the ongoing multipurpose Swedish National Study on Aging and Care, consisting of 726 subjects (91 of whom received a dementia diagnosis within 10 years). The proposed approach achieved an AUC of 0.745 and a recall of 0.722 for the 10-year prognosis of dementia. Most of the variables selected by the tree are related to modifiable risk factors; physical strength was important across all ages. There was also a lack of variables related to the health instruments routinely used for dementia diagnosis. © 2020 by the authors. Licensee MDPI, Basel, Switzerland.
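    The AUC and recall reported above can be computed from a binary classifier's outputs; a minimal, self-contained sketch (the labels and scores below are invented for illustration, not the study's data):

    ```python
    def recall(y_true, y_pred):
        """Sensitivity: fraction of actual positives correctly predicted."""
        true_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        positives = sum(y_true)
        return true_positives / positives if positives else 0.0

    def roc_auc(y_true, scores):
        """Rank-based AUC: probability that a randomly chosen positive case
        receives a higher score than a randomly chosen negative case
        (ties count as half)."""
        pos = [s for t, s in zip(y_true, scores) if t == 1]
        neg = [s for t, s in zip(y_true, scores) if t == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Invented labels (1 = dementia diagnosis within 10 years) and model scores
    y_true = [1, 1, 0, 0]
    auc = roc_auc(y_true, [0.9, 0.4, 0.6, 0.2])
    rec = recall(y_true, [1, 0, 1, 0])
    ```

    The rank-based formulation makes explicit why AUC is threshold-free, while recall depends on the chosen decision threshold.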

  • 42.
    Mourao, Erica
    et al.
    Fluminense Fed Univ, BRA.
    Kalinowski, Marcos
    Pontifical Catholic Univ Rio de Janeiro PUC Rio, BRA.
    Murta, Leonardo
    Fluminense Fed Univ, BRA.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Investigating the Use of a Hybrid Search Strategy for Systematic Reviews (2017). In: 11th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM 2017), IEEE, 2017, p. 193-198. Conference paper (Refereed)
    Abstract [en]

    [Background] Systematic Literature Reviews (SLRs) are one of the important pillars when employing an evidence-based paradigm in Software Engineering. To date most SLRs have been conducted using a search strategy involving several digital libraries. However, significant issues have been reported for digital libraries, and applying such a search strategy requires substantial effort. On the other hand, snowballing has recently arisen as a potentially more efficient alternative or complementary solution. Nevertheless, it requires a relevant seed set of papers. [Aims] This paper proposes and evaluates a hybrid search strategy combining searching in a specific digital library (Scopus) with backward and forward snowballing. [Method] The proposed hybrid strategy was applied to two previously published SLRs that adopted database searches. We investigate whether it is able to retrieve the same included papers with lower effort in terms of the number of analysed papers. The two selected SLRs relate, respectively, to elicitation techniques (not confined to Software Engineering (SE)) and to a specific SE topic on cost estimation. [Results] Our results provide preliminary support for the proposed hybrid search strategy as being suitable for SLRs investigating a specific research topic within the SE domain. Furthermore, it helps overcome existing issues with using digital libraries in SE. [Conclusions] The hybrid search strategy provides competitive results, similar to using several digital libraries. However, further investigation is needed to evaluate the hybrid search strategy.
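    The snowballing half of such a hybrid strategy can be sketched as a traversal of the citation graph starting from the seed set; the graph, paper identifiers, and relevance check below are hypothetical, for illustration only:

    ```python
    from collections import deque

    def snowball(seeds, cites, cited_by, is_relevant):
        """Iterative backward (references) and forward (citations) snowballing:
        expand from the seed papers, keeping papers judged relevant and
        using them as new expansion points."""
        included = set(seeds)
        queue = deque(seeds)
        seen = set(seeds)
        while queue:
            paper = queue.popleft()
            # backward: papers this one cites; forward: papers citing it
            for neighbor in cites.get(paper, []) + cited_by.get(paper, []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    if is_relevant(neighbor):
                        included.add(neighbor)
                        queue.append(neighbor)
        return included

    # Hypothetical toy citation graph and relevance judgement
    cites = {"seed": ["a", "b"], "a": ["c"]}
    cited_by = {"seed": ["d"], "c": []}
    result = snowball(["seed"], cites, cited_by, lambda p: p != "b")
    ```

    In the hybrid strategy, the seed set would come from the database (e.g. Scopus) search rather than being hand-picked.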

  • 43.
    Mourão, Erica
    et al.
    Fluminense Federal University, BRA.
    Pimentel, João Felipe N.
    Fluminense Federal University, BRA.
    Murta, Leonardo Gresta Paulino
    Fluminense Federal University, BRA.
    Kalinowski, Marcos
    Pontifical Catholic University of Rio de Janeiro (PUC-Rio), BRA.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    On the performance of hybrid search strategies for systematic literature reviews in software engineering (2020). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 123, article id 106294. Article in journal (Refereed)
    Abstract [en]

    Context: When conducting a Systematic Literature Review (SLR), researchers usually face the challenge of designing a search strategy that appropriately balances result quality and review effort. Using digital library (or database) searches or snowballing alone may not be enough to achieve high-quality results. On the other hand, using both digital library searches and snowballing together may increase the overall review effort. Objective: The goal of this research is to propose and evaluate hybrid search strategies that selectively combine database searches with snowballing. Method: We propose four hybrid search strategies combining database searches in digital libraries with iterative, parallel, or sequential backward and forward snowballing. We simulated the strategies over three existing SLRs in SE that adopted both database searches and snowballing. We compared the outcomes of digital library searches, snowballing, and hybrid strategies using precision, recall, and F-measure to investigate the performance of each strategy. Results: Our results show that, for the analyzed SLRs, combining database searches from the Scopus digital library with parallel or sequential snowballing achieved the most appropriate balance of precision and recall. Conclusion: We put forward that, depending on the goals of the SLR and the available resources, a hybrid search strategy involving a representative digital library and parallel or sequential snowballing tends to be an appropriate alternative when searching for evidence in SLRs. © 2020 Elsevier B.V.
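    The three metrics used to compare the strategies are standard retrieval measures over sets of paper identifiers; a minimal sketch (the retrieved and gold sets below are invented for the example):

    ```python
    def precision_recall_f1(retrieved, relevant):
        """Retrieval metrics: retrieved = papers a search strategy found,
        relevant = papers actually included in the SLR."""
        retrieved, relevant = set(retrieved), set(relevant)
        true_positives = len(retrieved & relevant)
        precision = true_positives / len(retrieved) if retrieved else 0.0
        recall = true_positives / len(relevant) if relevant else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Hypothetical strategy: retrieves papers 0-7; the SLR's gold set
    # contains papers 0-5 plus four papers the strategy missed.
    p, r, f = precision_recall_f1(range(8), list(range(6)) + list(range(100, 104)))
    ```

    High recall matters most for SLRs (missing an included paper biases the review), which is why the strategies are judged on the balance rather than precision alone.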

  • 44.
    Oliveira, Edson
    et al.
    UFAM Univ Fed Amazonas, BRA.
    Conte, Tayana
    UFAM Univ Fed Amazonas, BRA.
    Cristo, Marco
    UFAM Univ Fed Amazonas, BRA.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Software Project Managers' Perceptions of Productivity Factors: Findings from a Qualitative Study (2016). In: ESEM'16: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, Association for Computing Machinery, 2016. Conference paper (Refereed)
    Abstract [en]

    Context: Developers' productivity plays an important role in software development organizations; however, in many cases the management of such human capital is mainly based on how project managers perceive productivity. Therefore, it is important to investigate what these perceptions are in practice. Goal: This study's main goal is to understand project managers' perceptions regarding developers' productivity. Method: We employed a qualitative research methodology using semi-structured interviews for data collection. We interviewed 12 managers from three software development organizations in the city of Manaus (Brazil). Results: We identified that the managers' perceptions about developers' productivity are influenced by four different factors: (1) tasks delivered on time, (2) produced artifacts that do not need rework, (3) products that meet stakeholders' expectations, and (4) personal behavior such as focus and proactivity. Conclusions: This qualitative study shows a perception of developers' productivity different from that presented in other research papers, and suggests that human factors play an important role in managers' perceptions about productivity. Future work will investigate how these perceptions concretely influence developers' productivity, and how they relate to the existing developers' productivity factors in the literature.

  • 45. Riaz, Mehwish
    et al.
    Mendes, Emilia
    Blekinge Institute of Technology, School of Computing.
    Tempero, Ewan
    Sulayman, Muhammad
    Using CBR and CART to predict maintainability of relational database-driven software applications (2013). Conference paper (Refereed)
    Abstract [en]

    Relational database-driven software applications have gained significant importance in modern software development. Given that software maintainability is an important quality attribute, predicting these applications' maintainability can provide various benefits to software organizations, such as adopting a defensive design and more informed resource management. Aims: The aim of this paper is to present the results from employing two well-known prediction techniques to estimate the maintainability of relational database-driven applications. Method: Case-based reasoning (CBR) and classification and regression trees (CART) were applied to data gathered on 56 software projects from software companies. The projects concerned development and/or maintenance of relational database-driven applications. Unlike previous studies, all variables (28 independent and 1 dependent) were measured on a 5-point bi-polar scale. Results: Results showed that CBR performed slightly better (at 76.8% correct predictions) in terms of prediction accuracy when compared to CART (67.8%). In addition, the two important predictors identified were documentation quality and understandability of the applications. Conclusions: The results show that CBR can be used by software companies to formalize and improve their process of maintainability prediction. Future work involves gathering more data and also employing other prediction techniques.
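    The CBR step described above can be sketched as a 1-nearest-neighbour analogy over the 5-point scale ratings; the projects, features, and labels below are hypothetical, not the study's data:

    ```python
    def cbr_predict(cases, query):
        """Case-based reasoning, 1-nearest-neighbour analogy:
        return the label of the most similar past case.
        Each case is (features, label); features are 5-point scale ratings."""
        def distance(a, b):
            # Manhattan distance suits ordinal scale data
            return sum(abs(x - y) for x, y in zip(a, b))
        features, label = min(cases, key=lambda case: distance(case[0], query))
        return label

    # Hypothetical past projects:
    # (documentation quality, understandability) -> maintainability
    cases = [((5, 4), "high"), ((2, 1), "low"), ((4, 4), "high")]
    prediction = cbr_predict(cases, (1, 2))
    ```

    In practice CBR implementations often retrieve the k closest cases and adapt or average their outcomes rather than copying a single neighbour.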

  • 46. Riaz, Mehwish
    et al.
    Tempero, Ewan
    Sulayman, Muhammad
    Mendes, Emilia
    Blekinge Institute of Technology, School of Computing.
    Maintainability Predictors For Relational Database-Driven Software Applications: Extended Results From A Survey (2013). In: International Journal of Software Engineering and Knowledge Engineering, ISSN 0218-1940, Vol. 23, no 4, p. 507-522. Article in journal (Refereed)
    Abstract [en]

    Software maintainability is a very important quality attribute. Its prediction for relational database-driven software applications can help organizations improve the maintainability of these applications. The research presented herein adopts a survey-based approach where a survey was conducted with 40 software professionals aimed at identifying and ranking the important maintainability predictors for relational database-driven software applications. The survey results were analyzed using frequency analysis. The results suggest that maintainability prediction for relational database-driven applications is not the same as that of traditional software applications in terms of the importance of the predictors used for this purpose. The results also provide a baseline for creating maintainability prediction models for relational database-driven software applications.

  • 47.
    Rodriguez, Pilar
    et al.
    Oulun Yliopisto, FIN.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Turhan, Buran
    Oulun Yliopisto, FIN.
    Key Stakeholders' Value Propositions for Feature Selection in Software-intensive Products: An Industrial Case Study (2020). In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 46, no 12, p. 1340-1363. Article in journal (Refereed)
    Abstract [en]

    Numerous software companies are adopting value-based decision making. However, what does value mean for key stakeholders making decisions? How do different stakeholder groups understand value? Without an explicit understanding of what value means, decisions are subject to ambiguity and vagueness, which are likely to bias them. This case study provides an in-depth analysis of key stakeholders' value propositions when selecting features for a large telecommunications company's software-intensive product. Stakeholders' value propositions were elicited via interviews, which were analyzed using Grounded Theory coding techniques (open and selective coding). Thirty-six value propositions were identified and classified into six dimensions: customer value, market competitiveness, economic value/profitability, cost efficiency, technology & architecture, and company strategy. Our results show that although propositions in the customer value dimension were those mentioned the most, the concept of value for feature selection encompasses a wide range of value propositions. Moreover, stakeholder groups focused on different and complementary value dimensions, underlining the importance of involving all key stakeholders in the decision-making process. Although our results are particularly relevant to companies similar to the one described herein, they aim to generate a learning process on value-based feature selection for practitioners and researchers in general. © IEEE

  • 48.
    Rodriguez, Pilar
    et al.
    Univ Oulu, FIN.
    Urquhart, Cathy
    Manchester Metropolitan Univ, GBR.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A Theory of Value for Value-Based Feature Selection in Software Engineering2022In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 48, no 2, p. 466-484Article in journal (Refereed)
    Abstract [en]

    Value-Based Software Engineering stresses the role of value in software related decisions. In the context of feature selection, software features judged to provide higher value take priority in the development process. This paper focuses on what value means when selecting software features. Using grounded theory, we conducted and analyzed semi-structured interviews with 21 key stakeholders (decision-makers) from three software/software-intensive companies, within a context where value-based decision-making was already established. Our analysis led to the building of a theory of value for value-based feature selection that identifies the nature of value propositions considered by key stakeholders when selecting software features (i.e., decision-making criteria for deciding upon software features, as suggested by Boehm (2003)). We found that some value propositions were common to all three company cases (core value propositions), whereas others were dependent upon the context in which a company operates, and the characteristics of the product under development (specific value propositions). Moreover, value propositions vary according to the stakeholder group and the type of feature being assessed. Our study provides significant insight into value in the context of feature selection, and generates new concepts around value-based feature selection such as new value propositions.

  • 49. Salleh, Norsaremah
    et al.
    Mendes, Emilia
    Blekinge Institute of Technology, School of Computing.
    Grundy, John
    Investigating the effects of personality traits on pair programming in a higher education setting through a family of experiments2014In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 19, no 3, p. 714-752Article in journal (Refereed)
    Abstract [en]

    Evidence from a systematic literature review that we conducted previously revealed numerous inconsistencies in findings from the Pair Programming (PP) literature regarding the effects of personality on PP's effectiveness. It also showed that, despite numerous investigations, the effect of pairs' differing personality traits on the successful implementation of PP within a higher education setting is still unclear. In addition, our results showed that the personality instrument used the most had been the Myers-Briggs Type Indicator (MBTI), despite being criticized by personality psychologists as unreliable for measuring an individual's personality traits. These issues motivated our research, in which we conducted a series of five formal experiments at the University of Auckland (between 2009 and 2010) using 594 undergraduate students as subjects to investigate the effects of personality composition on PP's effectiveness. Our studies employed the Five-Factor personality framework, comprising five broad traits (Openness to experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism). Our experiments investigated three of the five traits: Conscientiousness, Neuroticism, and Openness. Our findings showed that Conscientiousness and Neuroticism did not have a statistically significant effect upon paired students' academic performance. However, Openness played a significant role in differentiating paired students' academic performance. Participants' survey results also indicated that PP not only increased satisfaction and confidence levels but also brought enjoyment to the tutorial classes and enhanced students' motivation.

  • 50.
    Salleh, Norsaremah
    et al.
    International Islamic University, Malaysia.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Mendes, Fabiana
    University of Brasilia, Brazil.
    Dissanayake Lekamlage, Charitha
    Blekinge Institute of Technology. student.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Value-based Software Engineering: A Systematic Mapping Study2023In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 17, no 1, article id 230106Article in journal (Refereed)
    Abstract [en]

    Background: Integrating value-oriented perspectives into the principles and practices of software engineering is fundamental to ensure that software development activities address key stakeholders' views and also balance short- and long-term goals. This is put forward in the discipline of value-based software engineering (VBSE). Aim: This study aims to provide an overview of the research efforts that have been put into VBSE. Method: We conducted a systematic mapping study to classify evidence on value definitions, studies' quality, VBSE principles and practices, research topics, methods, types, contribution facets, and publication venues. Results: From 143 studies, we found that the term "value" has not been clearly defined in many studies. VB Requirements Engineering and VB Planning and Control were the two principles investigated the most, whereas VB Risk Management and VB People Management were the least researched. Most studies showed very good reporting and relevance quality and acceptable credibility, but poor rigour. The main research topic was Software Requirements, and case study was the most used research method. The majority of studies contribute towards methods and processes, while very few studies have proposed metrics and tools. Conclusion: We highlight the research gaps and implications for research and practice to support VBSE.
