Molléri, Jefferson Seide
Publications (10 of 11)
Molléri, J. S., Petersen, K. & Mendes, E. (2019). CERSE - Catalog for empirical research in software engineering: A Systematic mapping study. Information and Software Technology, 105, 117-149
CERSE - Catalog for empirical research in software engineering: A systematic mapping study
2019 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 105, p. 117-149. Article in journal (Refereed). Published
Abstract [en]

Context: Empirical research in software engineering contributes towards developing scientific knowledge in this field, which in turn is relevant to inform decision-making in industry. A number of empirical studies have been carried out to date in software engineering, and the need for guidelines for conducting and evaluating such research has been stressed. Objective: The main goal of this mapping study is to identify and summarize the body of knowledge on research guidelines, assessment instruments and knowledge organization systems on how to conduct and evaluate empirical research in software engineering. Method: A systematic mapping study employing manual search and snowballing techniques was carried out to identify the suitable papers. To build up the catalog, we extracted and categorized information provided by the identified papers. Results: The mapping study comprises a list of 341 methodological papers, classified according to research methods, research phases covered, and type of instrument provided. From this classification, we derived a brief explanatory review of the instruments provided for each of the research methods. Conclusion: We provide an aggregated body of knowledge on the state of the art relating to guidelines, assessment instruments and knowledge organization systems for carrying out empirical software engineering research, as well as an exemplary usage scenario that can guide those carrying out such studies. Finally, we discuss the catalog's implications for research practice and the need for further research. © 2018 Elsevier B.V.

Place, publisher, year, edition, pages
Elsevier B.V., 2019
Keywords
Empirical methods, Empirical research, Mapping study, Decision making, Mapping, Software engineering, Assessment instruments, Empirical method, Empirical research in software engineering, Empirical Software Engineering, Knowledge organization systems, Mapping studies, Systematic mapping studies, Knowledge management
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-17108 (URN), 10.1016/j.infsof.2018.08.008 (DOI), 000452586900008 (), 2-s2.0-85054061108 (Scopus ID)
Available from: 2018-10-11. Created: 2018-10-11. Last updated: 2019-04-24. Bibliographically approved
Molléri, J. S., Nurdiani, I., Fotrousi, F. & Petersen, K. (2019). Experiences of studying attention through EEG in the context of review tasks. In: ACM International Conference Proceeding Series. Paper presented at the 23rd Evaluation and Assessment in Software Engineering Conference (EASE), Copenhagen, 14-17 April 2019 (pp. 313-318). Association for Computing Machinery
Experiences of studying attention through EEG in the context of review tasks
2019 (English). In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2019, p. 313-318. Conference paper, Published paper (Refereed)
Abstract [en]

Context: Electroencephalograms (EEG) have been used in a few cases in the context of software engineering (SE). EEGs allow capturing emotions and cognitive functioning. Such human factors have already been shown to be important for understanding software engineering tasks. Therefore, it is essential for the community to gain experience in utilizing EEG as a research tool. Objective: To report experiences of using EEG in the context of software engineering education (the review of master thesis proposals). We provide our reflections and lessons learned on (1) how to plan an EEG study, (2) how to conduct and execute it (e.g., tools), and (3) how to analyze the data. Method: We carried out an experiment using an EEG headset to measure the participants' attention rate. The experiment task included reviewing three master thesis project plans. Results: We describe how we evolved our understanding of experimentation practices to collect and analyze psychological and cognitive data. We also provide a set of lessons learned regarding the application of EEG technology for research. Conclusions: We believe that EEG could benefit software engineering research to collect cognitive information under certain conditions. The lessons learned reported here should be used as inputs for future experiments in software engineering where human aspects are of interest. © 2019 Association for Computing Machinery.
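The abstract does not specify how the attention rate was computed from the headset data. A common proxy in consumer EEG work, shown here purely as a hypothetical sketch, is the ratio of beta-band power to alpha-plus-theta-band power; the band limits, sampling rate, and synthetic signal below are all assumptions, not details taken from the study.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean FFT power of `signal` within the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def attention_index(signal, fs):
    """Beta / (alpha + theta) power ratio, a common engagement proxy."""
    theta = band_power(signal, fs, 4, 8)
    alpha = band_power(signal, fs, 8, 13)
    beta = band_power(signal, fs, 13, 30)
    return beta / (alpha + theta)

# Synthetic one-second EEG-like trace: a strong 10 Hz (alpha) component
# plus noise, sampled at 256 Hz.
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(fs)
print(round(attention_index(signal, fs), 3))
```

Because the synthetic signal is dominated by alpha activity, the index comes out well below 1, i.e., "low attention" under this proxy.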

Place, publisher, year, edition, pages
Association for Computing Machinery, 2019
Keywords
Attention, Electroencephalogram, Experiment, Human subjects, Bioelectric phenomena, Electroencephalography, Engineering research, Experiments, Cognitive information, Electro-encephalogram (EEG), Human aspects, Project plans, Research tools, Software engineering
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-17890 (URN), 10.1145/3319008.3319357 (DOI), 2-s2.0-85064765914 (Scopus ID), 9781450371452 (ISBN)
Conference
23rd Evaluation and Assessment in Software Engineering Conference (EASE), Copenhagen, 14-17 April 2019
Available from: 2019-05-21. Created: 2019-05-21. Last updated: 2019-05-21. Bibliographically approved
Molléri, J. S. (2019). Views of Research Quality in Empirical Software Engineering. (Doctoral dissertation). Karlskrona: Blekinge Tekniska Högskola
Views of Research Quality in Empirical Software Engineering
2019 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Background. Software Engineering (SE) research, like other applied disciplines, intends to provide trustworthy evidence to practice. To ensure trustworthy evidence, a rigorous research process based on sound research methodologies is required. Further, to be practically relevant, researchers rely on identifying original research problems that are of interest to industry; the research must also fulfill various quality standards that form the basis for the evaluation of empirical research in SE. A dialogue and shared view of quality standards for research practice has yet to be achieved within the research community.

Objectives. The main objective of this thesis is to foster dialogue and capture different views of SE researchers at the method level (e.g., through the identification of, and reasoning on, the importance of quality characteristics for experiments, surveys and case studies) as well as on general quality standards for Empirical Software Engineering (ESE). Given the views of research quality, a second objective is to understand how to operationalize them, i.e., how to build and validate instruments to assess research quality.

Method. The thesis uses a mixed-methods approach of both a qualitative and quantitative nature. The research methods used were case studies, surveys, and focus groups. A range of data collection methods was employed, such as literature review, questionnaires, and semi-structured workshops. To analyze the data, we used content and thematic analysis as well as descriptive and inferential statistics.

Results. We draw two distinct views of research quality. Through a top-down approach, we assessed and evolved a conceptual model of research quality within the ESE research community. Through a bottom-up approach, we built a checklist instrument for assessing survey-based research, grounded in the supporting literature, and evaluated our and others' checklists in research practice and research education contexts.

Conclusion. The quality standards we identified and operationalized support and extend the current understanding of research quality for SE research. This is a preliminary, but still vital, step towards a shared view of research quality for ESE research. Further steps are needed to gain a shared understanding of research quality within the community.

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2019
Series
Blekinge Institute of Technology Doctoral Dissertation Series, ISSN 1653-2090; 7
Keywords
Research Quality, Quality Standards, Empirical Software Engineering, Research Methodology
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-17648 (URN)978-91-7295-372-7 (ISBN)
Public defence
2019-06-14, J1650, Campus Gräsvik, Karlskrona, 13:00 (English)
Available from: 2019-03-05. Created: 2019-02-27. Last updated: 2019-05-09. Bibliographically approved
Molléri, J. S., Gonzalez-Huerta, J. & Henningsson, K. (2018). A legacy game for project management in software engineering courses. In: ACM International Conference Proceeding Series: . Paper presented at 3rd European Conference of Software Engineering Education, ECSEE, Seeon Monastery, Seeon, Germany (pp. 72-76). Association for Computing Machinery
A legacy game for project management in software engineering courses
2018 (English). In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2018, p. 72-76. Conference paper, Published paper (Refereed)
Abstract [en]

Background: Software project management courses are becoming popular for teaching software engineering process models and methods. However, to be effective, this approach should be properly aligned with the learning outcomes. Common misalignments are caused by an incorrect degree of realism or an inappropriate instruction level. Objective: To foster students' acquisition of theoretical and practical knowledge that enables them to solve challenges similar to the ones they will face in real-world software projects. Methods: We prototyped and validated a legacy game that simulates the software development process. Students are required to plan and manage a software project according to a specification provided by the teachers. Teachers act as both customers and moderators, presenting the challenges and guiding the students' teamwork. Results: Both the students' and the teachers' perceptions suggest that the proposed game has the potential to motivate knowledge acquisition through problem-solving. The feedback also suggests that some measures must be taken to ensure pedagogical alignment and a fair game. Conclusion: The lessons learned provide suggestions for adopting this or similar games in the context of project courses. As further work, we plan to describe and extend the game rules based on the results of this application. © 2018 Association for Computing Machinery.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2018
Keywords
Education, Gaming, Project Management Course, Software Development Process, Curricula, Education computing, Engineering education, Problem solving, Project management, Software design, Software prototyping, Students, Instruction-level, Learning outcome, Software engineering course, Software project management, Teachers' perceptions, Teaching software, Teaching
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-16891 (URN), 10.1145/3209087.3209094 (DOI), 000478670000011 (), 2-s2.0-85049870717 (Scopus ID), 9781450363839 (ISBN)
Conference
3rd European Conference of Software Engineering Education, ECSEE, Seeon Monastery, Seeon, Germany
Available from: 2018-08-20. Created: 2018-08-20. Last updated: 2019-09-10. Bibliographically approved
Molléri, J. S., Ali, N. b., Petersen, K., Minhas, T. N. & Chatzipetrou, P. (2018). Teaching students critical appraisal of scientific literature using checklists. In: Proceedings of the 3rd European Conference of Software Engineering Education (ECSEE). Paper presented at the 3rd European Conference of Software Engineering Education, ECSEE, Seeon Monastery, Germany (pp. 8-17). Association for Computing Machinery
Teaching students critical appraisal of scientific literature using checklists
2018 (English). In: Proceedings of the 3rd European Conference of Software Engineering Education (ECSEE), Association for Computing Machinery, 2018, p. 8-17. Conference paper, Published paper (Refereed)
Abstract [en]

Background: Teaching students to critically appraise scientific literature is an important goal for a postgraduate research methods course. Objective: To investigate whether the application of checklists for assessing the scientific rigor of empirical studies supports students in reviewing case study research and experiments. Methods: We employed an experimental design where 76 students (in pairs) used two checklists to evaluate two papers (reporting a case study and an experiment) each. We compared the students' assessments against ratings from more senior researchers. We also collected data on the students' perception of using the checklists. Results: The consistency of the students' ratings, and their accuracy when compared to the ratings from seniors, varied. One factor seemed to be that the clearer the reporting, the easier it is for students to judge the quality of studies. Students perceived checklist items related to data analysis as difficult to assess. Conclusion: As expected, this study reinforces the need for clear reporting, as it is important that authors write to enable synthesis and quality assessment. With clearer reporting, the novices performed well in assessing the quality of the empirical work, which supports the checklists' continued use in the course as a means for introducing scientific reviews. © 2018 Association for Computing Machinery.
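The abstract does not say which agreement measure was used to compare novice and senior ratings; a minimal baseline for such a comparison is per-item percent agreement. The sketch below illustrates the computation on entirely hypothetical checklist scores.

```python
def percent_agreement(student_ratings, senior_ratings):
    """Fraction of checklist items on which two raters gave the same score."""
    assert len(student_ratings) == len(senior_ratings)
    matches = sum(s == r for s, r in zip(student_ratings, senior_ratings))
    return matches / len(student_ratings)

# Hypothetical scores on a 3-point scale (0 = not met, 1 = partially met, 2 = met)
students = [2, 1, 0, 2, 1, 1, 0, 2]
seniors  = [2, 2, 0, 2, 1, 0, 0, 2]
print(percent_agreement(students, seniors))  # → 0.75
```

Percent agreement ignores chance agreement; a study would typically complement it with a chance-corrected statistic such as Cohen's kappa.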

Place, publisher, year, edition, pages
Association for Computing Machinery, 2018
Keywords
Case study, Checklist, Critical appraisal, Experiment, Student, Design of experiments, Engineering education, Experiments, Software engineering, Teaching, Case study research, Continued use, Empirical studies, Post-graduate research, Quality assessment, Scientific literature, Students
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-16892 (URN), 10.1145/3209087.3209099 (DOI), 000478670000002 (), 2-s2.0-85049867400 (Scopus ID), 9781450363839 (ISBN)
Conference
3rd European Conference of Software Engineering Education, ECSEE, Seeon Monastery, Germany
Available from: 2018-08-20. Created: 2018-08-20. Last updated: 2019-09-10. Bibliographically approved
Molléri, J. S., Petersen, K. & Mendes, E. (2018). Towards understanding the relation between citations and research quality in software engineering studies. Scientometrics
Towards understanding the relation between citations and research quality in software engineering studies
2018 (English). In: Scientometrics, ISSN 0138-9130, E-ISSN 1588-2861. Article in journal (Refereed). Epub ahead of print
Abstract [en]

The importance of achieving high quality in research practice has been highlighted in different disciplines. At the same time, citations are utilized to measure the impact of academic researchers and institutions. One open question is whether the quality in the reporting of research is related to scientific impact, which would be desirable. In this exploratory study we aim to: (1) investigate how consistently a scoring rubric for rigor and relevance has been used to assess the research quality of software engineering studies; (2) explore the relationship between rigor, relevance and citation count. Through backward snowball sampling we identified 718 primary studies assessed through the scoring rubric. We utilized cluster analysis and conditional inference trees to explore the relationship between quality in the reporting of research (represented by rigor and relevance) and scientometrics (represented by normalized citations). The results show that only rigor is related to the studies' normalized citations. Moreover, confounding factors are likely to influence the number of citations. The results also suggest that the scoring rubric is not applied the same way in all studies, likely because it was found to be too abstract and in need of further refinement. Our findings could be used as a basis to further understand the relation between the quality in the reporting of research and scientific impact, and to foster new discussions on how to fairly acknowledge studies for performing well with respect to research quality. Furthermore, we highlight the need to further improve the scoring rubric. © 2018, The Author(s).
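The abstract refers to "normalized citations" without defining the normalization. One standard scheme, shown here as an assumption about the general idea rather than the paper's exact method, divides each paper's citation count by the mean count of papers published in the same year, so that older papers do not dominate simply because they have had more time to accumulate citations.

```python
def normalize_citations(papers):
    """Divide each paper's citation count by the mean citation count
    of all papers from its publication year."""
    by_year = {}
    for p in papers:
        by_year.setdefault(p["year"], []).append(p["citations"])
    year_mean = {y: sum(c) / len(c) for y, c in by_year.items()}
    return [p["citations"] / year_mean[p["year"]] for p in papers]

# Hypothetical citation counts for four papers from two publication years
papers = [
    {"year": 2010, "citations": 40},
    {"year": 2010, "citations": 10},
    {"year": 2015, "citations": 12},
    {"year": 2015, "citations": 4},
]
print(normalize_citations(papers))  # → [1.6, 0.4, 1.5, 0.5]
```

A value above 1 means the paper is cited more than the average paper of its cohort year, which makes counts from different years comparable.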

Place, publisher, year, edition, pages
Springer Netherlands, 2018
Keywords
Conditional inference tree, Empirical software engineering, Exploratory study, Reporting of research, Research practice, Scientific impact
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-17086 (URN), 10.1007/s11192-018-2907-3 (DOI), 2-s2.0-85053837175 (Scopus ID)
Available from: 2018-10-05. Created: 2018-10-05. Last updated: 2019-04-24. Bibliographically approved
Marculescu, B., Jabbari, R. & Molléri, J. S. (2016). Perception of Scientific Evidence: Do Industry and Academia Share an Understanding?. Paper presented at Lärarlärdom 2016, Kristianstad.
Perception of Scientific Evidence: Do Industry and Academia Share an Understanding?
2016 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Context: Collaboration depends on communication, on having a similar understanding of the notions being discussed, and on a similar appraisal of their value. Existing work seems to show that collaboration between industry and academia is hampered by a difference in values. In particular, academic work focuses more on generalizing on the basis of existing evidence, while industry prefers to particularize conclusions to individual cases. This has led to the conclusion that industry values scientific evidence less than academia does.

Objective: This paper seeks to re-evaluate that conclusion and to investigate whether industry and academia share a definition of scientific evidence. If evidence of competing views can be found, we propose a more finely grained model of empirical evidence and its role in building software engineering knowledge. Moreover, we seek to determine if a more nuanced look at the notion of scientific evidence has an influence on how academics and industry practitioners perceive that notion.

Method: We have developed a model of key concepts related to understanding empirical evidence in software engineering. An initial validation has been conducted, consisting of a survey of master's students, to determine if competing views of evidence exist at that level. The model will be further validated through literature study and semi-structured interviews with industry practitioners.

Results: We propose a model of empirical evidence in software engineering, and an initial validation of that model by means of a survey. The results of the survey indicate that conflicting opinions already exist in the student body regarding the notion of evidence, how trustworthy different sources of evidence and knowledge are, and which sources and types of evidence are more appropriate in various situations.

Conclusion: Rather than a difference in how industry and academia value scientific evidence, we see evidence of misunderstanding: different notions of what constitutes scientific evidence and of what strength of evidence is required to achieve specific goals. We propose a model of empirical evidence to provide a better understanding of what is required in various situations and a better platform for communication between industry and academia.

National Category
Software Engineering
Identifiers
urn:nbn:se:bth-17647 (URN)
Conference
Lärarlärdom 2016, Kristianstad
Available from: 2019-02-27. Created: 2019-02-27. Last updated: 2019-03-08. Bibliographically approved
Molléri, J. S. & Benitti, F. B. (2015). SESRA: a web-based automated tool to support the systematic literature review process. In: Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering. Paper presented at the 19th International Conference on Evaluation and Assessment in Software Engineering, Nanjing. ACM Digital Library
SESRA: a web-based automated tool to support the systematic literature review process
2015 (English). In: Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering, ACM Digital Library, 2015. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
ACM Digital Library, 2015
Keywords
systematic literature review, SLR, automated tool
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-11190 (URN), 10.1145/2745802.2745825 (DOI), 978-1-4503-3350-4 (ISBN)
Conference
19th International Conference on Evaluation and Assessment in Software Engineering, Nanjing
Available from: 2015-12-11. Created: 2015-12-11. Last updated: 2018-01-10. Bibliographically approved
Molléri, J. S., Mendes, E., Petersen, K. & Felderer, M. Aligning the Views of Research Quality in Empirical Software Engineering. ACM Transactions on Software Engineering and Methodology
Aligning the Views of Research Quality in Empirical Software Engineering
(English). In: ACM Transactions on Software Engineering and Methodology, ISSN 1049-331X, E-ISSN 1557-7392. Article in journal (Refereed). Submitted
Abstract [en]

Context: Research quality is intended to assess the design and reporting of studies. It comprises a series of concepts such as methodological rigor, practical relevance, and conformance to ethical standards. Depending on the perspective, different degrees of importance are assigned to the conceptual dimensions of research quality.

Objective: We intend to assess the level of alignment between researchers with regard to a conceptual model of research quality. This includes aligning the definition of research quality and reasoning about the relative importance of its quality characteristics.

Method: We used a mixed-methods approach comprising an internal case study and a complementary focus group. We carried out a hierarchical voting prioritization based on the conceptual model to collect relative importance values. In the focus group, we also moderated discussions with experts to address potential misalignment.
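The abstract does not detail the voting scheme, but in hierarchical cumulative voting, a technique commonly used for this kind of prioritization, respondents distribute points at each level of the model and a leaf's final weight is the product of the normalized votes along its path. The sketch below illustrates that combination step; the dimension names and point values are invented for illustration.

```python
def final_weights(top_level, sub_level):
    """Combine hierarchical votes: each leaf's weight is its normalized
    sub-level vote times its parent's normalized top-level vote."""
    top_total = sum(top_level.values())
    weights = {}
    for parent, subs in sub_level.items():
        sub_total = sum(subs.values())
        for leaf, points in subs.items():
            weights[leaf] = (top_level[parent] / top_total) * (points / sub_total)
    return weights

# Hypothetical quality dimensions and sub-characteristics with vote points
top = {"rigor": 60, "relevance": 40}
subs = {
    "rigor": {"validity": 70, "reliability": 30},
    "relevance": {"applicability": 100},
}
print(final_weights(top, subs))
```

The resulting leaf weights sum to 1, so they can be read directly as relative importance values across the whole model.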

Results: The alignment at the research group level was higher than at the community level. Moreover, the interdisciplinary conceptual quality model was seen to fairly express the quality of research, but presented limitations regarding its structure and the description of its components, which resulted in an updated model.

Conclusion: The interdisciplinary model used was suitable for the software engineering context. The process used for reflecting on the alignment of quality with respect to definitions and priorities worked well.

Keywords
Research Quality, Alignment, Mixed Method, Case Study, Focus Group
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-17745 (URN)
Available from: 2019-03-28. Created: 2019-03-28. Last updated: 2019-04-24. Bibliographically approved
Molléri, J. S., Petersen, K. & Mendes, E. An Empirically Evaluated Checklist for Surveys in Software Engineering. Information and Software Technology
An Empirically Evaluated Checklist for Surveys in Software Engineering
(English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025. Article in journal (Refereed). Submitted
Abstract [en]

Context: Over the past decade, software engineering research has seen a steady increase in survey-based studies, and there are several guidelines providing support for those willing to carry out surveys. The need for auditing survey research has been raised in the literature. Checklists have been used to assess different types of empirical studies, such as experiments and case studies.

Objective: This paper proposes a checklist to support the design and assessment of survey-based research in software engineering grounded in existing guidelines for survey research. We further evaluated the checklist in the research practice context.

Method: To construct the checklist, we systematically aggregated knowledge from 12 methodological studies supporting survey-based research in software engineering. We identified the key stages of the survey process and its recommended practices through thematic analysis and vote counting. To improve our initially designed checklist, we evaluated it using a mixed evaluation approach involving experienced researchers.
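The vote-counting step described above can be pictured as tallying, across the source guidelines, how many of them recommend each practice and keeping the practices that clear some threshold. The sketch below is a hypothetical illustration; the practice names, sources, and threshold are invented, not taken from the paper.

```python
from collections import Counter

def vote_count(recommendations, threshold):
    """Count how many sources recommend each practice and keep those
    mentioned by at least `threshold` sources."""
    votes = Counter(p for source in recommendations for p in set(source))
    return {p: n for p, n in votes.items() if n >= threshold}

# Hypothetical practices extracted from three guideline papers
sources = [
    ["define population", "pilot questionnaire", "report response rate"],
    ["define population", "report response rate"],
    ["pilot questionnaire", "define population"],
]
print(sorted(vote_count(sources, threshold=2).items()))
# → [('define population', 3), ('pilot questionnaire', 2), ('report response rate', 2)]
```

Wrapping each source in `set()` guards against a practice being counted twice if one paper mentions it more than once.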

Results: The evaluation provided insights regarding the limitations of the checklist in relation to its understandability and objectivity. In particular, 19 of the 38 checklist items were improved based on the feedback received from the evaluation. Finally, a discussion on how to use the checklist, and its implications for research practice, is also provided.

Conclusion: The proposed checklist is an instrument suitable for auditing survey reports as well as a support tool to guide ongoing research with regard to the survey design process.

Keywords
Checklist, Assessment, Survey, Methodology
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-17645 (URN)
Available from: 2019-02-27. Created: 2019-02-27. Last updated: 2019-04-24. Bibliographically approved