  • 451.
    Rahman, Md. Shoaib
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Das, Arijit
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    MITIGATION APPROACHES FOR COMMON ISSUES AND CHALLENGES WHEN USING SCRUM IN GLOBAL SOFTWARE DEVELOPMENT (2015). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Distributed software development teams frequently face issues related to communication, coordination and control. These issues arise because teams are separated by socio-cultural, geographical and temporal distance. The purpose of the study is therefore to find out how distributed Scrum teams act when they face such problems. Objectives. A number of common GSD challenges or issues already exist, such as the difficulty of face-to-face meetings, increased coordination costs and the difficulty of conveying vision and strategy. The purpose of this study was to identify additional frequently occurring Global Software Development (GSD) issues or challenges, as well as the mitigation strategies practised by Scrum practitioners in distributed software environments in industry. Methods. In this study, a systematic literature review and interviews with distributed Scrum practitioners were conducted for empirical validation. One purpose of the interviews was to obtain challenges and mitigations from the distributed Scrum practitioners' point of view, as well as to verify the outcomes of the literature review. We extended the literature review of Hossain, Babar et al. [1] and followed similar procedures. Research papers were selected from the following sources: IEEE Xplore, ACM Digital Library, Google Scholar, Compendex EI, Wiley InterScience, Elsevier Science Direct, AIS eLibrary and SpringerLink. In addition, interviews were conducted with persons who have at least six months of working experience in a distributed Scrum team, and the interviews were analyzed using thematic analysis. Results. Three additional common GSD challenges and four new mitigation strategies were found. Among the additional issues, one is a communication issue (lack of trust/teamness or interpersonal relationship) and the rest are coordination issues (lack of domain knowledge/lack of visibility, and skill differences and technical issues). The additional mitigation strategies are synchronizing work, preparation meetings, training and work status monitoring. Finally, the frequently faced GSD issues were mapped to mitigation strategies based on the results obtained from the SLR and the interviews. Conclusions. We identified three additional GSD issues (lack of trust/teamness/interpersonal relationship, lack of visibility/lack of knowledge, and differences in skills and technical issues) in addition to the existing twelve common communication, coordination and control issues. Mitigation techniques for the common GSD issues (such as synchronized working hours, ICT-mediated synchronous communication and visits) were identified and validated by Scrum practitioners. For several of the existing issues, new mitigation strategies obtained from the practitioners apply. Moreover, for two existing control issues (management of project artifacts may be subject to delays; managers must adapt to local regulations), mitigation techniques were suggested by the interviewees. This study was carried out to collect the common GSD issues and mitigations from the literature and from distributed Scrum practitioners.

  • 452.
    Rapp, Carl
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gamification as a tool to encourage eco-driving (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context: In this work a system, the eco service, is developed that incorporates elements from gamification to help drivers adapt to a more energy-efficient driving style. An energy-efficient driving style can help reduce fuel consumption, increase traffic safety and help reduce the emissions made from vehicles.

    Objectives: The main goal of this work is to explore ways of how gamification can be used in the context of eco-driving. Evaluating different elements and how they work in this context is important to help drivers to continue improving their driving style.

    Method: The eco service was tested on 16 participants, where each participant was asked to drive a predetermined route. During the experiment the participants were given access to the eco service in order to gain feedback on their driving. Lastly, interviews were held with each participant on questions regarding the use of gamification and how it can be improved in the context of eco-driving. The research was done in collaboration with a Swedish company, Swedspot AB, that works with software solutions for connected vehicles.

    Results & Conclusions: Positive results were found on the use of gamification. Participants reported that the eco service made them more aware of their driving situation and how to improve. Game elements with positive influence were reward and competitive based and helped motivate the driver to improve.

  • 453.
    Razzak, Mohammad Abdur
    et al.
    Daffodil Int Univ, BGD.
    Šmite, Darja
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Knowledge Management in Globally Distributed Agile Projects - Lesson Learned (2015). In: 2015 IEEE 10th International Conference on Global Software Engineering (ICGSE 2015), IEEE, 2015, pp. 81-89. Conference paper (Refereed).
    Abstract [en]

    Knowledge management (KM) is essential for success in any software project, but especially in global software development, where team members are separated by time and space. Software organizations are managing knowledge in various ways to increase transparency and improve software team performance. One way to classify these strategies is proposed by Earl, who defined seven knowledge management schools. The objective of this research is to study knowledge creation and sharing practices in a number of distributed agile projects, map these practices to the knowledge management strategies, and determine which strategies are most common, which are applied only locally and which are applied globally. This was done by conducting a series of semi-structured qualitative interviews between May 2012 and June 2013. Our results suggest that knowledge sharing across remote locations in distributed agile projects relies heavily on knowledge codification, i.e. technocratic KM strategies, even when the same knowledge is shared tacitly within the same location, i.e. through behavioral KM strategies.

  • 454. Razzak, Mohammad Abdur
    et al.
    Šmite, Darja
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Ahmed, Rajib
    Spatial knowledge creation and sharing activities in a distributed agile project (2013). Conference paper (Refereed).
    Abstract [en]

    Knowledge management (KM) is key to the success of any software organization. KM in software development has been the center of attention for researchers due to its potential to improve productivity. However, knowledge is not only stored in repositories but is also shared in the office space. Agile software development teams use the benefits of shared space to foster knowledge creation. But it is difficult to create and share this type of knowledge when team members are distributed. This participatory single-case study indicates that distributed team members rely heavily on knowledge codification and the application of tools for knowledge sharing. We found that the studied project did not use any specific software or hardware that would enable spatial knowledge creation and sharing. Therefore, the knowledge items that were not codified remained unavailable to remote team members.

  • 455.
    Reddy, Sri Sai Vijay Raj
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Nekkanti, Harini
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Surveys in Software Engineering: A Systematic Literature Review and Interview Study (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context: The need for empirical investigations in the software engineering domain is growing immensely. Many researchers nowadays conduct and validate their studies using empirical evidence. The survey is one such empirical investigation method, which enables researchers to collect data from a large population. The main aim of a survey is to generalize the findings. Researchers face many problems in the survey process, and survey outcomes also depend on variables such as sample size, response rate and analysis techniques. Hence there is a need for literature addressing the possible problems faced and the impact of survey variables on outcomes.

    Objectives: Firstly, to identify the common problems faced by researchers from the existing literature and to analyze the impact of the survey variables. Secondly, to collect the experiences of software engineering researchers regarding the problems faced and the survey variables. Finally, to come up with a checklist of all the problems and mitigation strategies, along with information about the impact of survey variables.

    Methods: Initially a systematic literature review was conducted to identify the existing problems in the literature and to determine the effect of response rate, sample size and analysis techniques on survey outcomes. The systematic literature review results were then validated by conducting semi-structured, face-to-face interviews with software engineering researchers.

    Results: We were successful in providing a checklist of problems along with their mitigation strategies. The dependency of the survey variables on the type of research and the researchers' choices limited us from further analyzing their impact on survey outcomes. The face-to-face interviews with software engineering researchers provided validation of our research results.

    Conclusions: This research gave us deeper insights into the survey methodology. It helped us to explore the differences that exist between the state of the art and the state of practice regarding problem mitigation in the survey process.

  • 456.
    Rehman, Zia ur
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Overcoming Challenges of Requirements Elicitation in Offshore Software Development Projects (2014). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Global Software Development (GSD) is the plan of action in which software development is performed across temporal, political, organizational and cultural boundaries. Offshore outsourced software development is the part of GSD that refers to the transfer of certain software development activities to an external organization in another country. The primary factors driving offshore outsourced software development are low cost, access to a large pool of skilled labor, increased productivity, high quality, market access and short development cycles. Requirements engineering (RE), and especially requirements elicitation, is highly affected by geographical distribution and the multitude of stakeholders. Objectives. The goal of this study is to explore the challenges and solutions associated with the requirements elicitation phase in offshore software projects, both in the research literature and in industrial practice. Moreover, this study examines which of the challenges and practices reported in the literature can be seen in industrial practice. This helped in finding out the similarities and differences between the state of the art and the state of practice. Methods. Data collection was done through a systematic literature review (SLR) and a web survey. The SLR was conducted using the guidelines of Kitchenham and Charters. During the SLR, studies were identified from reliable and authoritative databases such as Compendex, Inspec (Engineering Village) and Scopus. In the second phase, a survey was conducted with 391 practitioners from various organizations involved in GSD projects. In the third phase, qualitative comparative analysis was applied as the analysis method. Results. In total, 10 challenges and 45 solutions were identified from the SLR and the survey. Through the SLR, 8 challenges and 22 solutions were identified, while through the industrial survey, 2 additional challenges and 23 additional solutions were identified. By analyzing the frequency of challenges, the most compelling challenges are communication, control and socio-cultural issues. Conclusions. The comparison between theory and practice revealed the most compelling challenges and their associated solutions. It is concluded that socio-cultural awareness and proper communication between client and supplier organizations' personnel are paramount for successful requirements elicitation. The scarcity of research literature in this area suggests that more work needs to be done to explore strategies for mitigating the impact of the 2 additional challenges revealed through the survey.

  • 457.
    Ren, Mingyu
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Dong, Zhipeng
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    What do we know about Testing practices in Software Startups? (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. With the rapid development of the software industry, innovative software products have become the mainstream of the software market. Because software startups can use few resources to quickly produce and publish innovative software products, more and more software startups are being launched. Software testing is important for ensuring product quality in software companies. Software testing is costly in software development, but avoiding it can be even costlier. Many regular software companies spend up to 40-50% of their development effort on software testing [1] [2]. Compared with regular software companies, time and money in software startups are more limited and need to be allocated carefully; unreasonable allocation of time and money can lead to the failure of a startup. We do not know how much software startups spend on testing, and few research studies have investigated testing practices in software startups. Therefore, we decided to conduct an exploratory study of testing practices in software startups.

    Objectives. The aim of this research is to investigate testing practices in software startups. We investigate how software startups structure and manage their test teams, which test processes and test techniques they use, and which main testing challenges they face.

    Methods. We mainly conducted qualitative research. We selected a literature review and a survey as the research methods: the literature review was used to gain an in-depth understanding of software testing practices in software companies, and the survey was used to answer our research questions. Interviews were used as the data collection method, and descriptive statistics were used to analyze the interview data.

    Results. A total of 13 responses were obtained through interviews from 9 software startups. We obtained results on how the 9 investigated software startups structure and manage their test teams, analyzed the common steps of their test processes, and classified the techniques they use. Finally, we analyzed and listed the main testing challenges that occur in the 9 software startups. Conclusions. The research objectives were fulfilled and the research questions answered. Our conclusions are based on 9 software startups; these companies cannot represent all software startups, but the 13 interviews give an initial picture of test practices in software startups. We also found some differences in testing practice between the 9 software startups and regular software companies. Our study is a preliminary exploration of testing practices in 9 software startups, and we provide data and analysis results from these companies for researchers who want to study related areas. In addition, our research could help someone who plans to set up a software company: they can use the data we collected to reflect on the testing practice in their own company and find the best way to prevent and resolve testing problems.

  • 458.
    Rodriguez, Pilar
    et al.
    Oulun Yliopisto, FIN.
    Mendes, Emilia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Turhan, Buran
    Oulun Yliopisto, FIN.
    Key Stakeholders' Value Propositions for Feature Selection in Software-intensive Products: An Industrial Case Study (2018). In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520. Journal article (Refereed).
    Abstract [en]

    Numerous software companies are adopting value-based decision making. However, what does value mean for key stakeholders making decisions? How do different stakeholder groups understand value? Without an explicit understanding of what value means, decisions are subject to ambiguity and vagueness, which are likely to bias them. This case study provides an in-depth analysis of key stakeholders' value propositions when selecting features for a large telecommunications company's software-intensive product. Stakeholders' value propositions were elicited via interviews, which were analyzed using Grounded Theory coding techniques (open and selective coding). Thirty-six value propositions were identified and classified into six dimensions: customer value, market competitiveness, economic value/profitability, cost efficiency, technology & architecture, and company strategy. Our results show that although propositions in the customer value dimension were those mentioned the most, the concept of value for feature selection encompasses a wide range of value propositions. Moreover, stakeholder groups focused on different and complementary value dimensions, pointing to the importance of involving all key stakeholders in the decision making process. Although our results are particularly relevant to companies similar to the one described herein, they aim to generate a learning process on value-based feature selection for practitioners and researchers in general.

  • 459.
    Sablis, Aivars
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Šmite, Darja
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Moe, Nils Brede
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Exploring cross-site networking in large-scale distributed projects (2018). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Verlag, 2018, Vol. 11271, pp. 318-333. Conference paper (Refereed).
    Abstract [en]

    Context: Networking in a distributed large-scale project is complex for many reasons: time zone differences can make it challenging to reach remote contacts, teams rarely meet face-to-face, which means that remote project members are often unfamiliar with each other, and applying activities for growing the network across sites is also challenging. At the same time, networking is one of the primary ways to share and receive knowledge and information important for developing software tasks and coordinating project activities. Objective: The purpose of this paper is to explore the actual networks of teams working in large-scale distributed software development projects and the project characteristics that might impact their need for networking. Method: We conducted a multi-case study with three project cases in two companies, with software development teams as embedded units of analysis. We organized 20 individual interviews to characterize the development projects and surveyed 96 members from a total of 14 teams to draw the actual team networks. Results: Our results show that teams in large-scale projects network in order to acquire knowledge from experts and to coordinate tasks with other teams. We also learned that regardless of project characteristics, networking between sites in distributed projects is relatively low. Conclusions: Our study emphasizes the importance of networking. Therefore, we suggest that similar companies pay extra attention to cultivating a networking culture in order to strengthen their cross-site communication. © Springer Nature Switzerland AG 2018.

  • 460.
    Sadowska, Małgorzata
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Quality of business models expressed in BPMN (2013). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. The quality of business process models is important in the area of model-based software development. The overall goal of this study was to develop and evaluate a model for assessing the quality of models (Process Diagrams) in Business Process Model and Notation (BPMN). The model was an instantiation of a developed metamodel that adopts ISO/IEC 9126. Objectives. The objectives of the thesis were to propose, implement and evaluate a model for quality assessment of business process models in BPMN. The model was intended to help practitioners check the quality of their BPMN models and provide meaningful feedback on whether the business process models are of good or bad quality. The first objective was to develop a metamodel of models for quality assessment of business process models in BPMN, and later the model that is an instantiation of the metamodel. Within the model, the objectives were to propose the relevant quality characteristics, quality metrics, quality criteria and quality functions. Finally, the usefulness of the model for quality assessment of business process models in BPMN was to be evaluated. Methods. The methodology was driven by the essential elements of the model for quality assessment of business process models in BPMN, namely quality characteristics, quality metrics, quality criteria and quality functions. In the beginning, the metamodel of the model was developed based on the ISO/IEC 9126 standard. Later, in order to identify quality characteristics of models existing in the literature, a systematic literature review was conducted. Quality characteristics explicitly relevant to BPMN were compared against each other and selected, and overlapping quality characteristics relevant to BPMN were merged. Next, in order to obtain quality metrics that measure aspects of business process models, a literature review was carried out. The literature review was restricted by a proposed set of selection criteria: questions that every relevant publication describing quality metrics had to answer affirmatively, in order to identify only metrics that could be assigned to the identified quality characteristics. If the chosen quality metrics needed to be changed or adjusted for the sake of better results, the author added changes or adjustments and provided a rationale for them. Next, in order to obtain quality criteria, values of the quality metrics were gathered by measuring a repository of BPMN models. The repository was gathered as preparatory work for the thesis and consisted of models of varying quality. Manual measurement of quality metrics for each BPMN model in the repository could not be done within a reasonable amount of time; therefore, a tool to automatically calculate metrics for BPMN models was implemented. The quality criteria were proposed based on the interpretation of the measured values using statistical analysis. Later, quality functions that aggregate values of the metrics were proposed. The complete model was then integrated into the tool so that it could assess the quality of real BPMN models. Finally, the model for assessing the quality of business process models in BPMN was evaluated for usefulness through a survey and a survey-based experiment. Results. A metamodel of models for quality assessment of business process models in BPMN was proposed. A model for the quality assessment of models in BPMN was proposed and evaluated for usefulness. Initial sets of quality characteristics of models were found in the literature and the quality characteristics relevant to BPMN were extracted. Quality metrics that measure aspects of models were found and adjusted to the BPMN notation. Quality criteria that state how values of quality metrics can be classified as good or bad were provided. Quality functions that state whether quality characteristics are good or bad for a chosen BPMN model were provided. Finally, a tool that implements the model for quality assessment of models in BPMN was created. Conclusions. The results of the survey and the survey-based experiment showed that the proposed model for quality assessment of models in BPMN works in most cases and is needed in general. Additionally, the elements of the model that should be corrected were identified. Contacted users of BPMN expressed a willingness to use the suggested tool associated with the model for quality assessment of business process models in BPMN.
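    The abstract above chains quality metrics, quality criteria and quality functions together. As an illustration only (the thesis derives its actual criteria statistically from a repository of BPMN models; the metric name and threshold below are invented), a minimal sketch of that chain in TypeScript might look like this:

```typescript
// Illustrative sketch of the metric -> criterion -> quality-function chain.
// The metric "gatewayCount" and the threshold 10 are assumptions, not values
// taken from the thesis.
interface MetricResult {
  name: string;
  value: number;
}

// A quality criterion classifies one metric value as acceptable or not.
const gatewayCountCriterion = (m: MetricResult): boolean => m.value <= 10;

// A quality function aggregates criteria into a verdict for one
// quality characteristic (here: "complexity") of a BPMN model.
function complexityIsGood(metrics: MetricResult[]): boolean {
  return metrics
    .filter((m) => m.name === "gatewayCount")
    .every(gatewayCountCriterion);
}

console.log(complexityIsGood([{ name: "gatewayCount", value: 7 }])); // true
```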

  • 461.
    Said Tahirshah, Farid
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Comparison between Progressive Web App and Regular Web App (2019). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    In 2015 the term Progressive Web Application (PWA) was coined to describe applications that take advantage of all progressive app features. Some of the essential features are offline support, an app-like interface and a secure connection. Since then, case studies of PWA implementations have shown promising results for improving web page performance, time spent on site, user engagement, etc. The goal of this report is to analyze some of the effects of PWA. This work investigates the browser compatibility of PWA features, and compares and analyzes the performance and memory consumption of PWA features compared to a regular web app. Results showed that many PWA features are still not supported by some major browsers. The performance benchmark showed that the HTTPS connection required for a PWA slows down all of the PWA's performance metrics on the first visit. On a repeat visit, some of the PWA features, such as speed index, outperform the regular web app. Memory consumption of the PWA was more than twice that of the RWA. The conclusion is that even if some features are not directly supported by browsers, there may still be workaround solutions. A PWA is slower than a regular web app if HTTPS on the web server is not optimized. Different browsers have different memory limitations for PWA caches. HTTPS and PWA features should only be implemented if the web server supports HTTP/2; otherwise, performance can decrease.
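    For readers unfamiliar with the PWA features mentioned above (offline support, caching, the HTTPS requirement), the sketch below shows the core mechanism: a service worker that pre-caches the application shell and answers requests cache-first. It is a generic illustration, not code from the thesis; the file names and cache name are assumptions.

```typescript
// sw.ts -- minimal service-worker sketch (must be served over HTTPS).
// File list and cache name are placeholders.
const CACHE = "app-shell-v1";
const SHELL = ["/", "/index.html", "/app.js", "/style.css"];

self.addEventListener("install", (event: any) => {
  // Pre-cache the application shell so the app can start offline.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
});

self.addEventListener("fetch", (event: any) => {
  // Cache-first strategy: serve a cached response if one exists,
  // otherwise fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});

// In the page itself the worker is registered once:
// if ("serviceWorker" in navigator) navigator.serviceWorker.register("/sw.js");
```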

  • 462.
    Sandberg, Emil
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Creative Coding on the Web in p5.js: A Library Where JavaScript Meets Processing (2019). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Creative coding is the practice of writing code primarily for an expressive purpose rather than a functional one. It is mostly used in creative arts contexts. One of the most popular tools in creative coding is Processing. Processing is a desktop application and in recent years a web-based alternative named p5.js has been developed.

    This thesis investigates the p5.js JavaScript library. It looks at what can be accomplished with it and in which cases it might be used. The main focus is on the pros and cons of using p5.js for web graphics. Another point of focus is on how the web can be used as a creative platform with tools like p5.js. The goals are to provide an overview of p5.js and an evaluation of the p5.js library as a tool for creating interactive graphics and animations on the web.

    The research focuses on comparing p5.js with plain JavaScript from usability and performance perspectives and making general comparisons with other web-based frameworks for creative coding. The methods are a survey and interviews with members of creative coding communities, as well as performing coding experiments in p5.js and plain JavaScript and comparing the results and the process.

    The results from the coding experiments show that compared to plain JavaScript p5.js is easier to get started with, it is more intuitive, and code created in p5.js is easier to read. On the other hand, p5.js performs worse, especially when continuously drawing large amounts of elements to the screen. This is further supported by the survey and the interviews, which show that p5.js is liked for its usability, but that its performance issues and lack of advanced features mean that it is usually not considered for professional projects. The primary use case for p5.js is creating quick, visual prototypes. At the same time, the interviews show that p5.js has been used in a variety of contexts, both creative and practical.

    p5.js is a good library for getting started with coding creatively in the browser and is an excellent choice for experimenting and creating prototypes quickly. Should project requirements be much more advanced than that, there might be other options that will work better.
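    To make the usability comparison above concrete, here is a minimal p5.js sketch in instance mode, written in TypeScript. It is illustrative only and not taken from the thesis experiments; the equivalent plain-JavaScript canvas version would need explicit canvas creation, a requestAnimationFrame loop and manual mouse tracking.

```typescript
import p5 from "p5";

// Minimal p5.js instance-mode sketch: a circle follows the mouse.
new p5((p: p5) => {
  p.setup = () => {
    p.createCanvas(400, 400); // p5 creates and attaches the canvas for us
  };
  p.draw = () => {
    p.background(220);                      // clear each frame
    p.ellipse(p.mouseX, p.mouseY, 50, 50);  // draw at the cursor position
  };
});
```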

  • 463.
    Santos, Rodrigo
    et al.
    Fed Univ State Rio de Janeiro, BRA.
    Teixeira, Eldanae
    Univ Fed Rio de Janeiro, BRA.
    Mendes, Emilia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    McGregor, John
    Clemson Univ, USA.
    2nd Workshop on Social, Human, and Economic Aspects of Software (WASHES): Special Edition for Software Reuse (2017). In: Mastering Scale and Complexity in Software Reuse (ICSR 2017) / [ed] Botterweck, G., Werner, C., Springer International Publishing AG, 2017, pp. 223-224. Conference paper (Refereed).
    Abstract [en]

    The Special Edition for Software Reuse of the Workshop on Social, Human, and Economic Aspects of Software (WASHES) aims at bringing together researchers and practitioners who are interested in social, human, and economic aspects of software. WASHES is a forum to discuss models, methods, techniques, and tools to achieve software quality, improve reuse and deal with the existing issues in this context. This special edition's main topic is "Challenges of Reuse and the Social, Human, and Economic Aspects of Software". We believe it is important to investigate software reuse beyond the technical perspective and understand how the non-technical barriers of reuse affect practices, processes and tools in practice.

  • 464.
    Sathi, Veer Reddy
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Ramanujapura, Jai Simha
    A Quality Criteria Based Evaluation of Topic Models (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. Software testing is the process in which a particular software product or system is executed in order to find bugs or issues that may otherwise degrade its performance. Software testing is usually done based on pre-defined test cases. A test case can be defined as a set of terms or conditions used by software testers to determine whether a particular system under test operates as it is supposed to. However, in numerous situations there can be so many test cases that executing each and every one is practically impossible, as there may be many constraints. This causes testers to prioritize the functions that are to be tested. This is where the ability of topic models can be exploited. Topic models are unsupervised machine learning algorithms that can explore large corpora of data and classify them by identifying the hidden thematic structure in those corpora. Using topic models for test case prioritization can save a lot of time and resources.

    Objectives. In our study, we provide an overview of the amount of research that has been done in relation to topic models. We want to uncover various quality criteria, evaluation methods, and metrics that can be used to evaluate the topic models. Furthermore, we would also like to compare the performance of two topic models that are optimized for different quality criteria, on a particular interpretability task, and thereby determine the topic model that produces the best results for that task.

    Methods. A systematic mapping study was performed to gain an overview of the previous research that has been done on the evaluation of topic models. The mapping study focused on identifying quality criteria, evaluation methods, and metrics that have been used to evaluate topic models. The results of mapping study were then used to identify the most used quality criteria. The evaluation methods related to those criteria were then used to generate two optimized topic models. An experiment was conducted, where the topics generated from those two topic models were provided to a group of 20 subjects. The task was designed, so as to evaluate the interpretability of the generated topics. The performance of the two topic models was then compared by using the Precision, Recall, and F-measure.

    Results. Based on the results obtained from the mapping study, Latent Dirichlet Allocation (LDA) was found to be the most widely used topic model. Two LDA topic models were created, optimizing one for the quality criterion Generalizability (TG) and one for Interpretability (TI), using the Perplexity and Point-wise Mutual Information (PMI) measures respectively. For the selected metrics, TI showed better performance than TG in Precision and F-measure. However, the performance of TI and TG was comparable in the case of Recall. The total run time of TI was also found to be significantly higher than that of TG: the run time of TI was 46 hours and 35 minutes, whereas for TG it was 3 hours and 30 minutes.

    Conclusions. Looking at the F-measure, it can be concluded that the interpretability topic model (TI) performs better than the generalizability topic model (TG). However, while TI performed better in precision, recall was comparable. Furthermore, the computational cost to create TI is significantly higher than for TG. Hence, we conclude that the selection of the topic model optimization should be based on the aim of the task the model is used for. If the task requires high interpretability of the model and precision is important, such as for the prioritization of test cases based on content, then TI would be the right choice, provided time is not a limiting factor. However, if the task aims at generating topics that provide a basic understanding of the concepts (i.e., interpretability is not a high priority), then TG is the more suitable choice, making it better suited for time-critical tasks.
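    The comparison above relies on the standard definitions of precision, recall and F-measure. A small sketch (with made-up counts, not the thesis data) shows how the three relate:

```typescript
// Standard information-retrieval measures used to compare TI and TG.
// tp = true positives, fp = false positives, fn = false negatives.
const precision = (tp: number, fp: number): number => tp / (tp + fp);
const recall = (tp: number, fn: number): number => tp / (tp + fn);
const fMeasure = (p: number, r: number): number => (2 * p * r) / (p + r);

// Example with invented counts:
const p = precision(30, 10); // 0.75
const r = recall(30, 20);    // 0.60
console.log(fMeasure(p, r).toFixed(2)); // "0.67"
```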

  • 465.
    Sauerwein, Clemens
    et al.
    University of Innsbruck, AUT.
    Pekaric, Irdin
    University of Innsbruck, AUT.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Breu, Ruth
    University of Innsbruck, AUT.
    An Analysis and Classification of Public Information Security Data Sources used in Research and Practice (2019). In: Computers & Security (Print), ISSN 0167-4048, E-ISSN 1872-6208, Vol. 82, pp. 140-155. Journal article (Refereed).
    Abstract [en]

    In order to counteract today’s sophisticated and increasing number of cyber threats, the timely acquisition of information regarding vulnerabilities, attacks, threats, countermeasures and risks is crucial. Therefore, employees tasked with information security risk management processes rely on a variety of information security data sources, ranging from inter-organizational threat intelligence sharing platforms to public information security data sources, such as mailing lists or expert blogs. However, research and practice lack a comprehensive overview of these public information security data sources, their characteristics and dependencies. Moreover, comprehensive knowledge about these sources would be beneficial for systematically using and integrating them into information security processes. In this paper, a triangulation study is conducted to identify and analyze public information security data sources. Furthermore, a taxonomy is introduced to classify and compare these data sources based on the following six dimensions: (1) Type of information, (2) Integrability, (3) Timeliness, (4) Originality, (5) Type of Source, and (6) Trustworthiness. In total, 68 public information security data sources were identified and classified. The investigations showed that research and practice rely on a large variety of heterogeneous information security data sources, which makes it more difficult to integrate and use them for information security and risk management processes.

  • 466.
    Schlick, Rupert
    et al.
    Austrian Institute of Technology, AUT.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Majzik, Istvan
    Budapest University of Technology and Economics, HUN.
    Nardone, Roberto
    Universita degli Studi di Napoli Federico II, ITA.
    Raschke, Alexander
    Universitat Ulm, DEU.
    Snook, Colin
    University of Southampton, GBR.
    Vittorini, Valeria
    Universita degli Studi di Napoli Federico II, ITA.
    A proposal of an example and experiments repository to foster industrial adoption of formal methods (2018). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Verlag, 2018, Vol. 11247, pp. 249-272. Conference paper (Refereed).
    Abstract [en]

    Formal methods (in a broad sense) have been around almost since the beginning of computer science. Nonetheless, there is a perception in the formal methods community that take-up by industry is low considering the potential benefits. We take a look at possible reasons and give candidate explanations for this effect. To address the issue, we propose a repository of industry-relevant example problems with an accompanying open data storage for experiment results in order to document, disseminate and compare exemplary solutions from formal model based methods. This would allow potential users from industry to better understand the available solutions and to more easily select and adopt a formal method that fits their needs. At the same time, it would foster the adoption of open data and good scientific practice in this research field. © Springer Nature Switzerland AG 2018.

  • 467.
    Seidi, Nahid
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Document-Based Databases In Platform SW Architecture For Safety Related Embedded System (2014). Independent thesis, Basic level (degree of Bachelor). Student thesis.
    Abstract [en]

    The project is an investigation of document-based databases, their evaluation criteria and use cases regarding requirements management, SW architecture and test management, in order to set up an Embedded Systems Lifecycle Management (ESLM) tool. The current database used in the ESLM is a graph database called Neo4j, which meets the needs of the current system. The result of studying document databases led to the decision not to use a document database for the system. Instead, given the requirements, a combination of a graph database and a document database could be a practical solution in the future.

  • 468.
    Selander, Nizar
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik. Ericsson.
    Resource utilization comparison of Cassandra and Elasticsearch (2019). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Elasticsearch and Cassandra are two of the most widely used databases today, with Elasticsearch showing a more recent resurgence due to its unique full-text search feature, akin to that of a search engine, contrasting with the conventional query-language-based methods used to perform data searching and retrieval operations.

    The demand for more powerful and better performing, yet more feature-rich and flexible, databases has been ever growing. This project attempts to study how the two databases perform under a specific workload of 2,000,000 fixed-size logs and in an environment where the two can be compared while keeping the results of the experiment meaningful for the production environment for which they are intended.

    A total of three benchmarks were carried out: an Elasticsearch deployment using the default configuration and two Cassandra deployments, one with the default configuration and one with a modified configuration that reflects a configuration currently running in production for the task at hand.

    The benchmarks showed very interesting performance differences in terms of CPU, memory and disk space usage. Elasticsearch showed the best performance overall, using significantly less memory and disk space as well as CPU to some degree.

    However, the benchmarks were done in a very specific set of configurations and with a very specific data set and workload. Those differences should be considered when comparing the benchmark results.
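    As a rough illustration of the kind of fixed-size log workload described above, the sketch below writes one log entry to each store using the official Node.js clients. It is an assumption-laden example (index name, table schema and connection settings are invented) and not the benchmark code used in the thesis.

```typescript
import { Client as EsClient } from "@elastic/elasticsearch";
import { Client as CassandraClient } from "cassandra-driver";

// A fixed-size log entry similar in spirit to the benchmark workload.
const entry = { ts: new Date(), level: "INFO", msg: "x".repeat(200) };

async function writeOnce(): Promise<void> {
  // Elasticsearch: documents are indexed into an index ("logs" is invented).
  const es = new EsClient({ node: "http://localhost:9200" });
  await es.index({ index: "logs", document: entry });

  // Cassandra: rows are inserted into a table (keyspace/table are invented).
  const cass = new CassandraClient({
    contactPoints: ["127.0.0.1"],
    localDataCenter: "datacenter1",
    keyspace: "logs",
  });
  await cass.connect();
  await cass.execute(
    "INSERT INTO entries (id, ts, level, msg) VALUES (uuid(), ?, ?, ?)",
    [entry.ts, entry.level, entry.msg],
    { prepare: true }
  );
  await cass.shutdown();
}

writeOnce().catch(console.error);
```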

  • 469.
    Selvi, Mehmet
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Büyükcan, Güral
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Influential factors affecting the undesired fault correction outcomes in large-scaled companies (2014). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. The fault correction process is one of the two main activities in the software evolution model. As it is very important for software maintainability, the software industry, especially large-scale global companies, aims to have mature fault correction processes that detect faults and correct them in a continuous and efficient way. A considerable amount of effort is needed and some measures must be taken in order to be successful. This master thesis is mainly concerned with fault correction and finding possible solutions for a better process. Objectives. The main aim of this study is to investigate and identify influential factors that affect undesired fault correction outcomes. This study has three main stages: 1) to identify factors from company data that affect target factors, 2) to elicit influential factors from interviews and a literature review, 3) to prioritize the influential factors based on their significance. Based on the outcomes, giving recommendations to the company and to the software industry is the other aim of this master thesis. Methods. This study mainly reflects empirical research on the software fault correction process and its undesired outcomes. In this master thesis, both quantitative and qualitative data analyses were performed. A case study was conducted with Ericsson AB, in which the archival data was analyzed using several methods including machine learning and Apriori. Also, surveys and semi-structured interviews were used as data collection instruments. Apart from this, a literature review was performed in order to collect influential factors for the fault correction process. Prioritization of the influential factors was done using hierarchical cumulative voting. Results. Throughout the case study, quantitative data analysis, interviews and a literature review were conducted, and a total of 45 influential factors were identified. Using these factors, prioritization was performed with 26 practitioners (4 internal and 22 external) in order to find which factors are most a) significant and b) relevant to undesired fault correction outcomes. Based on the outcomes of the prioritization, a cause-effect diagram was drawn that includes all the important factors. Conclusions. This research showed that there are many factors influencing the fault correction process. The practitioners mostly complained about the lack of deeper analysis: corrections of faults do not result in new requirements and are not used for process improvement. Also, limited resources (such as workforce, vacations and sickness), unbalanced assignment of fault correction tasks and too many fault reports at the same time cause problems. Moreover, the priorities of faults and customers affect the lead time of the fault correction process, as critical faults are fixed first.

  • 470.
    Sentilles, Severine
    et al.
    Malardalen Univ, SWE.
    Papatheocharous, Efi
    Swedish Inst Comp Sci, SWE.
    Ciccozzi, Federico
    Malardalen Univ, SWE.
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A Property Model Ontology (2016). In: 2016 42nd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2016, pp. 165-172. Conference paper (Refereed).
    Abstract [en]

    Efficient development of high quality software is tightly coupled to the ability of quickly taking complex decisions based on trustworthy facts. In component-based software engineering, the decisions related to selecting the most suitable component among functionally-equivalent ones are of paramount importance. Despite sharing the same functionality, components differ in terms of their extra-functional properties. Therefore, to make informed selections, it is crucial to evaluate extra-functional properties in a systematic way. To date, many properties and evaluation methods that are not necessarily compatible with each other exist. The property model ontology presented in this paper represents the first step towards providing a systematic way to describe extra-functional properties and their evaluation methods, and thus making them comparable. This is beneficial from two perspectives. First, it aids researchers in identifying comparable property models as a guide for empirical evaluations. Second, practitioners are supported in choosing among alternative evaluation methods for the properties of their interest. The use of the ontology is illustrated by instantiating a subset of property models relevant in the automotive domain.

  • 471.
    Settenvini, Matteo
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Algorithmic Analysis of Name-Bounded Programs: From Java programs to Petri Nets via π-calculus (2014). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Name-bounded analysis is a type of static analysis that allows us to take a concurrent program, abstract away from it, and check for some interesting properties, such as deadlock-freedom, or watching the propagation of variables across different components or layers of the system. Objectives. In this study we investigate the difficulties of giving a representation of computer programs in a name-bounded variation of π-calculus. Methods. A preliminary literature review is conducted to assess the presence (or lack thereof) of other successful translations from real-world programming languages to π-calculus, as well as the presence of relevant prior art in the modelling of concurrent systems. Results. This thesis gives a novel translation going from a relevant subset of the Java programming language to its corresponding name-bounded π-calculus equivalent. In particular, the strengths of our translation are being able to dispose of names representing inactive objects when there are no circular references, and a transparent handling of polymorphism and dynamic method resolution. The resulting processes can then be further transformed into their Petri net representation, enabling us to check for important properties, such as reachability and coverability of program states. Conclusions. We conclude that some important properties that are not, in general, easy to check for concurrent programs can in fact be feasibly determined by giving a more constrained model in π-calculus first, and as Petri nets afterwards.

  • 472.
    Seyff, Norbert
    et al.
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Stade, Melanie
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Fotrousi, Farnaz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Glinz, Martin
    University of Zurich, CHE.
    Guzman, Emitza
    University of Zurich, CHE.
    Kolpondinos-Huber, Martina
    University of Zurich, CHE.
    Arzapalo, Denisse Muñante
    Fondazione Bruno Kessler, ITA.
    Oriol, Marc
    Universitat Politècnica de Catalunya, ESP.
    Schaniel, Ronnie
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    End-user driven feedback prioritization (2017). In: CEUR Workshop Proceedings / [ed] Ameller D., Dieste O., Knauss E., Susi A., Dalpiaz F., Kifetew F.M., Tenbergen B., Palomares C., Seffah A., Forbrig P., Berry D.M., Daneva M., Knauss A., Siena A., Daun M., Herrmann A., Kirikova M., Groen E.C., Horkoff J., Maeder P., Massacci F., Ralyte J., CEUR-WS, 2017, Vol. 1796. Conference paper (Refereed).
    Abstract [en]

    End-user feedback is becoming more important for the evolution of software systems. There exist various communication channels for end-users (app stores, social networks) which allow them to express their experiences and requirements regarding a software application. End-users communicate a large amount of feedback via these channels, which leads to open issues regarding the use of end-user feedback for software development, maintenance and evolution. This includes investigating how to identify relevant feedback scattered across different feedback channels and how to determine the priority of the feedback issues communicated. In this research preview paper, we discuss ideas for end-user driven feedback prioritization. © Copyright 2017 for this paper by its authors.

  • 473.
    Shafiq, Hafiz Adnan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Arshad, Zaki
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Automated Debugging and Bug Fixing Solutions: A Systematic Literature Review and Classification (2013). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context: Bug fixing is the process of ensuring correct source code and is done by developers. Automated debugging and bug fixing solutions minimize human intervention and hence minimize the chance of producing new bugs in the corrected program. Scope and Objectives: In this study we performed a detailed systematic literature review. The scope of the work is to identify all solutions that correct software automatically or semi-automatically. Solutions for automatic correction of software do not need human intervention, while semi-automatic solutions facilitate a developer in fixing a bug. We aim to gather all such solutions for fixing bugs in design, i.e., code, UML design, algorithms and software architecture. Automated detection, isolation and localization of bugs are not in our scope. Moreover, we are only concerned with software bugs and exclude the hardware and networking domains. Methods: A detailed systematic literature review (SLR) has been performed. A number of bibliographic sources were searched, including Inspec, IEEE Xplore, ACM Digital Library, Scopus, SpringerLink and Google Scholar. Inclusion/exclusion, study quality assessment, data extraction and synthesis were performed in depth according to the guidelines for performing an SLR. Grounded theory was used to analyze the literature data, and Kappa analysis was used to check the agreement level between the two researchers. Results: Through the SLR we identified 46 techniques. These techniques are classified into automated and semi-automated debugging and bug fixing. The strengths and weaknesses of each are identified, along with the types of bugs each can fix and the languages in which they can be implemented. In the end, a classification is performed which generates a list of approaches, techniques, tools, frameworks, methods and systems. Alongside this classification and categorization, we separated bug fixing and debugging on the basis of the search algorithms used. Conclusion: The achieved results comprise all automated and semi-automated debugging and bug fixing solutions available in the literature. The strengths/benefits and weaknesses/limitations of these solutions are identified. We also recognize the types of bugs that can be fixed using these solutions, as well as the programming languages in which they can be implemented. In the end, a detailed classification is performed.

  • 474. Shah, Syed Muhammad Ali
    et al.
    Alvi, Usman Sattar
    Gencel, Cigdem
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Comparing a Hybrid Testing Process with Scripted and Exploratory Testing: An Experimental Study with Practitioners (2014). Conference paper (Refereed).
    Abstract [en]

    This paper presents an experimental study comparing the testing quality of a Hybrid Testing (HT) process with the approaches commonly used in industry: Scripted Testing (ST) and Exploratory Testing (ET). The study was conducted in an international IT service company in Sweden with the involvement of six experienced testers. Two measures were used for comparison: 1) defect detection effectiveness (DDE) and 2) functionality coverage (FC). The results indicated that HT performed better in terms of DDE than ST and worse than ET. In terms of FC, HT performed better than ET, while no significant differences were observed between HT and ST. Furthermore, HT performed best for experienced testers, but worse with less experienced testers.
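    The two measures named above are simple ratios; a sketch with invented numbers (the definitions are the commonly used ones, not quoted from the paper) clarifies what is being compared:

```typescript
// Defect Detection Effectiveness: share of known defects a tester found.
const dde = (defectsFound: number, totalKnownDefects: number): number =>
  defectsFound / totalKnownDefects;

// Functionality Coverage: share of the system's functions exercised by the tests.
const fc = (functionsCovered: number, totalFunctions: number): number =>
  functionsCovered / totalFunctions;

// Invented example: 18 of 25 seeded defects found, 40 of 50 functions covered.
console.log(dde(18, 25)); // 0.72
console.log(fc(40, 50));  // 0.8
```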

  • 475.
    Shojaifar, Alireza
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Evaluation and Improvement of the RSSI-based Localization Algorithm: Received Signal Strength Indication (RSSI)2015Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context: Wireless Sensor Networks (WSN) are applied to collect information by distributed sensor nodes (anchors) that are usually in fixed positions. Localization, i.e., estimating the location of moving sensors, devices or people, is one of the essential WSN services and a main requirement. To find the location of a moving object, some algorithms are based on RSSI (Received Signal Strength Indication). Since very accurate localization is not always feasible (due to cost, complexity and energy issues), the RSSI-based method is a practical solution. This method has two specific features: it does not require extra hardware (cost and energy aspects) and, theoretically, RSSI is a function of distance.
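
    The statement that RSSI is theoretically a function of distance is commonly formalized with the log-distance path loss model, RSSI(d) = RSSI(d0) - 10·n·log10(d/d0), which can be inverted to estimate distance and then combined over several anchors to estimate a position. The Python sketch below is a generic illustration of that model, not the algorithm developed in this thesis; the reference RSSI at one metre, the path loss exponent and the anchor coordinates are made-up values.

        import math

        def rssi_to_distance(rssi, rssi_at_1m=-45.0, path_loss_exponent=2.5):
            """Invert the log-distance path loss model to estimate distance in metres."""
            return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exponent))

        def estimate_position(anchors, rssi_readings, grid_step=0.25, size=10.0):
            """Brute-force least-squares position estimate on a small grid.

            anchors: list of (x, y) anchor coordinates.
            rssi_readings: RSSI measured at the target from each anchor (same order).
            """
            distances = [rssi_to_distance(r) for r in rssi_readings]
            best, best_err = None, float("inf")
            steps = int(size / grid_step) + 1
            for i in range(steps):
                for j in range(steps):
                    x, y = i * grid_step, j * grid_step
                    err = sum((math.hypot(x - ax, y - ay) - d) ** 2
                              for (ax, ay), d in zip(anchors, distances))
                    if err < best_err:
                        best, best_err = (x, y), err
            return best

        # Hypothetical anchors in a 10 m x 10 m room and RSSI values measured at the target.
        anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
        readings = [-60.0, -70.0, -67.0, -75.0]
        print("estimated position:", estimate_position(anchors, readings))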

    Objectives: In this thesis we firstly develop an RSSI-based localization algorithm (server side application) to find the position of a moving object (target node) in different situations. These situations are defined in different experiments so that we can observe and compare the results (finding accurate positioning). Secondly, since the RSSI characteristic is highly related to the environment in which an experiment is done (movement, obstacles, temperature, humidity …), the importance and contribution of the “environmental condition” in the empirical papers is studied.

    Methods: The first method, a conventional literature review (LR), is carried out to find general information about localization algorithms in WSNs, with a focus on the RSSI-based method. This LR is based on papers and literature prepared by the collaborating company and the supervisor, and on ad-hoc searches in the scientific IEEE database. Through this method, the relevant information, the theoretical algorithm (mathematical function) and the different effective parameters of the RSSI-based algorithm are defined. The second method is experimentation, based on the development of the mentioned algorithm (since experiments are usually performed in development, evaluation and problem solving research). Because we want to compare and evaluate the results of the experiments with respect to the effect of environmental conditions, a third method is applied: a Systematic Mapping Study (SMS) that essentially focuses on the contribution of the “environmental condition” effect in the empirical papers.

    Results: The results of 30 experiments and their analyses show a high correlation between the RSSI values and environmental conditions. The results of the experiments also indicate that a direct signal path between a target node and the anchors can improve the localization accuracy. Finally, the experiments’ results show that the target node’s antenna type has a clear effect on the RSSI values and, consequently, on the distance measurement error. Our findings in the mapping study reveal that although there are many studies about the accuracy requirement in the context of RSSI-based localization, there is a lack of research on other localization requirements such as performance, reliability and stability. Also, only a few studies consider RSSI localization under real-world conditions.

    Conclusion: This thesis studies various localization methods and techniques in WSNs. The thesis then focuses on RSSI-based localization by implementing one algorithm and analyzing the experiments’ results. In our experiments, we mostly focus on the environmental parameters that affect localization accuracy. Moreover, we indicate some areas of research in this context which need further study.

  • 476.
    Shojaifar, Alireza
    et al.
    Fachhochschule Nordwestschweiz, CHE.
    Fricker, Samuel
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gwerder, Martin
    Fachhochschule Nordwestschweiz, CHE.
    Elicitation of SME requirements for cybersecurity solutions by studying adherence to recommendations2018Inngår i: CEUR Workshop Proceedings / [ed] Dalpiaz F.,Franch X.,Kirikova M.,Ralyte J.,Spoletini P.,Chisik Y.,Ferrari A.,Madhavji N.,Palomares C.,Sabetzadeh M.,van der Linden D.,Schmid K.,Charrada E.B.,Sawyer P.,Forbrig P.,Zamansky A., CEUR-WS , 2018, Vol. 2075Konferansepaper (Fagfellevurdert)
    Abstract [en]

    [Context and motivation] Small and medium-sized enterprises (SME) have become the weak spot of our economy for cyber attacks. These companies are large in number, often do not have the controls in place to prevent successful attacks, and are not prepared to systematically manage their cybersecurity capabilities. [Question/problem] One of the reasons why many SME do not adopt cybersecurity is that developers of cybersecurity solutions understand little of the SME context and the requirements for successful use of these solutions. [Principal ideas/results] We elicit requirements by studying how cybersecurity experts provide advice to SME. The experts' recommendations offer insights into what the important capabilities of the solution are and how these capabilities ought to be used for mitigating cybersecurity threats. The adoption of a recommendation hints at a correct match of the solution, hence successful consideration of requirements. Abandoned recommendations point to a misalignment that can be used as a source to inquire into missed requirements. Re-occurrence of adoption or abandonment decisions corroborates the presence of requirements. [Contributions] This poster describes the challenges of SME regarding cybersecurity and introduces our proposed approach to elicit requirements for cybersecurity solutions. The poster describes CYSEC, our tool used to capture cybersecurity advice and help scale cybersecurity requirements elicitation to a large number of participating SME. We conclude by outlining the planned research to develop and validate CYSEC.

  • 477.
    Silva, Dennis
    et al.
    Universidade Federal do Piaui, BRA.
    Rabelo, Ricardo
    Universidade Federal do Piaui, BRA.
    Campanha, Matheus
    Universidade Federal do Piaui, BRA.
    Neto, Pedro Santos
    Universidade Federal do Piaui, BRA.
    Oliveira, Pedro Almir
    Instituto Federal do Maranhão, BRA.
    Britto, Ricardo
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A hybrid approach for test case prioritization and selection2016Inngår i: 2016 IEEE Congress on Evolutionary Computation, CEC 2016, IEEE, 2016, s. 4508-4515Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Software testing consists of the dynamic verification of the behavior of a program on a set of test cases. When a program is modified, it must be tested to verify that the changes did not introduce undesirable effects on its functionality. Rerunning all test cases can be impossible due to cost, time and resource constraints, so a subset of test cases must be created before test execution. This is a hard problem, and the use of standard Software Engineering techniques may not be suitable. This work presents an approach for test case prioritization and selection, based on relevant inputs obtained from a software development environment. The approach uses Software Quality Function Deployment (SQFD) to deploy the features' relevance among the system components, Mamdani fuzzy inference systems to infer the criticality of each class, and Ant Colony Optimization to select test cases. An evaluation of the approach is presented, using data from simulations with different numbers of tests.
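
    As a rough illustration of only the fuzzy part of this approach, the sketch below runs a tiny hand-rolled Mamdani inference with triangular membership functions, min for AND and discrete centroid defuzzification to turn two inputs into a criticality score. The linguistic variables, rule base and numbers are invented for the example; they are not taken from the paper, and the Ant Colony Optimization step is omitted.

        def tri(x, a, b, c):
            """Triangular membership function with corners a <= b <= c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def mamdani_criticality(relevance, change_freq):
            """Tiny Mamdani system: two inputs in [0, 1], one output in [0, 1]."""
            # Fuzzify the inputs (low/high terms for each variable).
            rel_low, rel_high = tri(relevance, -1, 0, 1), tri(relevance, 0, 1, 2)
            chg_low, chg_high = tri(change_freq, -1, 0, 1), tri(change_freq, 0, 1, 2)

            # Rule base (min for AND): firing strength per output term.
            crit_low = min(rel_low, chg_low)
            crit_med = max(min(rel_low, chg_high), min(rel_high, chg_low))
            crit_high = min(rel_high, chg_high)

            # Aggregate the clipped output sets and defuzzify by discrete centroid.
            terms = [(crit_low, (-0.5, 0.0, 0.5)),   # "low" centred at 0.0
                     (crit_med, (0.0, 0.5, 1.0)),    # "medium" centred at 0.5
                     (crit_high, (0.5, 1.0, 1.5))]   # "high" centred at 1.0
            xs = [i / 100 for i in range(101)]
            mu = [max(min(s, tri(x, *shape)) for s, shape in terms) for x in xs]
            total = sum(mu)
            return sum(x * m for x, m in zip(xs, mu)) / total if total else 0.0

        print(f"criticality = {mamdani_criticality(relevance=0.8, change_freq=0.6):.2f}")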

  • 478.
    Silva, Lakmal
    et al.
    Ericsson, SWE.
    Unterkalmsteiner, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wnuk, Krzysztof
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Monitoring and maintenance of telecommunication systems: Challenges and research perspectives2019Inngår i: ENGINEERING SOFTWARE SYSTEMS: RESEARCH AND PRAXIS / [ed] Kosiuczenko, P; Zielinski, Z, Springer Verlag , 2019, 830, Vol. 830, s. 166-172Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In this paper, we present challenges associated with monitoring and maintaining a large telecom system at Ericsson that was developed with a high degree of component reuse. The system consists of multiple services, composed of both legacy and modern systems, that are constantly changing and need to be adapted to changing business needs. The paper is based on firsthand experience from architecting, developing and maintaining such a system, pointing out current challenges and potential avenues for future research that might contribute to addressing them.

  • 479.
    Silvander, Johan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Business Process Optimization with Reinforcement Learning2019Inngår i: Lect. Notes Bus. Inf. Process., Springer Verlag , 2019, Vol. 356, s. 203-212Konferansepaper (Fagfellevurdert)
    Abstract [en]

    We investigate the use of deep reinforcement learning to optimize business processes in a business support system. The focus of this paper is how the Q-Learning algorithm, combined with deep learning, can be configured in order to support the optimization of business processes in an environment that includes some degree of uncertainty. We make the investigation possible by implementing a software agent with the help of a deep learning tool set. The study shows that reinforcement learning is a useful technique for business process optimization, but more guidance regarding parameter setting is needed in this area.
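
    A minimal tabular sketch of the Q-Learning update rule used as the basis here, Q(s,a) ← Q(s,a) + α·(r + γ·max_a' Q(s',a') − Q(s,a)), is shown below on a toy three-state process with epsilon-greedy exploration. The states, actions and rewards are invented; the paper's deep-learning variant replaces the table with a neural network and targets a real business support system.

        import random
        from collections import defaultdict

        # Toy process: states 0..2, two actions per state; action 1 in the last state
        # yields the only reward, so the learned policy should come to prefer it there.
        def step(state, action):
            reward = 1.0 if (state == 2 and action == 1) else 0.0
            next_state = (state + 1) % 3
            return next_state, reward

        alpha, gamma, epsilon = 0.1, 0.9, 0.2
        q = defaultdict(float)  # q[(state, action)]

        state = 0
        for _ in range(5000):
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                action = random.choice([0, 1])
            else:
                action = max([0, 1], key=lambda a: q[(state, a)])
            next_state, reward = step(state, action)
            # Q-Learning update: move Q(s, a) towards r + gamma * max_a' Q(s', a').
            best_next = max(q[(next_state, a)] for a in [0, 1])
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state

        for s in range(3):
            print(s, {a: round(q[(s, a)], 2) for a in [0, 1]})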

  • 480.
    Silvander, Johan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Component Selection with Fuzzy Decision Making2018Inngår i: Procedia Computer Science, Elsevier B.V. , 2018, Vol. 126, s. 1378-1386Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In many situations a decision maker (DM) would like to grade a component, or rank several components of the same type. Often a component type has many features which are deemed valuable by the DM. Other vital features are not known by the DM but are needed for the component to function. However, it should be possible to guide the DM towards the desired business solution without requiring detailed knowledge of the component type from the DM. We propose a framework for component selection with the help of fuzzy decision making. The work is based on algorithms from fuzzy decision making, which we have adapted or extended. The framework was validated by practitioners, who found the framework useful.

  • 481.
    Silvander, Johan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Towards Intent-Driven Systems2017Licentiatavhandling, med artikler (Annet vitenskapelig)
    Abstract [en]

    Context: Software supporting an enterprise’s business, also known as a business support system, needs to support the correlation of activities between actors as well as influence the activities based on knowledge about the value networks in which the enterprise acts. This can be supported with the help of intent-driven systems. The aim of intent-driven systems is to capture stakeholders’ intents and transform these into a form that enables computer processing of them. Only then are different machine actors able to negotiate with each other on behalf of their respective stakeholders and their intents, and suggest a mutually beneficial agreement.

    Objective: When building a business support system it is critical to separate the business model of the business support system itself from the business models used by the enterprise which is using the business support system. The core idea of intent-driven systems is the possibility to change the behavior of the system itself, based on stakeholder intents. This requires a separation of concerns between the parts of the system used to execute the stakeholder business and the parts which are used to design the business based on stakeholder intents. The business studio is software that supports the realization of the business models used by the enterprise by configuring the capabilities provided by the business support system. The aim is to find out how we can support the design of a business studio which is based on intent-driven systems.

    Method: We are using the design science framework as our research framework. During our design science study we have used the following research methods: systematic literature review, case study, quasi-experiment, and action research.

    Results: We have produced two design artifacts as a first step towards supporting the design of a business studio. These artifacts are the models and quasi-experiment in Chapter 3, and the action research in Chapter 4. The models found during the case study have proved to be a valuable artifact for the stakeholder. The results from the quasi-experiment and the action research are seen as new problem-solving knowledge by the stakeholder.

    Conclusion: The synthesis shows a need for further research regarding semantic interchange of information, actor interaction in intent-driven systems, and the governance of intent-driven systems.

  • 482.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Svahnberg, Mikael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A Systematic Literature Review on Intent-Driven SystemsInngår i: Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Context: The aim of intent-driven systems is to capture stakeholders’ intents and transform these into a form that enables computer processing of the intents. Only then are different computer-based agents able to negotiate with each other on behalf of their respective stakeholders and their intents, and suggest a mutually beneficial agreement. This requires a separation of concerns between the parts of the system used to execute the stakeholder business, and the parts which are used to design the business based on stakeholder intents.

    Objective: The aim is to find out which methods/techniques, as well as enabling aspects useful for an intent-driven system, are covered by the research literature.

    Method: As a part of a design science study, a Systematic Literature Review is conducted.

    Results: Methods/techniques which can be used as building blocks to construct intent-driven systems exist in the literature. How these methods/techniques can interact with the aspects needed to enable flexible realizations of intent-driven systems is not evident in the existing literature.

    Conclusion: The synthesis shows a need for further research regarding semantic interchange of information, actor interaction in intent-driven systems, and the governance of intent-driven systems.

  • 483.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Svahnberg, Mikael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Towards Executable Business Rules2017Annet (Fagfellevurdert)
    Abstract [en]

    Context:  In today's implementations of business support systems, business rules are configured in different places of the system, and in different formats. This makes it hard to have a common view of what is defined, and to execute the same logic in different parts of systems. It is desired to have a common governance structure and a standardized way of handling the business rules.

    Objective: To investigate if it is possible to support visual and logical verification of business rules and to generate executable business rules.

    Method: Together with practitioners we conducted an experiment.

    Results: We have implemented a machine learning pipeline which supports visual and logical verification of business rules, and the generation of executable business rules. From a machine learning perspective, we have added the possibility for the ID3 algorithm to use continuous features.

    Conclusion: The experiment shows that it is possible to support visual and logical verification of business rules, and to generate executable business rules with the help of a machine learning pipeline.
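
    Handling continuous features in an ID3-style tree is typically done by evaluating candidate thresholds and choosing the binary split with the highest information gain (as popularized by C4.5). The sketch below illustrates that general idea; the feature values and rule outcomes are made up, and it is not claimed to be the exact extension implemented in the experiment.

        import math
        from collections import Counter

        def entropy(labels):
            counts = Counter(labels)
            total = len(labels)
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        def best_threshold(values, labels):
            """Best binary split of a continuous feature by information gain."""
            base = entropy(labels)
            pairs = sorted(zip(values, labels))
            best_gain, best_t = 0.0, None
            for i in range(1, len(pairs)):
                if pairs[i - 1][0] == pairs[i][0]:
                    continue
                t = (pairs[i - 1][0] + pairs[i][0]) / 2  # midpoint candidate threshold
                left = [l for v, l in pairs if v <= t]
                right = [l for v, l in pairs if v > t]
                gain = base - (len(left) / len(pairs)) * entropy(left) \
                            - (len(right) / len(pairs)) * entropy(right)
                if gain > best_gain:
                    best_gain, best_t = gain, t
            return best_t, best_gain

        # Hypothetical continuous feature (e.g. an amount) and a binary rule outcome.
        amounts = [120, 340, 90, 610, 450, 80, 700, 300]
        outcome = ["no", "yes", "no", "yes", "yes", "no", "yes", "no"]
        print(best_threshold(amounts, outcome))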

  • 484.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Svahnberg, Mikael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Uncover and Assess Rule Adherence Based on Decisions2018Inngår i: Lecture Notes in Business Information Processing / [ed] Shishkov B., Springer Verlag , 2018, Vol. 319, s. 249-259Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Context: Decisions taken by medical practitioners may be based on explicit and implicit rules. By uncovering these rules, a medical practitioner may be able to explain decisions in a better way, both to themselves and to the person whom the decision affects. Objective: We investigate whether it is possible for a machine learning pipeline to uncover rules used by medical practitioners when they decide whether a patient can be operated on or not. The uncovered rules should have a linguistic meaning. Method: We evaluate two different algorithms, one of which is developed by us and named “the membership detection algorithm”. The evaluation is done with the help of real-world data provided by a hospital. Results: The membership detection algorithm has a significantly better relevance measure compared to the second algorithm. Conclusion: A machine learning pipeline, based on our algorithm, makes it possible to give medical practitioners an understanding of, or a basis to question, how decisions have been taken. With the help of the uncovered fuzzy decision algorithm it is possible to test suggested changes to the feature limits.

  • 485.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Svahnberg, Mikael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Uncovering Implicit Rules in Medicine DiagnosisInngår i: Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Context: Decisions taken by experts may be based on explicit and implicit rules. By uncovering the implicit rules, the expert may be able to explain decisions in a better way, both to themselves and to the person whom the decision affects. In the area of medicine, laws require the expert to be able to explain a decision when a patient complains about it. Another vital aspect is the ability of the expert to explain to the patient why a certain decision is taken, and the risks associated with the decision.

    Objective: To investigate whether it is possible for a machine learning pipeline to find implicit rules used by experts when they decide whether a patient can be operated on or not.

    Method: We conduct an analysis of a data set containing information about patients and the decision whether an operation should be performed or not.

    Results: We have implemented a machine learning pipeline which supports the detection of implicit rules in a data set. The detection of the implicit rules is supported by an algorithm which implements an agglomerative merging of feature values. We have improved the original algorithm by showing the borders of the feature values of a discretization bin.

    Conclusion: The analysis of the data set shows it is possible to find implicit rules used by the experts with the help of an agglomerative merging of feature values.
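
    The agglomerative merging of feature values referred to above can be pictured as bottom-up merging of adjacent value bins whose class distributions are most alike, which also yields the bin borders. The sketch below is a generic illustration of that idea (using total variation distance between bin distributions); it is not the authors' algorithm, and the feature values and decisions are invented.

        from collections import Counter

        def distribution(labels):
            total = len(labels)
            counts = Counter(labels)
            return {c: counts[c] / total for c in counts}

        def distance(bin_a, bin_b):
            """Total variation distance between the class distributions of two bins."""
            da, db = distribution(bin_a["labels"]), distribution(bin_b["labels"])
            classes = set(da) | set(db)
            return 0.5 * sum(abs(da.get(c, 0.0) - db.get(c, 0.0)) for c in classes)

        def agglomerative_bins(values, labels, target_bins=3):
            """Bottom-up merging of adjacent value bins with similar class distributions."""
            pairs = sorted(zip(values, labels))
            # Start with one bin per distinct feature value.
            bins = []
            for v, l in pairs:
                if bins and bins[-1]["low"] == v:
                    bins[-1]["labels"].append(l)
                else:
                    bins.append({"low": v, "high": v, "labels": [l]})
            while len(bins) > target_bins:
                # Merge the adjacent pair whose class distributions are most alike.
                i = min(range(len(bins) - 1), key=lambda k: distance(bins[k], bins[k + 1]))
                bins[i]["high"] = bins[i + 1]["high"]
                bins[i]["labels"] += bins[i + 1]["labels"]
                del bins[i + 1]
            return [(b["low"], b["high"]) for b in bins]  # bin borders

        # Hypothetical lab value and operate/reject decisions.
        feature = [1.2, 1.4, 1.9, 2.3, 2.5, 3.1, 3.4, 4.0, 4.2, 4.8]
        decision = ["reject", "reject", "reject", "operate", "operate",
                    "operate", "operate", "reject", "reject", "reject"]
        print(agglomerative_bins(feature, decision))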

  • 486.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wilson, Magnus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wnuk, Krzysztof
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Encouraging Business Flexibility by Improved Context Descriptions2017Inngår i: Proceedings of the Seventh International Symposium on Business Modeling and Software Design / [ed] Boris Shishkov, SciTePress, 2017, Vol. 1, s. 225-228Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Business-driven software architectures are emerging and gaining importance for many industries. As software-intensive solutions continue to be more complex and operate in rapidly changing environments, there is pressure for increased business flexibility realized by more efficient software architecture mechanisms to keep up with the necessary speed of change. We investigate how improved context descriptions could be implemented in software components and support important software development practices like business modeling and requirements engineering. This paper proposes context descriptions as an architectural support for improving the connection between business flexibility and software components. We provide initial results regarding software architectural mechanisms which can support context descriptions, as well as the context descriptions’ support for business-driven software architecture and the business flexibility demanded by business ecosystems.

  • 487.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wilson, Magnus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wnuk, Krzysztof
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Svahnberg, Mikael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Supporting Continuous Changes to Business Intents2017Inngår i: International journal of software engineering and knowledge engineering, ISSN 0218-1940, Vol. 27, nr 8, s. 1167-1198Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Context: Software supporting an enterprise’s business, also known as a business support system, needs to support the correlation of activities between actors as well as influence the activities based on knowledge about the value networks in which the enterprise acts. This requires the use of policies and rules to guide or enforce the execution of strategies or tactics within an enterprise as well as in collaborations between enterprises. With the help of policies and rules, an enterprise is able to capture an actor’s intent in its business support system, and act according to this intent on behalf of the actor. Since the value networks an enterprise is part of will change over time, the business intents’ life cycle states might change. Achieving the changes in an effective and efficient way requires knowledge about the affected intents and the correlations between intents.

    Objective: The aim of the study is to identify how a business support system can support continuous changes to business intents. The first step is to find a theoretical model which serves as a foundation for intent-driven systems.

    Method: We conducted a case study using a focus group approach with employees from Ericsson. This case study was influenced by the spiral case study process.

    Results: The study resulted in a model supporting the continuous definition and execution of an enterprise. The model is divided into three layers: Define, Execute, and a common governance view layer. This makes it possible to support continuous definition and execution of business intents and to identify the actors needed to support the business intents’ life cycles. This model is supported by a meta-model for capturing information into viewpoints.

    Conclusion: The research question is addressed by suggesting a solution supporting continuous definition and execution of an enterprise as a model of value architecture components and business functions. The results will affect how Ericsson will build the business studio for their next generation business support systems.

  • 488.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wälitalo, Lisa
    Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för strategisk hållbar utveckling.
    Knowledge creation through a teaching and learning spiral2016Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Context: We have experienced that the use of a domain-specific language sometimes makes it difficult to present domain knowledge to a group or an individual that has limited or different knowledge about the specific domain, and where the presenter and the audience do not have sufficient insight into each other's contexts. In order to create an environment where knowledge transfer can exist, it is vital to understand how the roles shift during the interaction between the participants. In an educational environment, Teaching and Learning Activities (TLA) could, in ideal situations, be invented during the design of the curriculum. This might not be the case when interacting with practitioners or students from diverse fields. This situation requires a method to find TLAs for the specific situation. For the domain knowledge to be useful for learners it has to be connected to the context/domain where the learners are active. In this paper we combine a spiral learning process with constructive alignment, which resulted in a teaching and learning spiral process. The outcome of the teaching and learning spiral process is to provide the knowledge needed to use the introduced domain knowledge in a context/domain where the learners are active.

    Objective: The aim of this work is to present guidelines that will contribute to a more effective knowledge creation process in heterogeneous groups, both in an educational environment and in interaction with different groups of practitioners in society.

    Method: We conducted a case study using observations and surveys.

    Results: The results from our case study support a positive effect on the learning outcomes when adopting this methodology. The learning outcome is to gain a deeper understanding of the introduced domain knowledge and to be able to discuss how the new domain knowledge can be integrated into the learners' own context.

    Conclusions: We have formulated guidelines for how to use the teaching and learning spiral process in an effective and efficient way.

  • 489. Solinski, Adam
    et al.
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Prioritizing agile benefits and limitations in relation to practice usage2016Inngår i: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 24, nr 2, s. 447-482Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    In recent years, there has been a significant shift from rigid development (RD) toward agile. However, it has also been observed that agile methodologies are hardly ever followed in their pure form. Hybrid processes, as combinations of RD and agile practices, emerge. In addition, agile adoption has been reported to result in both benefits and limitations. This exploratory study (a) identifies development models based on RD and agile practice usage by practitioners; (b) identifies agile practice adoption scenarios based on eliciting practice usage over time; (c) prioritizes agile benefits and limitations in relation to (a) and (b). Practitioners provided answers through a questionnaire. The development models are determined using hierarchical cluster analysis. The use of practices over time is captured through an interactive board with practices and time indication sliders. This study uses the extended hierarchical voting analysis framework to investigate benefit and limitation prioritization. Four types of development models and six adoption scenarios have been identified. Overall, 45 practitioners participated in the prioritization study. A common benefit among all models and adoption patterns is knowledge and learning, while high requirements on professional skills were perceived as the main limitation. Furthermore, significant variances in terms of benefits and limitations have been observed between models and adoption patterns. The most significant internal benefit categories from adopting agile are knowledge and learning, employee satisfaction, social skill development, and feedback and confidence. Professional skill-specific demands, scalability, and lack of suitability for specific product domains are the main limitations of agile practice usage. Having a balanced agile process makes it possible to achieve a high number of benefits. With respect to adoption, a big bang transition from RD to agile leads to poor quality in comparison with the alternatives.
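
    The development models in the study are determined with hierarchical cluster analysis over practitioners' practice-usage answers. A generic sketch of that kind of analysis with SciPy is shown below; the binary practice-usage matrix and the choice of Ward linkage with three clusters are illustrative assumptions, not the study's data or settings.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # Hypothetical practice-usage answers: rows are respondents, columns are
        # practices (1 = used, 0 = not used), e.g. daily stand-up, sprints, pair
        # programming, detailed upfront design, formal change control.
        usage = np.array([
            [1, 1, 1, 0, 0],
            [1, 1, 0, 0, 0],
            [1, 1, 1, 0, 1],
            [0, 0, 0, 1, 1],
            [0, 1, 0, 1, 1],
            [1, 1, 1, 1, 1],
        ])

        # Agglomerative clustering with Ward linkage, then cut into three clusters,
        # each cluster corresponding to one candidate "development model".
        links = linkage(usage, method="ward")
        models = fcluster(links, t=3, criterion="maxclust")
        print(models)  # cluster id per respondent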

  • 490.
    Somaraju, Dilip
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Prediction of Time, Cost and Effort needed for software organizations to transit from ISO 9001:2008 to ISO 9001:2015.: A Survey2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context. Several quality standards have been developed over the years in order to define quality metrics for an organization’s products and even processes. One of the best known among them is the ISO 9000 family of standards, which originated several years ago. Since its beginning, the ISO standards have seen several upgrades. Currently ISO 9001:2008 is in use and is being upgraded to ISO 9001:2015. Companies have to migrate to the new scheme within three years of the prescribed time in order to retain certification to the ISO 9001 standards. The present thesis is targeted at finding the expected changes and the work improvements in the context of software engineering.

    Objectives. The main aim of the study is to find the expected changes and work improvements needed to migrate to the new version. This is done by fulfilling the following objectives: analyze the expected changes and the motivations for the changes in the new ISO 9001 version; understand the required work and improvements needed for a software organization to successfully upgrade its certification to the new ISO 9001:2015 version; and predict the estimated cost/time/effort that could be incurred for an organization to get certified to the forthcoming ISO version.

    Methods. In order to meet the objectives, a literature review was done and the changes incorporated in the new scheme were identified. A survey was conducted in order to predict the cost, time and effort impact of the new changes when moving from ISO 9001:2008 to ISO 9001:2015. The survey was sent only to software organizations, as the context of this study is restricted to quality in software engineering. The collected data was analyzed using bivariate analysis and the Friedman test in SPSS.
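
    The Friedman test used in the analysis is also available outside SPSS; the sketch below runs it with SciPy on made-up ordinal ratings (for example, expected effort for three changes rated by the same respondents), purely to illustrate the test rather than to reproduce the thesis data.

        from scipy.stats import friedmanchisquare

        # Hypothetical ordinal ratings (1 = low ... 5 = high) of expected effort for
        # three ISO 9001:2015 changes, given by the same eight respondents.
        change_a = [3, 4, 2, 5, 3, 4, 3, 4]
        change_b = [2, 3, 2, 4, 2, 3, 2, 3]
        change_c = [4, 5, 3, 5, 4, 4, 4, 5]

        statistic, p_value = friedmanchisquare(change_a, change_b, change_c)
        print(f"Friedman chi-square = {statistic:.2f}, p = {p_value:.4f}")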

    Results. From the literature review, the changes brought about in the new scheme were identified. These changes were used to design the survey questionnaire. The survey questionnaire was designed to investigate the organizations’ expectations on the time taken, cost incurred and effort needed to implement these changes. A total of 63 responses were recorded from the survey.

    Conclusions. The analysis identified several key changes in the new scheme compared to the old one. From the survey responses, the cost needed for implementing the changes is expected to be moderate, the time needed is predicted to be less than one year, and the effort needed for implementing the changes was estimated to be high. In addition, the document presents clause-by-clause estimates of the expected time, cost and effort, and the reasons for these assumptions.

  • 491.
    Spandel, Daniel
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Kjellgren, Johannes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Choosing between Git and Subversion: How does the choice affect software developers?2014Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [en]

    Today a lot of software projects use version control systems for maintaining their source code. There are many version control systems, and the choice of which one to use is far from simple. Today the two biggest version control systems are Git and Subversion. In this paper we have identified the main differences between the two, and investigated how the choice between them affects software developers. Although software developers are in many respects unaffected by the choice, we did find some interesting findings. When using Git, our empirical study shows that software developers seem to check in their code to the main repository more frequently than they do when using Subversion. We also found indications that software developers tend to use Subversion with a graphical interface, whereas the preferred interface for working with Git seems to be the command line. We were also surprised by how insignificant the learning aspect of the systems seems to be for the developers. Our goal with this paper is to provide a foundation to stand upon when choosing which version control system to use for a software project.

  • 492.
    Stade, Melanie
    et al.
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Fotrousi, Farnaz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Seyff, Norbert
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Albrecht, Oliver
    SEnerCon GmbH, DEU.
    Feedback Gathering from an Industrial Point of View2017Inngår i: Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017, Institute of Electrical and Electronics Engineers Inc. , 2017, s. 71-79Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Feedback communication channels allow end-users to express their needs, which can be considered in software development and evolution. Although feedback gathering and analysis have been identified as an important topic and several researchers have started to investigate it, information is scarce on how software companies currently elicit end-user feedback. In this study, we explore the experiences of software companies with respect to feedback gathering. The results of a case study and an online survey indicate two sides of the same coin: On the one hand, most software companies are aware of the relevance of end-user feedback for software evolution and provide feedback channels which allow end-users to communicate their needs and problems. On the other hand, the quantity and quality of the feedback received varies. We conclude that software companies still do not fully exploit the potential of end-user feedback for software development and evolution.

  • 493.
    Stade, Melanie
    et al.
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Oriol, Marc
    Universitat Politecnica de Catalunya, ESP.
    Cabrera, Oscar
    Universitat Politecnica de Catalunya, ESP.
    Fotrousi, Farnaz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Schaniel, Ronnie
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Seyff, Norberg
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Schmidt, Oleg
    SEnerCon GmbH, DEU.
    Providing a user forum is not enough: First experiences of a software company with CrowdRE2017Inngår i: Proceedings - 2017 IEEE 25th International Requirements Engineering Conference Workshops, REW 2017, Institute of Electrical and Electronics Engineers Inc. , 2017, s. 164-169Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Crowd-based requirements engineering (CrowdRE) promises to derive requirements by gathering and analyzing information from the crowd. Setting up CrowdRE in practice seems challenging, although first solutions to support CrowdRE exist. In this paper, we report on a German software company's experience with crowd involvement, using feedback communication channels and a monitoring solution for user-event data. In our case study, we identified several problem areas that a software company is confronted with when setting up an environment for gathering requirements from the crowd. We conclude that a CrowdRE process cannot be implemented ad hoc and that future work is needed to create and analyze a continuous feedback and monitoring data stream.

  • 494.
    Starefors, Henrik
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Persson, Rasmus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    MLID: A multilabel extension of the ID3 algorithm2016Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    Machine learning is a subfield of artificial intelligence that revolves around constructing algorithms that can learn from, and make predictions on, data. Instead of following strict and static instructions, the system operates by adapting to and learning from input data in order to make predictions and decisions. This work focuses on a subcategory of machine learning called multi-label classification, in which items introduced to the system are categorized by an analytical model, learned through supervised learning, and each instance of the dataset can belong to multiple labels, or classes. This paper presents the task of implementing a multi-label classifier based on the ID3 algorithm, which we call MLID (Multi-label Iterative Dichotomiser). The solution is presented both in a sequentially executed version and in a parallelized one. We also present a comparison based on accuracy and execution time, performed against algorithms of a similar nature, in order to evaluate the viability of using ID3 as a base to further expand and build upon with regard to multi-label classification. In order to evaluate the performance of the MLID algorithm, we have measured the execution time and accuracy, and summarized precision and recall into what is called the F-measure, which is the harmonic mean of the precision and sensitivity of the algorithm. These results are then compared to already established algorithms on a range of datasets of varying sizes, in order to assess the viability of the MLID algorithm. The results produced when comparing MLID against other multi-label algorithms such as Binary Relevance, Classifier Chains and Random Trees show that MLID can compete with other classifiers in terms of accuracy and F-measure, but in terms of training the algorithm, the time required is inferior. Through these results, we can conclude that MLID is a viable option to use as a multi-label classifier. Although some constraints inherited from the original ID3 algorithm impede the full utility of the algorithm, we are certain that following the same path of development and improvement as ID3 experienced would allow MLID to develop towards a suitable choice of algorithm for a diverse range of multi-label classification problems.
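
    The F-measure summarized here is the harmonic mean of precision and recall; for multi-label output it is often computed per example and then averaged. The sketch below illustrates that computation on invented label sets; it is not the MLID evaluation code.

        def example_based_f1(true_sets, predicted_sets):
            """Example-based precision, recall and F-measure for multi-label output."""
            precisions, recalls, f1s = [], [], []
            for true, pred in zip(true_sets, predicted_sets):
                overlap = len(true & pred)
                p = overlap / len(pred) if pred else 0.0
                r = overlap / len(true) if true else 0.0
                f = 2 * p * r / (p + r) if (p + r) else 0.0
                precisions.append(p)
                recalls.append(r)
                f1s.append(f)
            n = len(true_sets)
            return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

        # Hypothetical label sets for three instances.
        truth = [{"sports", "news"}, {"music"}, {"news", "politics", "economy"}]
        preds = [{"sports"}, {"music", "movies"}, {"news", "politics"}]
        print(example_based_f1(truth, preds))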

  • 495.
    Strandberg, Jane
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Lyckne, Mattias
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Webbsäkerhet och vanliga brister: kunskapsläget bland utvecklare2014Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [sv]

    This bachelor thesis looks at developers' knowledge about web security, both regarding their own view of their knowledge and their actual knowledge about vulnerabilities and how to mitigate them. Web developers' knowledge of web security is becoming more and more important as more applications and services move to the web and more and more devices become connected to the internet. We investigate this by conducting a survey among developers who are currently studying or working in the field, to get a grip on the state of knowledge regarding the most common security concepts. What we saw was that the results vary between the different concepts, and that many lack much of the web security knowledge that is becoming increasingly important to have.

  • 496.
    Sulaman, Sardar Muhammad
    et al.
    Lund University, SWE.
    Beer, Armin
    Beer Test Consulting, AUT.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Höst, Martin
    Lund University, SWE.
    Comparison of the FMEA and STPA safety analysis methods: a case study2019Inngår i: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 27, nr 1, s. 349-387Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    As our society becomes more and more dependent on IT systems, failures of these systems can harm more and more people and organizations. Diligently performing risk and hazard analysis helps to minimize the potential harm of IT system failures on society and increases the probability of their undisturbed operation. Risk and hazard analysis is an important activity for the development and operation of critical software-intensive systems, but the increased complexity and size put additional requirements on the effectiveness of risk and hazard analysis methods. This paper presents a qualitative comparison of two hazard analysis methods, failure mode and effect analysis (FMEA) and system theoretic process analysis (STPA), using case study research methodology. Both methods have been applied on the same forward collision avoidance system to compare the effectiveness of the methods and to investigate the main differences between them. Furthermore, this study also evaluates the analysis process of both methods by using qualitative criteria derived from the technology acceptance model (TAM). The results of the FMEA analysis were compared to the results of the STPA analysis, which were presented in a previous study. Both analyses were conducted on the same forward collision avoidance system. The comparison shows that FMEA and STPA deliver similar analysis results.

  • 497.
    Sun, Tao
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Product Context Analysis with Twitter Data2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context. For the product manager, product context analysis, which aims to align products with market needs, is very important. By understanding the market needs, the product manager learns the product context information about the environment in which the products are conceived and the business in which the products take place. Product context analysis using this information helps the product manager find the accurate position of his/her products and supports decision-making about the products. The product context information can generally be found in user feedback. The traditional techniques for acquiring user feedback can be replaced by collecting existing online user feedback at a lower cost. Researchers have therefore studied online user feedback, and the results showed that such feedback contains product context information. Therefore, in this study, I tried to elicit product context information from user feedback posted on Twitter.

    Objectives. The objectives of this study are: 1) to investigate what kinds of apps can be used to collect more related Tweets, and 2) to investigate what kinds of product context information can be elicited from the collected Tweets.

    Methods. To achieve the first objective, I designed unified criteria for selecting apps and collecting app-related Tweets, and then conducted a statistical analysis to find out which factor(s) affect the Tweet collection. To achieve the second objective, I conducted a directed content analysis of the collected Tweets with an indicator for identifying product context information, and then made a descriptive statistical analysis of the elicited product context information.

    Results. I found that the top-ranked apps, or apps in a few themes like “Health and Fitness” and “Games”, have more and fresher app-related Tweets. From my collected Tweets, I could elicit at least 15 types of product context information, including “user experience”, “use case”, “partner”, “competitor”, “platforms” and so on.

    Conclusions. This is an exploratory study of eliciting product context information from Tweets. It presents the method of collecting app-related Tweets and eliciting product context information from them. It shows what kinds of apps are suitable for this purpose and what types of product context information can be elicited from the Tweets. This study makes us aware that Tweets can be used for product context analysis, and shows the appropriate conditions for using Tweets in product context analysis.

  • 498.
    Sundelin, Anders
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gonzalez-Huerta, Javier
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wnuk, Krzysztof
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Test-Driving FinTech Product Development: An Experience Report2018Inngår i: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Ciolkowski M.,Hebig R.,Kuhrmann M.,Pfahl D.,Tell P.,Amasaki S.,Kupper S.,Schneider K.,Klunder J., Springer, 2018, Vol. 112171, s. 219-226Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In this paper, we present experiences from eight years of developing a financial transaction engine, using what can be described as an integration-test-centric software development process. We discuss the product and the relation between three different categories of its software, and how the relative weight of these artifacts has varied over the years. In addition to the presentation, some challenges and future research directions are discussed.

  • 499.
    Svahnberg, Mikael
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A model for assessing and re-assessing the value of software reuse2017Inngår i: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 29, nr 4Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Background: Software reuse is often seen as cost avoidance rather than gained value. This results in a rather one-sided debate where issues such as resource control, release schedule, quality, or reuse in more than one release are neglected. Aims: We propose a reuse value assessment framework, intended to provide a more nuanced view of the value and costs associated with different reuse candidates. Method: This framework is constructed based on findings from an interview study at a large software development company. Results: The framework considers the functionality, compliance to standards, provided quality, and provided support of a reuse candidate, thus enabling an informed comparison between different reuse candidates. Furthermore, the framework provides means for tracking the value of the reused asset throughout subsequent releases. Conclusions: The reuse value assessment framework is a tool to assist in the selection between different reuse candidates. The framework also provides a means to assess the current value of a reusable asset in a product, which can be used to indicate where maintenance efforts would increase the utilized potential of the reusable asset.
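
    In the spirit of the framework's comparison across functionality, standards compliance, provided quality and provided support, the sketch below scores two hypothetical reuse candidates with a simple weighted sum. The weights, scores and additive model are illustrative assumptions and not the published assessment framework.

        # Illustrative weights for the four value dimensions mentioned in the abstract.
        weights = {"functionality": 0.4, "standards": 0.2, "quality": 0.25, "support": 0.15}

        # Hypothetical reuse candidates scored 0..5 on each dimension.
        candidates = {
            "in-house component": {"functionality": 4, "standards": 3, "quality": 4, "support": 5},
            "open source library": {"functionality": 5, "standards": 4, "quality": 3, "support": 2},
        }

        def reuse_value(scores):
            """Weighted sum of dimension scores; re-run per release to track the value."""
            return sum(weights[dim] * score for dim, score in scores.items())

        for name, scores in sorted(candidates.items(), key=lambda kv: -reuse_value(kv[1])):
            print(f"{name}: {reuse_value(scores):.2f}")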

  • 500.
    Svedklint, Mattias
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Bellstrand, Magnus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Prestanda och webbramverk2014Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [sv]

    In this study, ten common frameworks in the web industry were examined: both the most widely used frameworks and some newer contenders that have grown considerably in recent years. To scale a website up to many users, it is important that the structure behind the site performs well, and it is therefore important to choose the right framework. So how should a web developer choose a framework in order to achieve good performance? It is well known that users leave pages when response times increase. Performance degrades quickly when dynamic content is handled, which leads to increased hardware costs to cope with the performance problems. To address this, this study contributes guidelines for choosing the right framework. Performance tests were carried out on ten selected frameworks, after which the fastest frameworks were listed, resulting in a ranking showing which framework performs best. An observation of the installation process was also carried out to identify problems that can arise when each framework is installed. It was also noted how well each framework's manual helped to guide the installation and to solve problems that arose during the installation and configuration of the frameworks.
