501 - 550 of 664
  • 501.
    Said Tahirshah, Farid
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Comparison between Progressive Web App and Regular Web App (2019). Independent thesis, Basic level (degree of Bachelor), 10 poäng / 15 hp. Student thesis.
    Abstract [en]

    In 2015 the term Progressive Web Application was coined to describe applications that take advantage of all Progressive App features. Some of the essential features are offline support, an app-like interface, and a secure connection. Since then, case studies of PWA implementations have shown optimistic promise for improving web page performance, time spent on site, user engagement, etc. The goal of this report is to analyze some of the effects of PWA. This work investigates the browser compatibility of PWA features, and compares and analyzes the performance and memory-consumption effects of PWA features against a Regular Web App. The results showed that many PWA features are still not supported by some major browsers. The performance benchmark showed that the HTTPS connection required for PWA slows down all of the PWA's performance metrics on the first visit. On a repeat visit, some PWA metrics, such as speed index, outperform the Regular Web App. Memory consumption of the PWA was more than twice that of the RWA. The conclusion is that even if some features are not directly supported by browsers, they may still have workaround solutions. A PWA is slower than a regular web app if HTTPS on your web server is not optimized. Different browsers have different memory limitations for PWA caches. You should implement HTTPS and PWA features only if you have HTTP/2 support on your web server; otherwise, performance can decrease.

    Full text (pdf)
    Comparison between Progressive Web App and Regular Web App
  • 502.
    Salleh, Norsaremah
    et al.
    International Islamic University Malaysia, MYS.
    Mendes, Fabiana
    Oulun Yliopisto, FIN.
    Mendes, Emilia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A Systematic Mapping Study of Value-Based Software Engineering (2019). In: Proceedings - 45th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2019, Institute of Electrical and Electronics Engineers Inc., 2019, pp. 404-411. Conference paper (Refereed).
    Abstract [en]

    Integrating value-oriented perspectives into the principles and practices of software engineering is critical to ensure that software development and management activities address all key stakeholders' views and also balance short- and long-term goals. This is put forward in the discipline of Value-Based Software Engineering (VBSE). In this paper, a mapping study of VBSE is detailed. We classify evidence on VBSE principles and practices, research methods, and research types. This mapping study includes 134 studies located from online searches and backward snowballing of references. Our results show that VB Requirements Engineering (22%) and VB Planning and Control (19%) were the two principles and practices most investigated in the VBSE literature, whereas VB Risk Management, VB People Management, and Value Creation (3% each) were the three least researched. In terms of research method, the most commonly employed method is case-study research. In terms of research types, most of the studies (28%) proposed solution technique(s) without empirical validation. © 2019 IEEE.

  • 503.
    Sandberg, Emil
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Creative Coding on the Web in p5.js: A Library Where JavaScript Meets Processing (2019). Independent thesis, Basic level (degree of Bachelor), 10 poäng / 15 hp. Student thesis.
    Abstract [en]

    Creative coding is the practice of writing code primarily for an expressive purpose rather than a functional one. It is mostly used in creative arts contexts. One of the most popular tools in creative coding is Processing. Processing is a desktop application and in recent years a web-based alternative named p5.js has been developed.

    This thesis investigates the p5.js JavaScript library. It looks at what can be accomplished with it and in which cases it might be used. The main focus is on the pros and cons of using p5.js for web graphics. Another point of focus is on how the web can be used as a creative platform with tools like p5.js. The goals are to provide an overview of p5.js and an evaluation of the p5.js library as a tool for creating interactive graphics and animations on the web.

    The research focuses on comparing p5.js with plain JavaScript from usability and performance perspectives and making general comparisons with other web-based frameworks for creative coding. The methods are a survey and interviews with members of creative coding communities, as well as performing coding experiments in p5.js and plain JavaScript and comparing the results and the process.

    The results from the coding experiments show that compared to plain JavaScript p5.js is easier to get started with, it is more intuitive, and code created in p5.js is easier to read. On the other hand, p5.js performs worse, especially when continuously drawing large amounts of elements to the screen. This is further supported by the survey and the interviews, which show that p5.js is liked for its usability, but that its performance issues and lack of advanced features mean that it is usually not considered for professional projects. The primary use case for p5.js is creating quick, visual prototypes. At the same time, the interviews show that p5.js has been used in a variety of contexts, both creative and practical.

    p5.js is a good library for getting started with coding creatively in the browser and is an excellent choice for experimenting and creating prototypes quickly. Should project requirements be much more advanced than that, there might be other options that will work better.

    Full text (pdf)
    BTH2019Sandberg
  • 504.
    Santos, Rodrigo
    et al.
    Fed Univ State Rio de Janeiro, BRA.
    Teixeira, Eldanae
    Univ Fed Rio de Janeiro, BRA.
    Mendes, Emilia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    McGregor, John
    Clemson Univ, USA.
    2nd Workshop on Social, Human, and Economic Aspects of Software (WASHES) Special Edition for Software Reuse (2017). In: Mastering Scale and Complexity in Software Reuse (ICSR 2017) / [ed] Botterweck, G.; Werner, C., Springer International Publishing AG, 2017, pp. 223-224. Conference paper (Refereed).
    Abstract [en]

    The Special Edition for Software Reuse of the Workshop on Social, Human, and Economic Aspects of Software (WASHES) aims at bringing together researchers and practitioners who are interested in social, human, and economic aspects of software. WASHES is a forum to discuss models, methods, techniques, and tools to achieve software quality, improve reuse and deal with the existing issues in this context. This special edition's main topic is "Challenges of Reuse and the Social, Human, and Economic Aspects of Software". We believe it is important to investigate software reuse beyond the technical perspective and understand how the non-technical barriers of reuse affect practices, processes and tools in practice.

  • 505.
    Santoso, Ario
    et al.
    University of Innsbruck, AUT.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Specification-driven predictive business process monitoring (2019). In: Software and Systems Modeling, ISSN 1619-1366, E-ISSN 1619-1374. Journal article (Refereed).
    Abstract [en]

    Predictive analysis in business process monitoring aims at forecasting the future information of a running business process. The prediction is typically made based on the model extracted from historical process execution logs (event logs). In practice, different business domains might require different kinds of predictions. Hence, it is important to have a means for properly specifying the desired prediction tasks, and a mechanism to deal with these various prediction tasks. Although there have been many studies in this area, they mostly focus on a specific prediction task. This work introduces a language for specifying the desired prediction tasks, and this language allows us to express various kinds of prediction tasks. This work also presents a mechanism for automatically creating the corresponding prediction model based on the given specification. Differently from previous studies, instead of focusing on a particular prediction task, we present an approach to deal with various prediction tasks based on the given specification of the desired prediction tasks. We also provide an implementation of the approach which is used to conduct experiments using real-life event logs. © 2019, The Author(s).

    Full text (pdf)
    Specification-driven predictive business process monitoring
  • 506.
    Sathi, Veer Reddy
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Ramanujapura, Jai Simha
    A Quality Criteria Based Evaluation of Topic Models (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Student thesis.
    Abstract [en]

    Context. Software testing is the process where a particular software product or system is executed in order to find bugs or issues which may otherwise degrade its performance. Software testing is usually done based on pre-defined test cases. A test case can be defined as a set of terms or conditions used by software testers to determine whether a particular system under test operates as it is supposed to. However, in numerous situations the test cases can be so many that executing each and every one is practically impossible, as there may be many constraints. This forces the testers to prioritize the functions that are to be tested. This is where the ability of topic models can be exploited. Topic models are unsupervised machine learning algorithms that can explore large corpora of data and classify them by identifying the hidden thematic structure in those corpora. Using topic models for test case prioritization can save a lot of time and resources.

    Objectives. In our study, we provide an overview of the amount of research that has been done in relation to topic models. We want to uncover various quality criteria, evaluation methods, and metrics that can be used to evaluate the topic models. Furthermore, we would also like to compare the performance of two topic models that are optimized for different quality criteria, on a particular interpretability task, and thereby determine the topic model that produces the best results for that task.

    Methods. A systematic mapping study was performed to gain an overview of the previous research on the evaluation of topic models. The mapping study focused on identifying quality criteria, evaluation methods, and metrics that have been used to evaluate topic models. The results of the mapping study were then used to identify the most used quality criteria. The evaluation methods related to those criteria were then used to generate two optimized topic models. An experiment was conducted in which the topics generated from those two topic models were provided to a group of 20 subjects. The task was designed to evaluate the interpretability of the generated topics. The performance of the two topic models was then compared using Precision, Recall, and F-measure.

    Results. Based on the results obtained from the mapping study, Latent Dirichlet Allocation (LDA) was found to be the most widely used topic model. Two LDA topic models were created, optimizing one for the quality criterion Generalizability (TG) and one for Interpretability (TI), using the Perplexity and Pointwise Mutual Information (PMI) measures, respectively. For the selected metrics, TI showed better performance than TG in Precision and F-measure. However, the performance of TI and TG was comparable in the case of Recall. The total run time of TI was also found to be significantly higher than that of TG: 46 hours and 35 minutes for TI, versus 3 hours and 30 minutes for TG.

    Conclusions. Looking at the F-measure, it can be concluded that the interpretability topic model (TI) performs better than the generalizability topic model (TG). However, while TI performed better in precision, recall was comparable. Furthermore, the computational cost to create TI is significantly higher than for TG. Hence, we conclude that the choice of topic model optimization should be based on the aim of the task the model is used for. If the task requires high interpretability of the model and precision is important, such as prioritization of test cases based on content, then TI would be the right choice, provided time is not a limiting factor. However, if the task aims at generating topics that provide a basic understanding of the concepts (i.e., interpretability is not a high priority), then TG is the more suitable choice, which also makes it more suitable for time-critical tasks.
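The Precision, Recall, and F-measure comparison used in the experiment above can be sketched in a few lines of Python. The data below is hypothetical (topic labels of our own invention), not the thesis's actual judgements:

```python
def precision_recall_f1(relevant, retrieved):
    """Compute precision, recall, and F-measure for a set of
    retrieved items against a set of known-relevant items."""
    tp = len(relevant & retrieved)  # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: topics the subjects judged interpretable
relevant = {"t1", "t2", "t3", "t4"}
retrieved_ti = {"t1", "t2", "t3", "t5"}  # topics produced by one model
p, r, f = precision_recall_f1(relevant, retrieved_ti)
print(f"precision={p:.2f} recall={r:.2f} F={f:.2f}")  # all 0.75 here
```

Comparing two models then reduces to computing these three numbers for each model's topics against the same human judgements.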

    Full text (pdf)
    fulltext
  • 507.
    Sauerwein, Clemens
    et al.
    University of Innsbruck, AUT.
    Pekaric, Irdin
    University of Innsbruck, AUT.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Breu, Ruth
    University of Innsbruck, AUT.
    An Analysis and Classification of Public Information Security Data Sources used in Research and Practice (2019). In: Computers & Security (Print), ISSN 0167-4048, E-ISSN 1872-6208, Vol. 82, pp. 140-155. Journal article (Refereed).
    Abstract [en]

    In order to counteract today's sophisticated and increasing number of cyber threats, the timely acquisition of information regarding vulnerabilities, attacks, threats, countermeasures, and risks is crucial. Therefore, employees tasked with information security risk management processes rely on a variety of information security data sources, ranging from inter-organizational threat intelligence sharing platforms to public information security data sources, such as mailing lists or expert blogs. However, research and practice lack a comprehensive overview of these public information security data sources, their characteristics, and their dependencies. Moreover, comprehensive knowledge about these sources would be beneficial for systematically using and integrating them into information security processes. In this paper, a triangulation study is conducted to identify and analyze public information security data sources. Furthermore, a taxonomy is introduced to classify and compare these data sources based on the following six dimensions: (1) Type of Information, (2) Integrability, (3) Timeliness, (4) Originality, (5) Type of Source, and (6) Trustworthiness. In total, 68 public information security data sources were identified and classified. The investigations showed that research and practice rely on a large variety of heterogeneous information security data sources, which makes it more difficult to integrate and use them for information security and risk management processes.

  • 508.
    Schlick, Rupert
    et al.
    Austrian Institute of Technology, AUT.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Majzik, Istvan
    Budapest University of Technology and Economics, HUN.
    Nardone, Roberto
    Universita degli Studi di Napoli Federico II, ITA.
    Raschke, Alexander
    Universitat Ulm, DEU.
    Snook, Colin
    University of Southampton, GBR.
    Vittorini, Valeria
    Universita degli Studi di Napoli Federico II, ITA.
    A proposal of an example and experiments repository to foster industrial adoption of formal methods (2018). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Verlag, 2018, Vol. 11247, pp. 249-272. Conference paper (Refereed).
    Abstract [en]

    Formal methods (in a broad sense) have been around almost since the beginning of computer science. Nonetheless, there is a perception in the formal methods community that take-up by industry is low considering the potential benefits. We take a look at possible reasons and give candidate explanations for this effect. To address the issue, we propose a repository of industry-relevant example problems with an accompanying open data storage for experiment results in order to document, disseminate and compare exemplary solutions from formal model based methods. This would allow potential users from industry to better understand the available solutions and to more easily select and adopt a formal method that fits their needs. At the same time, it would foster the adoption of open data and good scientific practice in this research field. © Springer Nature Switzerland AG 2018.

  • 509.
    Seidi, Nahid
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Document-Based Databases In Platform SW Architecture For Safety Related Embedded System (2014). Independent thesis, Basic level (degree of Bachelor). Student thesis.
    Abstract [en]

    The project investigates document-based databases, their evaluation criteria, and use cases regarding requirements management, SW architecture, and test management, in order to set up an Embedded Systems Lifecycle Management (ESLM) tool. The current database used in the ESLM is a graph database called Neo4j, which meets the needs of the current system. The study of document databases led to the decision not to use a document database for the system. Instead, given the requirements, a combination of a graph database and a document database could be the practical solution in the future.

    Full text (pdf)
    FULLTEXT01
  • 510.
    Selander, Nizar
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik. Ericsson.
    Resource utilization comparison of Cassandra and Elasticsearch (2019). Independent thesis, Basic level (degree of Bachelor), 10 poäng / 15 hp. Student thesis.
    Abstract [en]

    Elasticsearch and Cassandra are two of the most widely used databases today, with Elasticsearch showing a more recent resurgence due to its unique full-text search feature, akin to that of a search engine, contrasting with the conventional query-language-based methods used to perform data searching and retrieval operations.

    The demand for more powerful and better-performing, yet more feature-rich and flexible, databases has ever been growing. This project attempts to study how the two databases perform under a specific workload of 2,000,000 fixed-size logs, in an environment where the two can be compared while keeping the results of the experiment meaningful for the production environment for which they are intended.

    A total of three benchmarks were carried out: an Elasticsearch deployment using the default configuration, and two Cassandra deployments, one with the default configuration and one with a modified configuration that reflects a configuration currently running in production for the task at hand.

    The benchmarks showed very interesting performance differences in terms of CPU, memory, and disk space usage. Elasticsearch showed the best performance overall, using significantly less memory and disk space, as well as less CPU to some degree.

    However, the benchmarks were done with a very specific set of configurations and a very specific data set and workload. Those differences should be considered when comparing the benchmark results.
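A minimal sketch of how such a workload benchmark can be instrumented, using only the Python standard library. The workload and all names here are hypothetical stand-ins; the thesis measured real Elasticsearch and Cassandra deployments with external CPU, memory, and disk monitoring:

```python
import time
import tracemalloc

def benchmark(label, workload, n_items):
    """Measure wall-clock time and peak Python heap usage for a workload.
    An in-process stand-in for the external resource monitoring used
    when benchmarking real database deployments."""
    tracemalloc.start()
    start = time.perf_counter()
    workload(n_items)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) bytes
    tracemalloc.stop()
    print(f"{label}: {elapsed:.3f}s, peak {peak / 1024:.0f} KiB")
    return elapsed, peak

def insert_logs(n):
    # Hypothetical fixed-size log records kept in memory
    return [f"log-{i:07d} level=INFO msg=fixed-size-payload" for i in range(n)]

benchmark("in-memory log store", insert_logs, 200_000)
```

Running the same harness against two different backends with an identical workload is what makes the resulting numbers comparable.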

    Full text (pdf)
    Resource utilization comparison of Cassandra and Elasticsearch
  • 511.
    Selvi, Mehmet
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Büyükcan, Güral
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Influential factors affecting the undesired fault correction outcomes in large-scaled companies (2014). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. The fault correction process is one of the two main activities in the software evolution model. As it is very important for software maintainability, the software industry, especially large-scale global companies, aims to have mature fault correction processes that detect faults and correct them in a continuous and efficient way. A considerable amount of effort is needed, and some measures should be taken, in order to be successful. This master thesis is mainly concerned with fault correction and finding possible solutions for a better process. Objectives. The main aim of this study is to investigate and identify influential factors affecting undesired fault correction outcomes. The study has three main stages: 1) identifying factors from company data that affect the target factors, 2) eliciting influential factors from interviews and a literature review, and 3) prioritizing the influential factors based on their significance. Based on the outcomes, giving recommendations to the company and the software industry is the other aim of this master thesis. Methods. This study mainly reflects empirical research on the software fault correction process and its undesired outcomes. Both quantitative and qualitative data analysis were performed. A case study was conducted with Ericsson AB, in which the archival data was analyzed using several methods, including machine learning and Apriori. Surveys and semi-structured interviews were also used as data collection instruments. In addition, a literature review was performed to collect influential factors for the fault correction process. Prioritization of the influential factors was made using hierarchical cumulative voting. Results. Through the case study, quantitative data analysis, interviews, and literature review, a total of 45 influential factors were identified.
    Using these factors, prioritization was performed with 26 practitioners (4 internal and 22 external) in order to find which factors are most a) significant and b) relevant to undesired fault correction outcomes. Based on the outcomes of the prioritization, a cause-effect diagram was drawn which includes all the important factors. Conclusions. This research showed that there are many factors influencing the fault correction process. The practitioners mostly complained that fault corrections are not analyzed in depth, do not result in new requirements, and are not used for process improvement. Limited resources (such as workforce, vacations, and sickness), unbalanced fault correction task assignment, and too many fault reports at the same time also cause problems. Moreover, the priorities of faults and customers affect the lead time of the fault correction process, as the critical faults are fixed first.

    Full text (pdf)
    FULLTEXT01
  • 512.
    Sentilles, Severine
    et al.
    Malardalen Univ, SWE.
    Papatheocharous, Efi
    Swedish Inst Comp Sci, SWE.
    Ciccozzi, Federico
    Malardalen Univ, SWE.
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A Property Model Ontology (2016). In: 2016 42nd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2016, pp. 165-172. Conference paper (Refereed).
    Abstract [en]

    Efficient development of high quality software is tightly coupled to the ability of quickly taking complex decisions based on trustworthy facts. In component-based software engineering, the decisions related to selecting the most suitable component among functionally-equivalent ones are of paramount importance. Despite sharing the same functionality, components differ in terms of their extra-functional properties. Therefore, to make informed selections, it is crucial to evaluate extra-functional properties in a systematic way. To date, many properties and evaluation methods that are not necessarily compatible with each other exist. The property model ontology presented in this paper represents the first step towards providing a systematic way to describe extra-functional properties and their evaluation methods, and thus making them comparable. This is beneficial from two perspectives. First, it aids researchers in identifying comparable property models as a guide for empirical evaluations. Second, practitioners are supported in choosing among alternative evaluation methods for the properties of their interest. The use of the ontology is illustrated by instantiating a subset of property models relevant in the automotive domain.

  • 513.
    Settenvini, Matteo
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Algorithmic Analysis of Name-Bounded Programs: From Java programs to Petri Nets via π-calculus (2014). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Name-bounded analysis is a type of static analysis that allows us to take a concurrent program, abstract away from it, and check for some interesting properties, such as deadlock-freedom, or watching the propagation of variables across different components or layers of the system. Objectives. In this study we investigate the difficulties of giving a representation of computer programs in a name-bounded variation of π-calculus. Methods. A preliminary literature review is conducted to assess the presence (or lack thereof) of other successful translations from real-world programming languages to π-calculus, as well as the presence of relevant prior art in the modelling of concurrent systems. Results. This thesis gives a novel translation from a relevant subset of the Java programming language to its corresponding name-bounded π-calculus equivalent. In particular, the strengths of our translation are the ability to dispose of names representing inactive objects when there are no circular references, and a transparent handling of polymorphism and dynamic method resolution. The resulting processes can then be further transformed into their Petri-net representation, enabling us to check for important properties, such as reachability and coverability of program states. Conclusions. We conclude that some important properties that are not, in general, easy to check for concurrent programs can in fact be feasibly determined by giving a more constrained model in π-calculus first, and as Petri nets afterwards.
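The reachability check on a Petri-net representation can be illustrated by a breadth-first search over markings, roughly as follows. This is a toy net with our own function names, not the thesis's translation or tooling:

```python
from collections import deque

def reachable(initial, transitions):
    """Enumerate all markings reachable from `initial`.
    A marking is a tuple of token counts, one per place; each
    transition is a (consume, produce) pair of per-place vectors."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        marking = frontier.popleft()
        for consume, produce in transitions:
            # A transition is enabled if every place holds enough tokens
            if all(m >= c for m, c in zip(marking, consume)):
                nxt = tuple(m - c + p
                            for m, c, p in zip(marking, consume, produce))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# Toy net with two places: t1 moves one token from place 0 to place 1
t1 = ((1, 0), (0, 1))
print(sorted(reachable((2, 0), [t1])))  # [(0, 2), (1, 1), (2, 0)]
```

For bounded nets this exhaustive enumeration terminates, which is what makes reachability and coverability questions decidable on the Petri-net model.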

    Full text (pdf)
    FULLTEXT01
  • 514.
    Seyff, Norbert
    et al.
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Stade, Melanie
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Fotrousi, Farnaz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Glinz, Martin
    University of Zurich, CHE.
    Guzman, Emitza
    University of Zurich, CHE.
    Kolpondinos-Huber, Martina
    University of Zurich, CHE.
    Arzapalo, Denisse Muñante
    Fondazione Bruno Kessler, ITA.
    Oriol, Marc
    Universitat Politècnica de Catalunya, ESP.
    Schaniel, Ronnie
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    End-user driven feedback prioritization (2017). In: CEUR Workshop Proceedings / [ed] Ameller, D.; Dieste, O.; Knauss, E.; Susi, A.; Dalpiaz, F.; Kifetew, F.M.; Tenbergen, B.; Palomares, C.; Seffah, A.; Forbrig, P.; Berry, D.M.; Daneva, M.; Knauss, A.; Siena, A.; Daun, M.; Herrmann, A.; Kirikova, M.; Groen, E.C.; Horkoff, J.; Maeder, P.; Massacci, F.; Ralyte, J., CEUR-WS, 2017, Vol. 1796. Conference paper (Refereed).
    Abstract [en]

    End-user feedback is becoming more important for the evolution of software systems. There exist various communication channels for end-users (app stores, social networks) which allow them to express their experiences and requirements regarding a software application. End-users communicate a large amount of feedback via these channels, which leads to open issues regarding the use of end-user feedback for software development, maintenance, and evolution. This includes investigating how to identify relevant feedback scattered across different feedback channels and how to determine the priority of the feedback issues communicated. In this research preview paper, we discuss ideas for end-user driven feedback prioritization. © Copyright 2017 for this paper by its authors.

  • 515.
    Shafiq, Hafiz Adnan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Arshad, Zaki
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Automated Debugging and Bug Fixing Solutions: A Systematic Literature Review and Classification (2013). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context: Bug fixing is the process of ensuring correct source code and is done by developer. Automated debugging and bug fixing solutions minimize human intervention and hence minimize the chance of producing new bugs in the corrected program. Scope and Objectives: In this study we performed a detailed systematic literature review. The scope of work is to identify all those solutions that correct software automatically or semi-automatically. Solutions for automatic correction of software do not need human intervention while semi-automatic solutions facilitate a developer in fixing a bug. We aim to gather all such solutions to fix bugs in design, i.e., code, UML design, algorithms and software architecture. Automated detection, isolation and localization of bug are not in our scope. Moreover, we are only concerned with software bugs and excluding hardware and networking domains. Methods: A detailed systematic literature review (SLR) has been performed. A number of bibliographic sources are searched, including Inspec, IEEE Xplore, ACM digital library, Scopus, Springer Link and Google Scholar. Inclusion/exclusion, study quality assessment, data extraction and synthesis have been performed in depth according to guidelines provided for performing SLR. Grounded theory is used to analyze literature data. To check agreement level between two researchers, Kappa analysis is used. Results: Through SLR we identified 46 techniques. These techniques are classified in automated/semi-automated debugging and bug fixing. Strengths and weaknesses of each of them are identified, along with which types of bugs each can fix and in which language they can be implement. In the end, classification is performed which generate a list of approaches, techniques, tools, frameworks, methods and systems. Along, this classification and categorization we separated bug fixing and debugging on the bases of search algorithms. 
Conclusion: The achieved results comprise all automated/semi-automated debugging and bug fixing solutions that are available in the literature. The strengths/benefits and weaknesses/limitations of these solutions are identified. We also recognize the types of bugs that can be fixed using these solutions, and discover the programming languages in which they can be implemented. In the end, a detailed classification is performed.
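    The Kappa analysis mentioned in the Methods can be sketched as Cohen's kappa over the two researchers' inclusion/exclusion decisions; the ratings below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters labeled independently.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude decisions by two researchers.
a = ["inc", "inc", "exc", "inc", "exc", "exc", "inc", "exc"]
b = ["inc", "exc", "exc", "inc", "exc", "exc", "inc", "inc"]
print(round(cohens_kappa(a, b), 3))  # -> 0.5
```

    Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance.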

    Fulltekst (pdf)
    FULLTEXT01
  • 516. Shah, Syed Muhammad Ali
    et al.
    Alvi, Usman Sattar
    Gencel, Cigdem
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Comparing a Hybrid Testing Process with Scripted and Exploratory Testing: An Experimental Study with Practitioners2014Konferansepaper (Fagfellevurdert)
    Abstract [en]

    This paper presents an experimental study comparing the testing quality of a Hybrid Testing (HT) process with the approaches commonly used in industry: Scripted Testing (ST) and Exploratory Testing (ET). The study was conducted in an international IT service company in Sweden with the involvement of six experienced testers. Two measures were used for comparison: 1) defect detection effectiveness (DDE) and 2) functionality coverage (FC). The results indicated that HT performed better in terms of DDE than ST and worse than ET. In terms of FC, HT performed better than ET, while no significant differences were observed between HT and ST. Furthermore, HT performed best for experienced testers, but worse with less experienced testers.

    Fulltekst (pdf)
    FULLTEXT01
  • 517.
    Shojaifar, Alireza
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Evaluation and Improvement of the RSSI-based Localization Algorithm: Received Signal Strength Indication (RSSI)2015Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context: Wireless Sensor Networks (WSN) are applied to collect information via distributed sensor nodes (anchors) that are usually in fixed positions. Localization of moving sensors, devices or people, i.e., estimating the location of a moving object, is one of the essential WSN services and a main requirement. To find the location of a moving object, some algorithms are based on RSSI (Received Signal Strength Indication). Since very accurate localization is not always feasible (due to cost, complexity and energy issues), the RSSI-based method is a practical solution. This method has two attractive features: it does not require extra hardware (cost and energy aspects) and, theoretically, RSSI is a function of distance.
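    The claim that RSSI is theoretically a function of distance is commonly formalized with the log-distance path-loss model. The thesis abstract does not state its exact model, so the calibration constants below are hypothetical.

```python
def distance_from_rssi(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model:
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d)
    =>  d = 10 ** ((RSSI(1 m) - RSSI(d)) / (10 * n))
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

# With these constants, a reading of -60 dBm maps to 10 m.
print(round(distance_from_rssi(-60.0), 2))  # -> 10.0
```

    In practice the exponent n and the 1 m reference power must be calibrated per environment, which is precisely why the thesis studies environmental conditions.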

    Objectives: In this thesis we firstly develop an RSSI-based localization algorithm (a server-side application) to find the position of a moving object (target node) in different situations. These situations are defined in different experiments so that we can observe and compare the results (finding accurate positioning). Secondly, since the RSSI characteristic is highly dependent on the environment in which an experiment is done (movement, obstacles, temperature, humidity …), the importance and contribution of “environmental condition” in the empirical papers is studied.

    Methods: The first method, a common literature review (LR), is carried out to find general information about localization algorithms in WSN with a focus on the RSSI-based method. This LR is based on papers and literature prepared by the collaborating company and the supervisor, as well as ad-hoc searches in the IEEE scientific database. Through this method, the relevant information, the theoretical algorithm (a mathematical function) and the different effective parameters of the RSSI-based algorithm are defined. The second method is experimentation, based on the development of the mentioned algorithm (since experiments are usually performed in development, evaluation and problem-solving research). Because we want to compare and evaluate the results of the experiments with respect to the effect of environmental conditions, a third method is applied: a systematic mapping study (SMS) that focuses on the contribution of the “environmental condition” effect in the empirical papers.

    Results: The results of 30 experiments and their analyses show a high correlation between the RSSI values and environmental conditions. The results also indicate that a direct signal path between the target node and the anchors can improve the localization accuracy. Finally, the experiments show that the target node’s antenna type has a clear effect on the RSSI values and, consequently, on the distance measurement error. Our findings in the mapping study reveal that although there are many studies about the accuracy requirement in the context of RSSI-based localization, there is a lack of research on other localization requirements such as performance, reliability and stability. Also, only a few studies have considered RSSI localization under real-world conditions.

    Conclusion: This thesis studies various localization methods and techniques in WSNs. The thesis then focuses on RSSI-based localization by implementing one algorithm and analyzing the experiments’ results. In our experiments, we mostly focus on environmental parameters that affect localization accuracy. Moreover, we indicate some areas of research in this context that need further study.

    Fulltekst (pdf)
    fulltext
  • 518.
    Shojaifar, Alireza
    et al.
    Fachhochschule Nordwestschweiz, CHE.
    Fricker, Samuel
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gwerder, Martin
    Fachhochschule Nordwestschweiz, CHE.
    Elicitation of SME requirements for cybersecurity solutions by studying adherence to recommendations2018Inngår i: CEUR Workshop Proceedings / [ed] Dalpiaz F.,Franch X.,Kirikova M.,Ralyte J.,Spoletini P.,Chisik Y.,Ferrari A.,Madhavji N.,Palomares C.,Sabetzadeh M.,van der Linden D.,Schmid K.,Charrada E.B.,Sawyer P.,Forbrig P.,Zamansky A., CEUR-WS , 2018, Vol. 2075Konferansepaper (Fagfellevurdert)
    Abstract [en]

    [Context and motivation] Small and medium-sized enterprises (SME) have become the weak spot of our economy for cyber attacks. These companies are large in number and often do not have the controls in place to prevent successful attacks, and are not prepared to systematically manage their cybersecurity capabilities. [Question/problem] One of the reasons why many SME do not adopt cybersecurity is that developers of cybersecurity solutions understand little of the SME context and the requirements for successful use of these solutions. [Principal ideas/results] We elicit requirements by studying how cybersecurity experts provide advice to SME. The experts' recommendations offer insights into what the important capabilities of the solution are and how these capabilities ought to be used for mitigating cybersecurity threats. The adoption of a recommendation hints at a correct match of the solution, hence successful consideration of requirements. Abandoned recommendations point to a misalignment that can be used as a source to inquire into missed requirements. Re-occurrence of adoption or abandonment decisions corroborates the presence of requirements. [Contributions] This poster describes the challenges of SME regarding cybersecurity and introduces our proposed approach to elicit requirements for cybersecurity solutions. The poster describes CYSEC, our tool used to capture cybersecurity advice and help scale cybersecurity requirements elicitation to a large number of participating SME. We conclude by outlining the planned research to develop and validate CYSEC. Copyright 2018 for this paper by its authors.

  • 519.
    Sillaber, Christian
    et al.
    University of Innsbruck, AUT.
    Waltl, Bernhard
    Technical University of Munich, DEU.
    Treiblmaier, Horst
    MODUL University Vienna, AUT.
    Gallersdörfer, Ulrich
    Technical University of Munich, DEU.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Laying the foundation for smart contract development: an integrated engineering process model2020Inngår i: Information Systems and E-Business Management, ISSN 1617-9846, E-ISSN 1617-9854Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Smart contracts are seen as the major building blocks for future autonomous blockchain- and Distributed Ledger Technology (DLT)-based applications. Engineering such contracts for trustless, append-only, and decentralized digital ledgers allows mutually distrustful parties to transform legal requirements into immutable and formalized rules. Previous experience shows this to be a challenging task due to demanding socio-technical ecosystems and the specificities of decentralized ledger technology. In this paper, we therefore develop an integrated process model for engineering DLT-based smart contracts that accounts for the specificities of DLT. This model was iteratively refined with the support of industry experts. The model explicitly accounts for the immutability of the trustless, append-only, and decentralized DLT ecosystem, and thereby overcomes certain limitations of traditional software engineering process models. More specifically, it consists of five successive and closely intertwined phases: conceptualization, implementation, approval, execution, and finalization. For each phase, the respective activities, roles, and artifacts are identified and discussed in detail. Applying such a model when engineering smart contracts will help software engineers and developers to better understand and streamline the engineering process of DLTs in general and blockchain in particular. Furthermore, this model serves as a generic framework which will support application development in all fields in which DLT can be applied. © 2020, The Author(s).

  • 520.
    Silva, Dennis
    et al.
    Universidade Federal do Piaui, BRA.
    Rabelo, Ricardo
    Universidade Federal do Piaui, BRA.
    Campanha, Matheus
    Universidade Federal do Piaui, BRA.
    Neto, Pedro Santos
    Universidade Federal do Piaui, BRA.
    Oliveira, Pedro Almir
    Instituto Federal do Maranhão, BRA.
    Britto, Ricardo
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A hybrid approach for test case prioritization and selection2016Inngår i: 2016 IEEE Congress on Evolutionary Computation, CEC 2016, IEEE, 2016, s. 4508-4515Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Software testing consists in the dynamic verification of the behavior of a program on a set of test cases. When a program is modified, it must be tested to verify that the changes did not introduce undesirable effects on its functionality. Rerunning all test cases can be impossible due to cost, time and resource constraints, so a subset of test cases must be created before test execution. This is a hard problem, and standard Software Engineering techniques may not be suitable. This work presents an approach for test case prioritization and selection based on relevant inputs obtained from a software development environment. The approach uses Software Quality Function Deployment (SQFD) to deploy the features' relevance among the system components, Mamdani fuzzy inference systems to infer the criticality of each class, and Ant Colony Optimization to select test cases. An evaluation of the approach is presented, using data from simulations with different numbers of tests.

  • 521.
    Silva, Dennis Savio
    et al.
    Federal University of Piauí, BRA.
    Rabelo, Ricardo De Andrade Lira
    Federal University of Piauí, BRA.
    Neto, Pedro Santos
    Federal University of Piauí, BRA.
    Britto, Ricardo
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Oliveira, Pedro Almir
    Federal Institute of Maranhao, BRA.
    A test case prioritization approach based on software component metrics2019Inngår i: IEEE International Conference on Systems Man and Cybernetics Conference Proceedings, Institute of Electrical and Electronics Engineers Inc. , 2019, s. 2939-2945Konferansepaper (Fagfellevurdert)
    Abstract [en]

    The most common way of performing regression testing is by executing all test cases associated with a software system. However, this approach is not scalable, since the time and cost to execute the test cases increase together with the system's size. A way to address this consists of prioritizing the existing test cases, aiming to maximize a test suite's fault detection rate. To address the limitations of existing approaches, in this paper we propose a new approach to maximize the rate of fault detection of test suites. Our proposal has three steps: i) infer code components' criticality values using a fuzzy inference system; ii) calculate test cases' criticality; iii) prioritize the test cases using ant colony optimization. The test cases are prioritized considering criticality, execution time and history of faults, and the resulting test suites are evaluated according to their fault detection rate. The evaluation was performed on eight programs, and the results show that the fault detection rate of the solutions was higher than in the non-ordered test suites and in those obtained using a greedy approach, reaching the optimal value when it was possible to verify. A sanity check was performed, comparing the obtained results to the results of a random search. The approach performed better, with statistically and practically significant differences, evidencing its applicability to the prioritization of test cases. © 2019 IEEE.
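    The three-step proposal can be illustrated with a much-simplified greedy stand-in for steps ii) and iii): score each test case from its criticality, fault history and execution time, then order by score. The paper's actual ant colony optimization and weighting are not reproduced here, and the suite data is invented.

```python
def prioritize(test_cases):
    # Higher criticality and fault history, and lower execution time,
    # push a test case earlier in the suite.
    return sorted(
        test_cases,
        key=lambda tc: (tc["criticality"] + tc["past_faults"]) / tc["exec_time"],
        reverse=True,
    )

suite = [
    {"name": "t1", "criticality": 0.9, "past_faults": 2, "exec_time": 5.0},
    {"name": "t2", "criticality": 0.4, "past_faults": 0, "exec_time": 1.0},
    {"name": "t3", "criticality": 0.7, "past_faults": 5, "exec_time": 10.0},
]
print([tc["name"] for tc in prioritize(suite)])  # -> ['t1', 't3', 't2']
```

    An ant colony would instead build many candidate orderings probabilistically and reinforce those with a high fault detection rate, escaping the local optima a single greedy pass can fall into.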

  • 522.
    Silva, Lakmal
    et al.
    Ericsson, SWE.
    Unterkalmsteiner, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wnuk, Krzysztof
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Monitoring and maintenance of telecommunication systems: Challenges and research perspectives2019Inngår i: ENGINEERING SOFTWARE SYSTEMS: RESEARCH AND PRAXIS / [ed] Kosiuczenko, P; Zielinski, Z, Springer Verlag , 2019, 830, Vol. 830, s. 166-172Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In this paper, we present challenges associated with monitoring and maintaining a large telecom system at Ericsson that was developed with a high degree of component reuse. The system consists of multiple services, composed of both legacy and modern systems, that are constantly changing and need to be adapted to changing business needs. The paper is based on first-hand experience from architecting, developing and maintaining such a system, pointing out current challenges and potential avenues for future research that might contribute to addressing them. © Springer Nature Switzerland AG 2019.

  • 523.
    Silvander, Johan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Business Process Optimization with Reinforcement Learning2019Inngår i: Lect. Notes Bus. Inf. Process., Springer Verlag , 2019, Vol. 356, s. 203-212Konferansepaper (Fagfellevurdert)
    Abstract [en]

    We investigate the use of deep reinforcement learning to optimize business processes in a business support system. The focus of this paper is to investigate how a reinforcement learning algorithm named Q-Learning, using deep learning, can be configured in order to support optimization of business processes in an environment which includes some degree of uncertainty. We make the investigation possible by implementing a software agent with the help of a deep learning tool set. The study shows that reinforcement learning is a useful technique for business process optimization but more guidance regarding parameter setting is needed in this area. © 2019, Springer Nature Switzerland AG.
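    The Q-Learning update rule the paper configures can be shown in tabular form. The paper uses deep learning as the function approximator; this tabular sketch only demonstrates the same update rule, and the toy "process" and its reward are invented.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            target = r + gamma * max(q[s2]) * (0.0 if done else 1.0)
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q

# Toy two-step "business process": action 1 advances a case, action 0 stalls.
def step(s, a):
    if a == 1:
        return s + 1, 1.0, s + 1 == 2  # reaching state 2 ends the episode
    return s, 0.0, False

q = q_learning(n_states=3, n_actions=2, step=step)
# The learned values prefer the advancing action in both non-terminal states.
assert q[0][1] > q[0][0] and q[1][1] > q[1][0]
```

    The epsilon parameter models the "degree of uncertainty" trade-off the paper mentions: exploration keeps the agent from locking into a suboptimal process path.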

  • 524.
    Silvander, Johan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Component Selection with Fuzzy Decision Making2018Inngår i: Procedia Computer Science, Elsevier B.V. , 2018, Vol. 126, s. 1378-1386Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In many situations a decision maker (DM) would like to grade a component, or rank several components of the same type. Often a component type has many features which are deemed valuable by the DM. Other vital features are not known by the DM but are needed for the component to function. However, it should be possible to guide the DM to find the desired business solution without requiring detailed knowledge of the component type from the DM. We propose a framework for component selection with the help of fuzzy decision making. The work is based on algorithms from fuzzy decision making, which we have adapted or extended. The framework was validated by practitioners, who found it useful. © 2018 The Author(s).
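    A minimal sketch of grading and ranking components by fuzzy feature memberships: each feature value is a membership degree in [0, 1] ("how well the feature is met"). The weighted average below is a stand-in for the paper's adapted algorithms, and the features and weights are hypothetical.

```python
def rank_components(components, weights):
    """Rank components by a weighted fuzzy score in [0, 1]."""
    def score(memberships):
        total_w = sum(weights.values())
        return sum(weights[f] * memberships[f] for f in weights) / total_w
    return sorted(components, key=lambda c: score(c["features"]), reverse=True)

# Hypothetical feature weights expressing the decision maker's priorities.
weights = {"reliability": 0.5, "usability": 0.3, "cost_fit": 0.2}
comps = [
    {"name": "A", "features": {"reliability": 0.9, "usability": 0.6, "cost_fit": 0.4}},
    {"name": "B", "features": {"reliability": 0.7, "usability": 0.9, "cost_fit": 0.8}},
]
print([c["name"] for c in rank_components(comps, weights)])  # -> ['B', 'A']
```

    Only the features the DM cares about appear in the weights; the framework's point is that vital-but-unknown features can be handled behind this interface without burdening the DM.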

    Fulltekst (pdf)
    fulltext
  • 525.
    Silvander, Johan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Towards Intent-Driven Systems2017Licentiatavhandling, med artikler (Annet vitenskapelig)
    Abstract [en]

    Context: Software supporting an enterprise’s business, also known as a business support system, needs to support the correlation of activities between actors as well as influence the activities based on knowledge about the value networks in which the enterprise acts. This can be supported with the help of intent-driven systems. The aim of intent-driven systems is to capture stakeholders’ intents and transform these into a form that enables computer processing of them. Only then are different machine actors able to negotiate with each other on behalf of their respective stakeholders and their intents, and suggest a mutually beneficial agreement.

    Objective: When building a business support system it is critical to separate the business model of the business support system itself from the business models used by the enterprise which is using the business support system. The core idea of intent-driven systems is the possibility to change the behavior of the system itself, based on stakeholder intents. This requires a separation of concerns between the parts of the system used to execute the stakeholder business, and the parts which are used to design the business based on stakeholder intents. The business studio is software that supports the realization of the business models used by the enterprise by configuring the capabilities provided by the business support system. The aim is to find out how we can support the design of a business studio which is based on intent-driven systems.

    Method: We are using the design science framework as our research framework. During our design science study we have used the following research methods: systematic literature review, case study, quasi-experiment, and action research.

    Results: We have produced two design artifacts as a start to be able to support the design of a business studio. These artifacts are the models and quasi-experiment in Chapter 3, and the action research in Chapter 4. The models found during the case study have proved to be a valuable artifact for the stakeholder. The results from the quasi-experiment and the action research are seen as new problem solving knowledge by the stakeholder.

    Conclusion: The synthesis shows a need for further research regarding semantic interchange of information, actor interaction in intent-driven systems, and the governance of intent-driven systems.

    Fulltekst (pdf)
    fulltext
  • 526.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Angelin, Lars
    Ericsson AB, SWE.
    Introducing intents to the OODA-loop2019Inngår i: Procedia Computer Science, Elsevier B.V. , 2019, s. 878-883Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Together with Ericsson AB, we are using the design science framework to investigate how to create an intent-driven system for their business support system and its business studio. The aim is to present our initial results on how an extended OODA-loop can be used to realize a robust, yet flexible, software architecture for an intent-driven system. We explain how an extended OODA-loop is constructed and provide suggestions for how different parts of it can be implemented. The initial results are promising, but further research is needed to use the extended OODA-loop as reusable components in intent-driven systems. Our next step is to extend the generic methods with knowledge representation and reasoning capabilities. © 2019 The Author(s). Published by Elsevier B.V.

    Fulltekst (pdf)
    IntroducingintentstotheOODA-loop
  • 527.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Svahnberg, Mikael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A Systematic Literature Review on Intent-Driven SystemsInngår i: Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Context: The aim of intent-driven systems is to capture stakeholders’ intents and transform these into a form that enables computer processing of the intents. Only then are different computer-based agents able to negotiate with each other on behalf of their respective stakeholders and their intents, and suggest a mutually beneficial agreement. This requires a separation of concerns between the parts of the system used to execute the stakeholder business, and the parts which are used to design the business based on stakeholder intents.

    Objective: The aim is to find out which methods/techniques, as well as enabling aspects, useful for an intent-driven system are covered by the research literature.

    Method: As a part of a design science study, a Systematic Literature Review is conducted.

    Results: Methods/techniques which can be used as building blocks to construct intent-driven systems exist in the literature. How these methods/techniques can interact with the aspects needed to enable flexible realizations of intent-driven systems is not evident in the existing literature.

    Conclusion: The synthesis shows a need for further research regarding semantic interchange of information, actor interaction in intent-driven systems, and the governance of intent-driven systems.

  • 528.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Svahnberg, Mikael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Towards Executable Business Rules2017Annet (Fagfellevurdert)
    Abstract [en]

    Context: In today's implementations of business support systems, business rules are configured in different places of the system, and in different formats. This makes it hard to have a common view of what is defined, and to execute the same logic in different parts of the system. A common governance structure and a standardized way of handling the business rules are desired.

    Objective: To investigate if it is possible to support visual and logical verification of business rules and to generate executable business rules.

    Method: Together with practitioners we conducted an experiment.

    Results: We have implemented a machine learning pipe-line which supports visual and logical verification of business rules, and the generation of executable business rules. From a machine learning perspective, we have added the possibility for the ID3 algorithm to use continuous features.

    Conclusion: The experiment shows that it is possible to support visual and logical verification of business rules, and to generate executable business rules with the help of a machine learning pipe-line.
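    Classic ID3 splits only on categorical features. A standard way to add continuous features, used here as a sketch of the idea rather than the paper's exact extension, is to try candidate thresholds between sorted values and keep the binary split with the highest information gain:

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a label list."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_threshold(values, labels):
    """Find the binary split v <= thr maximizing information gain."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best = (None, -1.0)
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no threshold between equal values
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs if v <= thr]
        right = [l for v, l in pairs if v > thr]
        gain = base - (len(left) * entropy(left)
                       + len(right) * entropy(right)) / len(pairs)
        if gain > best[1]:
            best = (thr, gain)
    return best

# Invented rule values: the split at 6.5 separates the classes perfectly.
thr, gain = best_threshold([1.0, 2.0, 3.0, 10.0, 11.0],
                           ["no", "no", "no", "yes", "yes"])
print(thr, round(gain, 3))  # -> 6.5 0.971
```

    The resulting threshold acts like a categorical test ("value <= 6.5?") inside the otherwise unchanged ID3 tree construction, which keeps the generated business rules readable.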

  • 529.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Svahnberg, Mikael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Uncover and Assess Rule Adherence Based on Decisions2018Inngår i: Lecture Notes in Business Information Processing / [ed] Shishkov B., Springer Verlag , 2018, Vol. 319, s. 249-259Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Context: Decisions taken by medical practitioners may be based on explicit and implicit rules. By uncovering these rules, a medical practitioner may be able to explain their decisions in a better way, both to themselves and to the person whom the decision affects. Objective: We investigate whether it is possible for a machine learning pipe-line to uncover rules used by medical practitioners when they decide if a patient can be operated on or not. The uncovered rules should have a linguistic meaning. Method: We evaluate two different algorithms, one of them developed by us and named “the membership detection algorithm”. The evaluation is done with the help of real-world data provided by a hospital. Results: The membership detection algorithm has a significantly better relevance measure compared to the second algorithm. Conclusion: A machine learning pipe-line based on our algorithm makes it possible to give medical practitioners an understanding of, or to question, how decisions have been taken. With the help of the uncovered fuzzy decision algorithm it is possible to test suggested changes to the feature limits. © Springer International Publishing AG, part of Springer Nature 2018.

  • 530.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Svahnberg, Mikael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Uncovering Implicit Rules in Medicine DiagnosisInngår i: Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Context: Decisions taken by experts may be based on explicit and implicit rules. By uncovering the implicit rules, the expert may be able to explain their decisions in a better way, both to themselves and to the person whom the decision affects. In the area of medicine, laws require the expert to be able to explain a decision when a patient complains about it. Another vital aspect is the expert's ability to explain to the patient why a certain decision is taken, and the risks associated with the decision.

    Objective: To investigate if it is possible for a machine learning pipe-line to find implicit rules used by experts when they decide whether a patient can be operated on or not.

    Method: We conduct an analysis of a data set containing information about patients and the decision whether an operation should be performed or not.

    Results: We have implemented a machine learning pipe-line which supports the detection of implicit rules in a data set. The detection of the implicit rules is supported by an algorithm which implements an agglomerative merging of feature values. We have improved the original algorithm by showing the borders of the feature values of a discretization bin.

    Conclusion: The analysis of the data set shows it is possible to find implicit rules used by the experts with the help of an agglomerative merging of feature values.
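    A simplified sketch of agglomerative merging of feature values into discretization bins with explicit borders: here neighbouring values merge when they share the same majority label. The paper's actual merging criterion and patient data are not reproduced; the values below are invented.

```python
def merge_bins(values, labels):
    """Merge adjacent single-value bins bottom-up and report each bin's
    borders, yielding (low, high, majority_label) tuples."""
    pairs = sorted(zip(values, labels))
    bins = [{"lo": v, "hi": v, "labels": [l]} for v, l in pairs]

    def majority(b):
        return max(set(b["labels"]), key=b["labels"].count)

    i = 0
    while i < len(bins) - 1:
        if majority(bins[i]) == majority(bins[i + 1]):
            # Same majority label: absorb the right neighbour.
            bins[i]["hi"] = bins[i + 1]["hi"]
            bins[i]["labels"] += bins[i + 1]["labels"]
            del bins[i + 1]
        else:
            i += 1
    return [(b["lo"], b["hi"], majority(b)) for b in bins]

# Hypothetical "age vs. operability" values.
print(merge_bins([18, 25, 30, 60, 70],
                 ["fit", "fit", "fit", "unfit", "unfit"]))
```

    The reported borders are what makes the uncovered rule explainable: "fit" between 18 and 30, "unfit" between 60 and 70, in this invented example.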

  • 531.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wilson, Magnus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wnuk, Krzysztof
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Encouraging Business Flexibility by Improved Context Descriptions2017Inngår i: Proceedings of the Seventh International Symposium on Business Modeling and Software Design / [ed] Boris Shishkov, SciTePress, 2017, Vol. 1, s. 225-228Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Business-driven software architectures are emerging and gaining importance in many industries. As software-intensive solutions continue to become more complex and operate in rapidly changing environments, there is pressure for increased business flexibility, realized by more efficient software architecture mechanisms, to keep up with the necessary speed of change. We investigate how improved context descriptions could be implemented in software components and support important software development practices like business modeling and requirements engineering. This paper proposes context descriptions as architectural support for improving the connection between business flexibility and software components. We provide initial results regarding software architectural mechanisms which can support context descriptions, as well as the context descriptions’ support for business-driven software architecture and the business flexibility demanded by business ecosystems.

  • 532.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wilson, Magnus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wnuk, Krzysztof
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Svahnberg, Mikael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Supporting Continuous Changes to Business Intents2017Inngår i: International journal of software engineering and knowledge engineering, ISSN 0218-1940, Vol. 27, nr 8, s. 1167-1198Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Context: Software supporting an enterprise’s business, also known as a business support system, needs to support the correlation of activities between actors as well as influence the activities based on knowledge about the value networks in which the enterprise acts. This requires the use of policies and rules to guide or enforce the execution of strategies or tactics within an enterprise as well as in collaborations between enterprises. With the help of policies and rules, an enterprise is able to capture an actor’s intent in its business support system, and act according to this intent on behalf of the actor. Since the value networks an enterprise is part of will change over time, the business intents’ life cycle states might change. Achieving the changes in an effective and efficient way requires knowledge about the affected intents and the correlation between intents.

    Objective: The aim of the study is to identify how a business support system can support continuous changes to business intents. The first step is to find a theoretical model which serves as a foundation for intent-driven systems.

    Method: We conducted a case study using a focus group approach with employees from Ericsson. This case study was influenced by the spiral case study process.

    Results: The study resulted in a model supporting continuous definition and execution of an enterprise. The model is divided into three layers; Define, Execute, and a com- mon governance view layer. This makes it possible to support continuous definition and execution of business intents and to identify the actors needed to support the business intents’ life cycles. This model is supported by a meta-model for capturing information into viewpoints.

    Conclusion: The research question is addressed by suggesting a solution supporting continuous definition and execution of an enterprise as a model of value architecture components and business functions. The results will affect how Ericsson will build the business studio for their next generation business support systems.

  • 533.
    Silvander, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wälitalo, Lisa
    Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för strategisk hållbar utveckling.
    Knowledge creation through a teaching and learning spiral2016Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Context: We have experienced that the use of a domain specific language sometimes makes it difficult to present domain knowledge to a group or an individual that has limited or different knowledge about the specific domain, and where the presenter and the audience do not have sufficient insight into each other's contexts. In order to create an environment where knowledge transfer can exist, it is vital to understand how the roles shift during the interaction between the participants. In an educational environment, Teaching and Learning Activities (TLA) could, in ideal situations, be invented during the design of the curriculum. This might not be the case when interacting with practitioners or students from diverse fields. This situation requires a method to find TLAs for the specific situation. For the domain knowledge to be useful for learners, it has to be connected to the context/domain where the learners are active. In this paper we combine a spiral learning process with constructive alignment, which resulted in a teaching and learning spiral process. The outcome of the teaching and learning spiral process is to provide the knowledge of using the introduced domain knowledge in a context/domain where the learners are active.

    Objective: The aim with this work is to present guidelines that will contribute to a more effective knowledge creation process in heterogeneous groups, both in an educational environment and in interaction with different groups of practitioners in society.

    Method: We conducted a case study using observations and surveys.

    Results: The results from our case study support a positive effect on the learning outcomes when adopting this methodology. The learning outcome is to gain a deeper understanding of the introduced domain knowledge and to be able to discuss how the new domain knowledge can be integrated into the learners' own context.

    Conclusions: We have formulated guidelines for how to use the teaching and learning spiral process in an effective and efficient way.

  • 534. Solinski, Adam
    et al.
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Prioritizing agile benefits and limitations in relation to practice usage2016Inngår i: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 24, nr 2, s. 447-482Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    In recent years, there has been a significant shift from rigid development (RD) toward agile. However, it has also been observed that agile methodologies are hardly ever followed in their pure form; hybrid processes combining RD and agile practices emerge. In addition, agile adoption has been reported to result in both benefits and limitations. This exploratory study (a) identifies development models based on RD and agile practice usage by practitioners; (b) identifies agile practice adoption scenarios based on eliciting practice usage over time; (c) prioritizes agile benefits and limitations in relation to (a) and (b). Practitioners provided answers through a questionnaire. The development models are determined using hierarchical cluster analysis. The use of practices over time is captured through an interactive board with practices and time indication sliders. This study uses the extended hierarchical voting analysis framework to investigate benefit and limitation prioritization. Four types of development models and six adoption scenarios have been identified. Overall, 45 practitioners participated in the prioritization study. A common benefit among all models and adoption patterns is knowledge and learning, while high requirements on professional skills were perceived as the main limitation. Furthermore, significant variances in terms of benefits and limitations have been observed between models and adoption patterns. The most significant internal benefit categories from adopting agile are knowledge and learning, employee satisfaction, social skill development, and feedback and confidence. Professional skill-specific demands, scalability, and lack of suitability for specific product domains are the main limitations of agile practice usage. Having a balanced agile process allows a high number of benefits to be achieved. With respect to adoption, a big bang transition from RD to agile leads to poor quality in comparison with the alternatives.
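The development models in the study were derived with hierarchical cluster analysis over practitioners' practice-usage answers. As an illustration only, a minimal single-linkage agglomerative clustering over binary practice profiles might look like the sketch below; the profiles, the Hamming distance, and k=2 are assumptions, not the study's actual data or metric.

```python
# Illustrative sketch: hierarchical (single-linkage) clustering of binary
# practice-usage profiles into "development models".

def hamming(a, b):
    # number of practices on which two practitioners disagree
    return sum(x != y for x, y in zip(a, b))

def agglomerate(profiles, k):
    """Merge the two closest clusters until only k clusters remain."""
    clusters = [[i] for i in range(len(profiles))]
    while len(clusters) > k:
        best = None  # (distance, i, j) of the closest cluster pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(hamming(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]  # single-linkage merge
        del clusters[j]
    return clusters

# Rows: practitioners; columns: uses a given practice (1) or not (0)
profiles = [
    (1, 1, 1, 0, 0),  # mostly agile practices
    (1, 1, 0, 0, 0),
    (0, 0, 0, 1, 1),  # mostly rigid-development practices
    (0, 0, 1, 1, 1),
]
print(agglomerate(profiles, 2))  # → [[0, 1], [2, 3]]
```

Each resulting cluster corresponds to one candidate development model; the real study would inspect the practice profiles within each cluster to name it.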

    Fulltekst (pdf)
    fulltext
  • 535.
    Somaraju, Dilip
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Prediction of Time, Cost and Effort needed for software organizations to transit from ISO 9001:2008 to ISO 9001:2015.: A Survey2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context. Several quality standards have been developed over the years in order to define quality metrics for an organization’s products and processes. One of the most famous among them is the ISO 9000 family of standards, which started several years ago. Since its beginning, the ISO standards have seen several upgrades. Currently, ISO 9001:2008 is in use, and it is being upgraded to ISO 9001:2015. Companies have to migrate to the new scheme within the prescribed three years in order to retain certification to the ISO 9001 standards. The present thesis is targeted at finding the expected changes and the work improvements in the context of software engineering.

    Objectives. The main aim of the study is to find the expected changes and the work improvements needed to migrate to the new version. This is done by fulfilling the following objectives: analyze the expected changes and the motivations for the changes in the new ISO 9001 version; understand the work and improvements required for a software organization to successfully upgrade its certification to the new ISO 9001:2015 version; and predict the estimated cost/time/effort that could be incurred for an organization to get certified to the forthcoming ISO version.

    Methods. In order to meet the objectives, a literature review was conducted and the changes incorporated in the new scheme were identified. A survey was then conducted in order to predict the impact on cost, time and effort of the new changes in moving from ISO 9001:2008 to ISO 9001:2015. The survey was sent only to software organizations, as the context of this study is restricted to quality in software engineering. The collected data was analyzed using bivariate analysis and the Friedman test in the SPSS tool.
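The Friedman test used in the analysis ranks related ratings (here cost, time, and effort) within each respondent and tests whether the rank distributions differ. A stdlib-only sketch of the basic test statistic, ignoring the tie-correction factor that tools such as SPSS apply, could look like this (the ratings below are invented, not the survey's data):

```python
def friedman_statistic(blocks):
    """Friedman chi-square over n respondents (rows) rating k related
    conditions (columns); basic formula without the tie correction."""
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for row in blocks:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:  # assign average ranks to runs of tied values
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            for t in range(i, j + 1):
                ranks[order[t]] = (i + j) / 2 + 1
            i = j + 1
        for c in range(k):
            rank_sums[c] += ranks[c]
    return 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

# Hypothetical ratings: each row is one respondent's (cost, time, effort) score
ratings = [(1, 2, 3), (1, 2, 3), (1, 2, 3), (1, 2, 3)]
print(friedman_statistic(ratings))  # → 8.0 (perfect agreement, n=4, k=3)
```

The statistic is then compared against a chi-square distribution with k−1 degrees of freedom to obtain a p-value, which is what SPSS reports.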

    Results. From the literature review, the changes brought about in the new scheme were identified. These changes made were used in the survey questionnaire designed. The survey questionnaire was designed to investigate the expectations of the organizations on the time taken, cost incurred and the effort needed to implement these changes. A total of 63 responses were recorded from the survey.

    Conclusions. From the analysis it was found that several key changes were introduced in the new scheme compared to the old one. From the survey responses, the cost needed for implementing the changes is expected to be moderate, the time needed is predicted to be less than one year, and the effort needed for implementing the changes was estimated to be high. Along with this, the document also holds clear results about the expected time, cost and effort estimates clause by clause, and the reasons for these assumptions.

    Fulltekst (pdf)
    fulltext
  • 536.
    Spandel, Daniel
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Kjellgren, Johannes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Choosing between Git and Subversion: How does the choice affect software developers?2014Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [en]

    Today a lot of software projects use version control systems for maintaining their software source code. There are a lot of version control systems, and the choice of which one to use is far from simple. Today the two biggest version control systems are Git and Subversion. In this paper we have found the main differences between the two, and investigated how the choice between them affects software developers. Although software developers in many aspects are unaffected by the choice, we did find some interesting findings. When using Git, our empirical study shows that software developers seem to check in their code to the main repository more frequently than they do when using Subversion. We also found indications that software developers tend to use Subversion with a graphical interface, whereas the preferred interface for working with Git seems to be the command line. We were also surprised by how insignificant the learning aspect of the systems seems to be for the developers. Our goal with this paper is to provide a foundation to stand upon when choosing which version control system to use for a software project.

    Fulltekst (pdf)
    FULLTEXT01
  • 537.
    Stade, Melanie
    et al.
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Fotrousi, Farnaz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Seyff, Norbert
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Albrecht, Oliver
    SEnerCon GmbH, DEU.
    Feedback Gathering from an Industrial Point of View2017Inngår i: Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017, Institute of Electrical and Electronics Engineers Inc. , 2017, s. 71-79Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Feedback communication channels allow end-users to express their needs, which can be considered in software development and evolution. Although feedback gathering and analysis have been identified as an important topic and several researchers have started their investigation, information is scarce on how software companies currently elicit end-user feedback. In this study, we explore the experiences of software companies with respect to feedback gathering. The results of a case study and online survey indicate two sides of the same coin: On the one hand, most software companies are aware of the relevance of end-user feedback for software evolution and provide feedback channels, which allow end-users to communicate their needs and problems. On the other hand, the quantity and quality of the feedback received varies. We conclude that software companies still do not fully exploit the potential of end-user feedback for software development and evolution. © 2017 IEEE.

  • 538.
    Stade, Melanie
    et al.
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Oriol, Marc
    Universitat Politecnica de Catalunya, ESP.
    Cabrera, Oscar
    Universitat Politecnica de Catalunya, ESP.
    Fotrousi, Farnaz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Schaniel, Ronnie
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Seyff, Norberg
    University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Schmidt, Oleg
    SEnerCon GmbH, DEU.
    Providing a user forum is not enough: First experiences of a software company with CrowdRE2017Inngår i: Proceedings - 2017 IEEE 25th International Requirements Engineering Conference Workshops, REW 2017, Institute of Electrical and Electronics Engineers Inc. , 2017, s. 164-169Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Crowd-based requirements engineering (CrowdRE) is promising to derive requirements by gathering and analyzing information from the crowd. Setting up CrowdRE in practice seems challenging, although first solutions to support CrowdRE exist. In this paper, we report on a German software company's experience on crowd involvement by using feedback communication channels and a monitoring solution for user-event data. In our case study, we identified several problem areas that a software company is confronted with to setup an environment for gathering requirements from the crowd. We conclude that a CrowdRE process cannot be implemented ad-hoc and that future work is needed to create and analyze a continuous feedback and monitoring data stream. © 2017 IEEE.

  • 539.
    Starefors, Henrik
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Persson, Rasmus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    MLID: A multilabel extension of the ID3 algorithm2016Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    Machine learning is a subfield within artificial intelligence that revolves around constructing algorithms that can learn from, and make predictions on, data. Instead of following strict and static instructions, the system operates by adapting and learning from input data in order to make predictions and decisions. This work focuses on a subcategory of machine learning called multilabel classification: the concept where items introduced to the system are categorized by an analytical model, learned through supervised learning, where each instance of the dataset can belong to multiple labels, or classes. This paper presents the task of implementing a multilabel classifier based on the ID3 algorithm, which we call MLID (Multilabel Iterative Dichotomiser). The solution is presented both in a sequentially executed version and a parallelized one. We also present a comparison based on accuracy and execution time, performed against algorithms of a similar nature, in order to evaluate the viability of using ID3 as a base to further expand and build upon with regard to multilabel classification. In order to evaluate the performance of the MLID algorithm, we have measured the execution time and accuracy, and made a summarization of precision and recall into what is called F-measure, which is the harmonic mean of both precision and sensitivity of the algorithm. These results are then compared to already defined and established algorithms, on a range of datasets of varying sizes, in order to assess the viability of the MLID algorithm. The results produced when comparing MLID against other multilabel algorithms such as Binary Relevance, Classifier Chains and Random Trees show that MLID can compete with other classifiers in terms of accuracy and F-measure, but in terms of training the algorithm, the time required is proven inferior. Through these results, we can conclude that MLID is a viable option to use as a multilabel classifier. Although some constraints inherited from the original ID3 algorithm do impede the full utility of the algorithm, we are certain that following the same path of development and improvement as ID3 experienced would allow MLID to develop into a suitable choice of algorithm for a diverse range of multilabel classification problems.
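The example-based F-measure the thesis reports (the harmonic mean of precision and recall) can be sketched for multilabel predictions as below; the label sets are illustrative, not the thesis's datasets.

```python
def multilabel_f1(true_sets, pred_sets):
    """Example-based precision, recall and F-measure for multilabel output."""
    p_sum = r_sum = 0.0
    for t, p in zip(true_sets, pred_sets):
        inter = len(t & p)  # labels predicted correctly for this instance
        p_sum += inter / len(p) if p else 1.0
        r_sum += inter / len(t) if t else 1.0
    n = len(true_sets)
    precision, recall = p_sum / n, r_sum / n
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Two instances: each true/predicted label set may contain several classes
true_labels = [{"sports", "news"}, {"music"}]
predicted   = [{"sports"},         {"music", "movies"}]
print(multilabel_f1(true_labels, predicted))  # → (0.75, 0.75, 0.75)
```

Averaging per example, rather than per label, is one of several common multilabel conventions; the thesis does not specify which averaging it used, so this is an assumption.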

    Fulltekst (pdf)
    BTH2016Starefors
  • 540.
    Strandberg, Jane
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Lyckne, Mattias
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Webbsäkerhet och vanliga brister: kunskapsläget bland utvecklare2014Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [sv]

    This bachelor thesis looks at developers' knowledge about web security, both regarding their own view of their knowledge and their actual knowledge about vulnerabilities and how to mitigate them. Web developers' knowledge of web security is becoming more and more important as more applications and services move to the web and more and more devices become connected to the internet. We investigate this by conducting a survey among developers who are currently studying or working in the field, to get a grip on the state of knowledge regarding the most common security concepts. What we saw was that the results vary between the different concepts, and many lack much of the web security knowledge that is becoming increasingly important to have.

    Fulltekst (pdf)
    FULLTEXT01
  • 541.
    Sulaman, Sardar Muhammad
    et al.
    Lund University, SWE.
    Beer, Armin
    Beer Test Consulting, AUT.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Höst, Martin
    Lund University, SWE.
    Comparison of the FMEA and STPA safety analysis methods: a case study2019Inngår i: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 27, nr 1, s. 349-387Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    As our society becomes more and more dependent on IT systems, failures of these systems can harm more and more people and organizations. Diligently performing risk and hazard analysis helps to minimize the potential harm of IT system failures on society and increases the probability of their undisturbed operation. Risk and hazard analysis is an important activity for the development and operation of critical software-intensive systems, but the increased complexity and size put additional requirements on the effectiveness of risk and hazard analysis methods. This paper presents a qualitative comparison of two hazard analysis methods, failure mode and effect analysis (FMEA) and system theoretic process analysis (STPA), using case study research methodology. Both methods have been applied on the same forward collision avoidance system to compare the effectiveness of the methods and to investigate what the main differences between them are. Furthermore, this study also evaluates the analysis process of both methods using qualitative criteria derived from the technology acceptance model (TAM). The results of the FMEA analysis were compared to the results of the STPA analysis, which were presented in a previous study; both analyses were conducted on the same forward collision avoidance system. The comparison shows that FMEA and STPA deliver similar analysis results.

    Fulltekst (pdf)
    fulltext
  • 542.
    Sun, Tao
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Product Context Analysis with Twitter Data2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context. For the product manager, product context analysis, which aims to align products to market needs, is very important. By understanding the market needs, the product manager learns the product context information about the environment in which the products are conceived and the business in which they take place. Product context analysis using this information helps the product manager find the accurate position of his/her products and supports product decision-making. The product context information can generally be found in user feedback, and the traditional techniques of acquiring user feedback can be replaced, at a lower cost, by collecting existing online user feedback. Researchers have studied online user feedback, and the results showed that it contains product context information. Therefore, in this study, I tried to elicit product context information from user feedback posted on Twitter.

    Objectives. The objectives of this study are: 1. to investigate what kinds of Apps can be used to collect more related Tweets, and 2. to investigate what kinds of product context information can be elicited from the collected Tweets.

    Methods. To achieve the first objective, I designed unified criteria for selecting Apps and collecting App-related Tweets, and then conducted a statistical analysis to find out which factor(s) affect(s) the Tweet collection. To achieve the second objective, I conducted a directed content analysis on the collected Tweets with an indicator for identifying the product context information, and then made a descriptive statistical analysis of the elicited product context information.
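A directed content analysis of this kind maps each Tweet to product-context categories via indicators. A toy keyword-based coder might look like the sketch below; the category names come from the study's results, but the keywords are invented for illustration, not the study's actual coding scheme.

```python
# Hypothetical indicator keywords per category (illustrative assumptions)
INDICATORS = {
    "user experience": ("love", "hate", "annoying", "awesome"),
    "platform": ("android", "ios", "iphone"),
    "competitor": ("better than", "switched from"),
}

def code_tweet(text):
    """Return the sorted categories whose indicator keywords appear in the text."""
    text = text.lower()
    return sorted(cat for cat, kws in INDICATORS.items()
                  if any(kw in text for kw in kws))

print(code_tweet("I love this app on my iPhone"))  # → ['platform', 'user experience']
```

In the actual study the coding was done manually against a defined indicator; an automated keyword match like this would only be a first-pass filter.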

    Results. I found that the top-ranked Apps, or Apps in a few themes such as “Health and Fitness” and “Games”, have more and fresher App-related Tweets. And from my collected Tweets, I could elicit at least 15 types of product context information; the types include “user experience”, “use case”, “partner”, “competitor”, “platforms” and so on.

    Conclusions. This is an exploratory study of eliciting product context information from Tweets. It presented the method of collecting App-related Tweets and eliciting product context information from them, showed what kinds of Apps are suitable for this, and showed what types of product context information can be elicited from the Tweets. This study makes us aware that Tweets can be used for product context analysis, and shows the appropriate conditions for using Tweets for that purpose.

    Fulltekst (pdf)
    fulltext
  • 543.
    Sundelin, Anders
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gonzalez-Huerta, Javier
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wnuk, Krzysztof
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Test-Driving FinTech Product Development: An Experience Report2018Inngår i: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Ciolkowski M.,Hebig R.,Kuhrmann M.,Pfahl D.,Tell P.,Amasaki S.,Kupper S.,Schneider K.,Klunder J., Springer, 2018, Vol. 112171, s. 219-226Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In this paper, we present experiences from eight years of developing a financial transaction engine, using what can be described as an integration-test-centric software development process. We discuss the product and the relation between three different categories of its software, and how the relative weight of these artifacts has varied over the years. In addition to the presentation, some challenges and future research directions are discussed.

    Fulltekst (pdf)
    fulltext
  • 544.
    Svahnberg, Mikael
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A model for assessing and re-assessing the value of software reuse2017Inngår i: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 29, nr 4Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Background: Software reuse is often seen as cost avoidance rather than as a gained value. This results in a rather one-sided debate where issues such as resource control, release schedule, quality, or reuse in more than one release are neglected. Aims: We propose a reuse value assessment framework, intended to provide a more nuanced view of the value and costs associated with different reuse candidates. Method: This framework is constructed based on findings from an interview study at a large software development company. Results: The framework considers the functionality, compliance to standards, provided quality, and provided support of a reuse candidate, thus enabling an informed comparison between different reuse candidates. Furthermore, the framework provides means for tracking the value of the reused asset throughout subsequent releases. Conclusions: The reuse value assessment framework is a tool to assist in the selection between different reuse candidates. The framework also provides a means to assess the current value of a reusable asset in a product, which can be used to indicate where maintenance efforts would increase the utilized potential of the reusable asset.
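One way such a framework can be operationalized is as a weighted score over the four dimensions it names (functionality, standards compliance, quality, support), re-computed per release to track value over time. The weights and ratings below are illustrative assumptions, not the paper's calibration.

```python
# Assumed relative weights for the framework's four dimensions
WEIGHTS = {"functionality": 0.40, "standards": 0.20, "quality": 0.25, "support": 0.15}

def reuse_value(ratings):
    """Weighted score (ratings on a 0-10 scale) for one reuse candidate."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

candidate_a = {"functionality": 8, "standards": 6, "quality": 7, "support": 5}
candidate_b = {"functionality": 6, "standards": 9, "quality": 8, "support": 9}
# Re-assessing in a later release is just re-scoring with updated ratings.
print(reuse_value(candidate_a), reuse_value(candidate_b))
```

Comparing the two scores gives the "informed comparison between different reuse candidates" the abstract describes; a drop in a candidate's score across releases would flag where maintenance effort is needed.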

  • 545.
    Svedklint, Mattias
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Bellstrand, Magnus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Prestanda och webbramverk2014Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [sv]

    In this study, ten common frameworks in the web industry were examined, both the most widely used frameworks and some newcomers that have grown considerably in recent years. To scale a website up to many users, it is important that the structure behind the site performs well, and therefore it is important to choose the right framework. So how should a web developer choose a framework in order to achieve good performance? It is well known that users leave pages as response time increases. Performance degrades quickly when dynamic content is handled, which leads to increased hardware costs to cope with the performance problems. To address this, this study contributes guidelines for choosing the right framework. Performance tests were carried out on the ten selected frameworks, and the fastest frameworks were then ranked, yielding a result that shows which framework performs best. An observation of the installation procedure was also carried out to identify problems that can arise when each framework is installed. It was also noted how well each framework's manual helped guide the installation and resolve problems that arose during the installation and configuration of the frameworks.

    Fulltekst (pdf)
    FULLTEXT01
  • 546.
    Svensgård, Simon
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Henriksson, Johannes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Mocking SaaS Cloud for Testing2017Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    In this paper we evaluate how software testing is affected by the usage of a mock-object, a dummy implementation of a real object, in place of having data in a cloud that is accessed through an API. We define the problems for testing that having data in the cloud brings, which of these problems a mock-object can remedy, and what problems there are with testing using the mock-object. We also evaluate whether testing using the mock-object can find the same faults as testing against the cloud and whether the same code can be covered by the tests. This is done at Blekinge Institute of Technology (BTH) by creating an integration system for the company Cybercom Sweden and Karlskrona Municipality. This integration system is made in C# and works by syncing schedules from Novaschem to a cloud service, Google Calendar. With this paper we show that a mock-object in place of a cloud is very useful for testing when it comes to clean-up, triggering certain states, and avoiding query limitations.
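The mock-object idea can be sketched with Python's unittest.mock (the thesis itself used C#); the CalendarCloud interface and event names here are hypothetical stand-ins, not the real Google Calendar API.

```python
from unittest import mock

class CalendarCloud:
    """Hypothetical cloud-calendar client interface (stand-in, not the real API)."""
    def list_events(self): ...
    def insert_event(self, event): ...

def sync(schedule, cloud):
    """Push schedule entries that are not yet in the cloud calendar."""
    existing = set(cloud.list_events())
    added = [e for e in schedule if e not in existing]
    for e in added:
        cloud.insert_event(e)
    return added

# The mock replaces the cloud: no network, no clean-up, no query limits,
# and any state (here: one pre-existing event) can be triggered at will.
cloud = mock.Mock(spec=CalendarCloud)
cloud.list_events.return_value = ["math-0800"]
added = sync(["math-0800", "physics-1000"], cloud)
cloud.insert_event.assert_called_once_with("physics-1000")
print(added)  # → ['physics-1000']
```

This illustrates exactly the three benefits the thesis found: tests leave no state behind, can set up arbitrary cloud states instantly, and never hit API query limits.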

    Fulltekst (pdf)
    fulltext
  • 547.
    Svensson Sand, Kim
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Eliasson, Tord
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A comparison of functional and object-oriented programming paradigms in JavaScript2017Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    There are multiple programming paradigms that have their own set of rules for how code should be written. Programming languages utilize one or multiple of these paradigms. In this thesis, we will compare object-oriented programming, which is the most used today with languages such as C++ and Java, and functional programming. Functional programming was introduced in the 1950s but suffered from performance issues, and has not been used much except in the academic world. However, for its ability to handle concurrency and big data, functional programming is of interest in the industry again with languages such as Scala. In functional programming, side effects (any interaction outside of the function) are avoided, as well as changing and saving state.

    To compare these paradigms we have chosen four different algorithms, which both of us have implemented twice, once according to object-oriented programming and once according to functional programming. All algorithms were implemented in JavaScript. JavaScript is a multi-paradigm language that supports both functional and object-oriented programming. For all implementations, we have measured development time, lines of code, execution time and memory usage. Our results show that object-oriented programming gave us better performance, but functional programming resulted in less code and a shorter development time.
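For illustration (in Python rather than the thesis's JavaScript), the same computation can be written in both styles; the mutable-accumulator class versus the side-effect-free fold mirrors the paradigm difference the thesis measures.

```python
from functools import reduce

# Object-oriented style: state lives in a mutable object
class Summer:
    def __init__(self):
        self.total = 0
    def add(self, x):
        self.total += x  # mutates state

def oo_sum(xs):
    s = Summer()
    for x in xs:
        s.add(x)
    return s.total

# Functional style: no mutation, no side effects
def fp_sum(xs):
    return reduce(lambda acc, x: acc + x, xs, 0)

print(oo_sum([1, 2, 3]), fp_sum([1, 2, 3]))  # → 6 6
```

As in the thesis's findings, the functional version is shorter, while the object-oriented version makes the intermediate state explicit (which can help performance-oriented tuning).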

    Fulltekst (pdf)
    fulltext
  • 548.
    Swahn, Henrik
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Pthreads and OpenMP: A  performance and productivity study2016Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    Today most computers have a multicore processor and depend on parallel execution to keep up with the demanding tasks that exist today, which forces developers to write software that can take advantage of multicore systems. There are multiple programming languages and frameworks that make it possible to execute code in parallel on different threads. This study looks at the performance and effort required to work with two of the frameworks available to the C programming language, POSIX Threads (Pthreads) and OpenMP. The performance is measured by parallelizing three algorithms, Matrix multiplication, Quick Sort and calculation of the Mandelbrot set, using both Pthreads and OpenMP, comparing first against a sequential version and then the parallel versions against each other. The effort required to modify the sequential program using OpenMP and Pthreads is measured in the number of lines of the final source code. The results show that OpenMP performs better than Pthreads on Matrix Multiplication and Mandelbrot set calculation but not on Quick Sort, because OpenMP has problems with recursion and Pthreads does not. OpenMP requires the least effort on all the tests, but because there is a large performance difference between OpenMP and Pthreads on Quick Sort, OpenMP cannot be recommended for parallelizing Quick Sort or other recursive programs.

    Fulltekst (pdf)
    BTH2016Swahn
  • 549.
    Tanveer, Binish
    et al.
    Fraunhofer Institute for Experimental Software Engineering IESE, DEU.
    Vollmer, Anna Maria
    Fraunhofer Institute for Experimental Software Engineering IESE, DEU.
    Braun, Stefan
    Insiders Technologies GmbH, DEU.
    Ali, Nauman bin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    An evaluation of effort estimation supported by change impact analysis in agile software development (2019). In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 31, no. 5, article id e2165. Article in journal (Peer reviewed)
    Abstract [en]

    In agile software development, functionality is added to the system in an incremental and iterative manner. Practitioners often rely on expert judgment to estimate the effort in this context. However, the impact of a change on the existing system can provide objective information that helps practitioners arrive at an informed estimate. In this regard, we have developed a hybrid method that utilizes change impact analysis information to improve effort estimation. We also developed an estimation model based on gradient boosted trees (GBT). In this study, we evaluate the performance and usefulness of our hybrid method with tool support and the GBT model in a live iteration at Insiders Technologies GmbH, a German software company. Additionally, the solution was assessed for perceived usefulness and understandability in a study with graduate and post-graduate students. The results from the industrial evaluation show that the proposed method produces more accurate estimates than only expert-based or only model-based estimates. Furthermore, both students and practitioners rated the usefulness and understandability of the method positively.

  • 550.
    Tempero, Ewan
    et al.
    Univ Auckland, NZL.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Angelis, Lefteris
    Aristotle Univ Thessaloniki, GRC.
    Barriers to Refactoring (2017). In: Communications of the ACM, ISSN 0001-0782, E-ISSN 1557-7317, Vol. 60, no. 10, pp. 54-61. Article in journal (Peer reviewed)
    Abstract [en]

    Refactoring [6] is something software developers like to do. They refactor a lot. But do they refactor as much as they would like? Are there barriers that prevent them from doing so? Refactoring is an important tool for improving quality. Many development methodologies rely on refactoring, especially agile methodologies but also those of more plan-driven organizations. If barriers exist, they would undermine the effectiveness of many product-development organizations. We conducted a large-scale survey in 2009 of 3,785 practitioners' use of object-oriented concepts [7], including questions as to whether they would refactor to deal with certain design problems. We expected either that practitioners would tell us our choice of design principles was inappropriate for basing a refactoring decision on, or that refactoring is the right decision to take when designs are believed to have quality problems. However, we were told that the decision of whether or not to refactor was driven by non-design considerations. It is now eight years since the survey, but little has changed in integrated development environment (IDE) support for refactoring, and what has changed has done little to address the barriers we identified.
