Search result list, 101 - 150 of 1676
  • 101.
    Auer, Florian
    et al.
    University of Innsbruck, DEU.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Lenarduzzi, Valentina
    Tampere University of Technology, FIN.
    Towards defining a microservice migration framework. 2018. In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2018, Vol. Part F147763. Conference paper (Refereed)
    Abstract [en]

    Microservices are increasingly popular. As a result, some companies have come to believe that microservices are the solution to all of their problems and rush to adopt them without sufficient knowledge of the impacts. Most of the time they expect to decrease their maintenance effort or to ease the deployment process. However, re-architecting a system to microservices is not always beneficial. In this work we propose a work plan to identify a decision framework that supports practitioners in understanding the possible benefits and issues of migration. This will lead to more reasoned decisions and mitigate the risk of migration. © 2018 Copyright held by the owner/author(s).

  • 102.
    Augustsson, Christopher
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Dynamic vs Static user-interface: Which one is easier to learn? And will it make you more efficient? 2019. Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Excel offers great flexibility and allows non-programmers to create complex functionality, but at the same time it can become very nested, with cells pointing to other cells, especially after many changes over an extended period. This has happened to ICS, a small company focused on calibration, among an array of services relating to material testing. The system they use for field calibrations today has become overly complicated and hard to maintain, and consists of multiple Excel spreadsheets. The conclusion is that a new system needs to be developed, but the question of how remains. By creating a prototype using modern web technologies, this study has evaluated whether a web application can meet the specific functional requirements ICS has and whether it is a suitable solution for a new system. The prototype was put through manual user tests, and the results show that the prototype meets all the requirements, meaning that a web application could work as a replacement. During the user tests, this study has also evaluated the differences in learnability and efficiency between the static user interface of the current Excel-based system and the dynamic user interface of the web-based prototype. The users performed a calibration with both systems, and parameters such as time to completion and number of errors made were recorded. By comparing the test results from both systems, this study concludes that a dynamic user interface is more likely to improve learnability for novice users, but has a low impact on efficiency for expert users.

  • 103.
    Avdic, Adnan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ekholm, Albin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Anomaly Detection in an e-Transaction System using Data Driven Machine Learning Models: An unsupervised learning approach in time-series data. 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background: Detecting anomalies in time-series data is a task that can be done with the help of data-driven machine learning models. This thesis investigates if, and how well, different machine learning models with an unsupervised approach can detect anomalies in the e-transaction system Ericsson Wallet Platform. The anomalies in our domain context are delays in the system.

    Objectives: The objective of this thesis is to compare four different machine learning models in order to find the most relevant one. The best-performing models are decided by the evaluation metric F1-score. An intersection of the best models is also evaluated in order to decrease the number of false positives and make the model more precise.

    Methods: A relevant time-series data sample with 10-minute-interval data points from the Ericsson Wallet Platform was used. A number of steps were taken, such as data handling, pre-processing, normalization, training and evaluation. Two relevant features were trained separately as one-dimensional data sets. The two features relevant for finding delays in the system, which were used in this thesis, are Mean wait (ms) and Mean * N, where N is the number of calls to the system. The evaluation metrics used are true positives, true negatives, false positives, false negatives, accuracy, precision, recall, F1-score and the Jaccard index. The Jaccard index is a metric that reveals how similar the algorithms are in their detections. Since the detection is binary, each data point in the time-series data is classified.

    Results: The results reveal the two best-performing models with regard to the F1-score. The intersection evaluation reveals if and how well a combination of the two best-performing models can reduce the number of false positives.

    Conclusions: The conclusion of this work is that some algorithms perform better than others. It is a proof of concept that such classification algorithms can separate normal from non-normal behavior in the domain of the Ericsson Wallet Platform.
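
    A minimal sketch of the evaluation described above, computing the F1-score and the Jaccard index for binary anomaly labels; the example labels and the use of scikit-learn are assumptions for illustration, not artifacts of the thesis:

    ```python
    # Illustrative sketch (not from the thesis): computing the evaluation
    # metrics named above for binary anomaly labels on a time series.
    from sklearn.metrics import precision_score, recall_score, f1_score, jaccard_score

    # Hypothetical ground truth and predictions: 1 = delay (anomaly), 0 = normal.
    y_true = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    y_pred = [0, 1, 1, 0, 0, 1, 0, 0, 1, 0]

    print("Precision:", precision_score(y_true, y_pred))
    print("Recall:   ", recall_score(y_true, y_pred))
    print("F1-score: ", f1_score(y_true, y_pred))
    # Jaccard index: |A intersect B| / |A union B| over the flagged points,
    # used in the thesis to compare how similar two detectors' detections are.
    print("Jaccard:  ", jaccard_score(y_true, y_pred))
    ```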

  • 104.
    Avritzer, Alberto
    et al.
    Beecham, Sarah
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kroll, Josiane
    Menaché, Daniel
    Noll, John
    Paasivaara, Maria
    Extending Survivability Models for Global Software Development with Media Synchronicity Theory. 2015. In: Proceedings of the IEEE 10th International Conference on Global Software Engineering, IEEE Communications Society, 2015, p. 23-32. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a new framework to assess the survivability of software projects, accounting for media capability details as introduced in Media Synchronicity Theory (MST). Specifically, we add to our global engineering framework an assessment of the impact of inadequate conveyance and convergence, available in the communication infrastructure selected for the project, on the system's ability to recover from project disasters. We propose an analytical model to assess how the project recovers from disasters related to process and communication failures. Our model is based on Media Synchronicity Theory to account for how information exchange impacts recovery. Then, using the proposed model, we evaluate how different interventions impact communication effectiveness. Finally, we parameterize and instantiate the proposed survivability model based on a data-gathering campaign comprising thirty surveys collected from senior global software development experts at ICGSE'2014 and GSD'2015.

  • 105.
    Avutu, Neeraj
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance Evaluation of MongoDB on Amazon Web Service and OpenStack. 2018. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context

    MongoDB is an open-source, scalable, NoSQL database that distributes data over many commodity servers. It avoids a single point of failure by copying and storing data in different locations. MongoDB uses a master-slave design rather than the ring topology used by Cassandra. Virtualization is the technique of hosting multiple virtual machines on a single physical host and utilizing them as separate machines. It is the fundamental technology that allows cloud computing to provide resource sharing among users.

    Objectives

    Studying and identifying MongoDB and virtualization on AWS and OpenStack. Experiments were conducted to identify the CPU utilization associated with deploying MongoDB instances on AWS and on a physical server arrangement, and to understand the effect of replication in the MongoDB instances on throughput, CPU utilization and latency.

    Methods

    Initially, a literature review was conducted to design the experiment around the stated problems. A three-node MongoDB cluster was run on Amazon EC2 and OpenStack Nova with Ubuntu 16.04 LTS as the operating system. Latency, throughput and CPU utilization were measured using this setup. The procedure was repeated for a five-node MongoDB cluster and a three-node production cluster with the six YCSB workload types.

    Results

    Virtualization overhead has been identified in terms of CPU utilization, and the effects of virtualization on MongoDB were determined in terms of CPU utilization, latency and throughput.

    Conclusions

    It is concluded that latency decreases and throughput increases as the number of nodes increases. Due to replication, an increase in latency was observed.
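
    As a sketch of how such measurements are commonly collected, the following fragment extracts throughput and average latency from a YCSB summary file; the file name and the selected metrics are assumptions, and the exact setup in the thesis may differ:

    ```python
    # Illustrative sketch: extract throughput and average latencies from a
    # YCSB summary file (assumed format: "[SECTION], Metric, Value" per line).
    import csv

    metrics = {}
    with open("ycsb_run.log") as f:           # hypothetical output file
        for row in csv.reader(f):
            if len(row) != 3:
                continue
            section, metric, value = (c.strip() for c in row)
            if metric in ("Throughput(ops/sec)", "AverageLatency(us)"):
                metrics[(section, metric)] = float(value)

    print("Throughput:", metrics.get(("[OVERALL]", "Throughput(ops/sec)")), "ops/sec")
    print("Read latency:", metrics.get(("[READ]", "AverageLatency(us)")), "us")
    ```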

  • 106.
    Axelsson, Arvid
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Light Field Coding Using Panoramic Projection. 2014. Student thesis
    Abstract [en]

    A new generation of 3D displays provides depth perception without the need for glasses and allows the viewer to see content from many different directions. Providing video for these displays requires capturing the scene with several cameras at different viewpoints, the data from which together form light field video. Encoding such video with existing video coding requires a large amount of data, and it increases quickly with a higher number of views, which this application needs. One such coding is the multiview extension of High Efficiency Video Coding (MV-HEVC), which encodes a number of similar video streams as different layers. A new coding scheme for light field video, called Panoramic Light Field (PLF), is implemented and evaluated in this thesis. The main idea behind the coding is to project all points in a scene that are visible from any of the viewpoints to a single, global view, similar to how texture mapping maps a texture onto a 3D model in computer graphics. Whereas objects ordinarily shift position in the frame as the camera position changes, this is not the case when using this projection: a visible point in space is projected to the same image pixel regardless of viewpoint, resulting in large similarities between images from different viewpoints. The similarity between the layers in light field video helps to achieve more efficient compression when the projection is combined with existing multiview coding. In order to evaluate the scheme, 3D content was created and software was developed to encode it using PLF. Video using this coding is compared to existing technology: a straightforward encoding of the views using MV-HEVC. The results show that the PLF coding performs better on the sample content at lower quality levels, while it is worse at higher bitrates due to quality loss from the projection procedure. It is concluded that PLF is a promising technology, and suggestions are given for future research that may improve its performance further.
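
    To make the projection idea concrete, here is a small sketch that maps a 3D scene point to a global panoramic image; an equirectangular mapping is assumed purely for illustration, as the thesis does not state the exact projection here:

    ```python
    # Illustrative sketch: map a 3D point (relative to a global projection
    # centre) to pixel coordinates in an equirectangular panorama. The same
    # scene point always lands on the same pixel, independent of which
    # camera viewpoint observed it -- the property PLF exploits for coding.
    import math

    def panoramic_project(x, y, z, width, height):
        r = math.sqrt(x * x + y * y + z * z)
        theta = math.atan2(x, z)          # azimuth in (-pi, pi]
        phi = math.acos(y / r)            # inclination in [0, pi]
        u = (theta / (2 * math.pi) + 0.5) * width
        v = (phi / math.pi) * height
        return int(u) % width, min(int(v), height - 1)

    print(panoramic_project(1.0, 0.5, 2.0, 4096, 2048))
    ```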

  • 107.
    Axelsson, Erika
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Hjältarna vi ser: Hur barn blir påverkade av media. 2018. Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis
    Abstract [sv]

    The theme of this work is to examine how women are portrayed as superheroes, and whether there are any differences between the female and male heroes. The work also examines how children are influenced by media, how children analyze what they see, and how they express themselves in pictures. The method consists partly of a workshop with third-grade pupils to investigate which superheroes spontaneously came to the children's minds. The children then painted their own superheroes, and the paintings were analyzed to see whether boys and girls painted their superheroes differently. The children painted their drawings differently; one example was that body build differed between the boys' and the girls' paintings. The second part of the method is an analysis of four films using the Bechdel test, to investigate whether and how women's roles have changed over time. It was not always obvious that a film would pass. Today there are differences in how men and women are portrayed in superhero films. This is noticeable partly in their powers and partly in how they learn to master them. Although women are given more and more space, men make up the largest part of the films, as men usually play the lead role.

  • 108.
    Axelsson, Jonas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparison of user accuracy and speed when performing 3D game target practice using a computer monitor and virtual reality headset. 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Consumer-grade Virtual Reality (VR) headsets are on the rise, and with them comes an increasing number of digital games that support VR. How players perceive the gameplay and how well they perform the game's tasks can be key factors in designing new games.

    This master's thesis aims to evaluate whether a user can perform a game task, specifically target practice, in less time and/or more accurately when using a VR headset as opposed to a computer screen and mouse. To gather statistics and measure the differences, an experiment was conducted using a test application developed alongside this report. The experiment recorded accuracy scores and time taken in tests performed by 35 participants using both a VR headset and a computer screen.

    The resulting data sets are presented in the results chapter of this report. A Kolmogorov-Smirnov normality test and Student's paired-samples t-test were performed on the data to establish its statistical significance. After analysis, the results are reviewed and discussed, and conclusions are drawn.

    This study concludes that when performing the experiment, the use of a VR headset decreased the users' accuracy and, to a lesser extent, increased the time users took to hit all targets. An argument was made that most users' longer previous experience with a computer screen and mouse gave that method an unfair advantage; with equally long training, VR use might score similar results.
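
    A minimal sketch of the two statistical tests named above, applied to invented paired completion times:

    ```python
    # Illustrative sketch: the tests named above on hypothetical paired
    # completion times (seconds) for the same users under both conditions.
    from scipy import stats

    monitor_times = [41.2, 38.5, 44.1, 39.8, 42.3, 40.7]   # invented data
    vr_times      = [45.9, 41.3, 47.2, 44.0, 46.8, 43.5]   # invented data

    diff = [a - b for a, b in zip(vr_times, monitor_times)]

    # Kolmogorov-Smirnov test of the paired differences against a fitted normal.
    mu = sum(diff) / len(diff)
    sd = (sum((d - mu) ** 2 for d in diff) / (len(diff) - 1)) ** 0.5
    print(stats.kstest(diff, "norm", args=(mu, sd)))

    # Student's paired-samples t-test.
    print(stats.ttest_rel(vr_times, monitor_times))
    ```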

  • 109.
    Ayyagari, Nitin Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Databases For Mediation Systems: Design and Data scaling approach. 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: There is continuous growth in data generation due to the wide usage of modern communication systems. Systems have to be designed that can handle the processing of these data volumes efficiently. Mediation systems are meant to serve this purpose, and databases form an integral part of them. The suitability of databases for such systems is the principal theme of this work.

    Objectives: The objective of this thesis is to identify the key requirements for databases that can be used as part of mediation systems, gain a thorough understanding of various features and the data models commonly used in databases, and benchmark their performance.

    Methods: Previous work on various databases is studied as part of a literature review. A test bed is set up as part of an experiment, and performance metrics such as throughput and total time taken are measured through a Java-based client. A thorough analysis is carried out by varying parameters such as data volumes and the number of threads in the client.

    Results: Cassandra has very good write performance for event and batch operations. Cassandra has slightly better read performance than MySQL Cluster, but this difference withers away with a smaller number of threads in the client.

    Conclusions: On evaluating MySQL Cluster and Cassandra, we conclude that they have several features suitable for mediation systems. On the other hand, Cassandra does not guarantee ACID transactions, while MySQL Cluster has good support for them. There is a need for further evaluation of new-generation databases, which are not yet mature.
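
    A rough sketch of this kind of client-side benchmarking, using Python threads and a stand-in operation instead of the thesis's Java client; all numbers and names are invented:

    ```python
    # Illustrative sketch: measure throughput of a batch of operations with a
    # varying number of client threads. `do_insert` stands in for a real
    # database write (Cassandra/MySQL Cluster in the thesis).
    import time
    from concurrent.futures import ThreadPoolExecutor

    def do_insert(i):
        time.sleep(0.001)   # placeholder for a real insert round-trip

    def run_benchmark(n_ops=1000, n_threads=8):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=n_threads) as pool:
            list(pool.map(do_insert, range(n_ops)))
        total = time.perf_counter() - start
        return total, n_ops / total

    for threads in (1, 4, 8, 16):
        total, tput = run_benchmark(n_threads=threads)
        print(f"{threads:2d} threads: {total:.2f}s total, {tput:.0f} ops/s")
    ```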

  • 110.
    Baca, Dejan
    et al.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Carlsson, Bengt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Jacobsson, Andreas
    A Novel Security-Enhanced Agile Software Development Process Applied in an Industrial Setting. 2015. In: Proceedings 10th International Conference on Availability, Reliability and Security ARES 2015, IEEE Computer Society Digital Library, 2015. Conference paper (Refereed)
    Abstract [en]

    A security-enhanced agile software development process, SEAP, is introduced in the development of a mobile money transfer system at Ericsson Corp. A specific characteristic of SEAP is that it includes a security group consisting of four different competences, i.e., security manager, security architect, security master and penetration tester. Another significant feature of SEAP is an integrated risk analysis process. In analyzing risks in the development of the mobile money transfer system, a general finding was that SEAP either solves risks that were previously postponed or solves a larger proportion of the risks in a timely manner. The previous software development process, i.e., the baseline process of the comparison outlined in this paper, required 2.7 employee-hours for every risk identified in the analysis process, compared to, on average, 1.5 hours for SEAP. The baseline development process left 50% of the risks unattended in the software version being developed, while SEAP reduced that figure to 22%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.1%, i.e., more than a fivefold increase. This is important, since an early correction may avoid severe attacks in the future. The security competence in SEAP accounts for 5% of the personnel cost in the mobile money transfer system project. As a comparison, the corresponding figure for security was 1% in the previous development process.

  • 111.
    Bachu, Rajesh
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A framework to migrate and replicate VMware Virtual Machines to Amazon Elastic Compute Cloud: Performance comparison between on premise and the migrated Virtual Machine. 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: Cloud computing is the new trend in the IT industry. Traditionally, obtaining servers was quite time-consuming for companies: the whole process of researching what kind of hardware to buy, getting budget approval, purchasing the hardware and getting access to the servers could take weeks or months. In order to save time and reduce expenses, most companies are moving towards the cloud. One well-known cloud offering is Amazon Elastic Compute Cloud (EC2). Amazon EC2 makes it easy for companies to obtain virtual servers (known as compute instances) in a cloud quickly and inexpensively. Another advantage of Amazon EC2 is the flexibility it offers: companies can import/export the Virtual Machines (VMs) they have built to meet their IT security, configuration, management and compliance requirements into Amazon EC2.

    Objectives: In this thesis, we investigate importing a VM running on VMware into Amazon EC2. In addition, we make a performance comparison between a VM running on VMware and a VM with the same image running on Amazon EC2.

    Methods: Case study research was conducted to select a suitable method to migrate VMware VMs to Amazon EC2. In addition, an experimental study was conducted to measure the performance of a Virtual Machine running on VMware and compare it with the same Virtual Machine running on EC2. We measure performance in terms of CPU and memory utilization as well as disk read/write speed, using well-known open-source benchmarks from the Phoronix Test Suite (PTS).

    Results: Importing VM snapshots (VMDK, VHD and RAW formats) to EC2 was investigated using three methods provided by AWS. Performance was compared by running each benchmark 25 times on each Virtual Machine.

    Conclusions: Importing the VM to EC2 was successful only with the RAW format, and exact replication was not achieved, as AWS installs some software and drivers while importing the VM to EC2. The migrated EC2 VM performs better than the on-premise VMware VM in terms of CPU utilization, memory utilization and disk read/write speed.
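
    One of AWS's import paths, the VM Import/Export API, can be driven roughly as below; this is a general sketch with placeholder bucket and key names, not the exact procedure used in the thesis:

    ```python
    # Illustrative sketch: starting a VM import to EC2 with the VM
    # Import/Export API via boto3. Bucket, key and region are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    response = ec2.import_image(
        Description="VMware VM migrated as RAW image",
        DiskContainers=[{
            "Description": "root disk",
            "Format": "raw",                       # RAW worked in the thesis
            "UserBucket": {
                "S3Bucket": "my-vm-images",        # placeholder bucket
                "S3Key": "exported-vm.raw",        # placeholder key
            },
        }],
    )
    print("Import task:", response["ImportTaskId"])
    # Progress can then be polled with ec2.describe_import_image_tasks(...)
    ```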

  • 112.
    Badampudi, Deepika
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Reporting Ethics Considerations in Software Engineering Publications. 2017. In: 11th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM 2017), IEEE, 2017, p. 205-210. Conference paper (Refereed)
    Abstract [en]

    Ethical guidelines of software engineering journals require authors to provide statements related to conflicts of interest and the process of obtaining consent (if human subjects are involved). The objective of this study is to review the reporting of ethical considerations in Empirical Software Engineering - An International Journal. The results indicate that two out of seven studies reported some ethical information, however not explicitly. The ethical discussions focused on anonymity and confidentiality. Ethical aspects such as competence, comprehensibility and vulnerability of the subjects were not discussed in any of the papers reviewed in this study. It is important not only to state that consent was obtained; the procedure for obtaining consent should also be reported, to improve accountability and trust.

  • 113.
    Badampudi, Deepika
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards decision-making to choose among different component origins. 2016. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Context: The amount of software in solutions provided in various domains is continuously growing. These solutions are a mix of hardware and software, often referred to as software-intensive systems. Companies seek to improve the software development process to avoid delays or cost overruns.

    Objective: The overall goal of this thesis is to improve the software development/building process to provide timely, high-quality and cost-efficient solutions. The objective is to select the origin of the components (in-house, outsourced, components off-the-shelf (COTS) or open source software (OSS)) that facilitates this improvement. A system can be built of components from one origin or from a combination of two or more (or even all) origins. Selecting a proper origin for a component is important to get the most out of it and to optimize the development.

    Method: It is necessary to investigate the component origins in order to decide among them. We conducted a case study to explore the existing challenges in software development. The next step was to identify factors that influence the choice among different component origins, through a systematic literature review using a snowballing (SB) strategy and a database (DB) search. Furthermore, a Bayesian synthesis process is proposed to integrate the evidence from the literature into practice.

    Results: The results of this thesis indicate that the context of software-intensive systems, such as domain regulations, hinders software development improvement. In addition to in-house development, alternative component origins (outsourcing, COTS, and OSS) are being used for software development. Several factors, such as time, cost and license implications, influence the selection of component origins. Solutions have been proposed to support the decision-making; however, these solutions consider only a subset of the factors identified in the literature.

    Conclusions: Each component origin has advantages and disadvantages. Depending on the scenario, one component origin is more suitable than the others. It is important to investigate the different scenarios and the suitability of the component origins, which is recognized as future work of this thesis. In addition, future work is aimed at providing models to support the decision-making process.

  • 114.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Modern code reviews - Preliminary results of a systematic mapping study. 2019. In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2019, p. 340-345. Conference paper (Refereed)
    Abstract [en]

    Reviewing source code is a common practice in a modern and collaborative coding environment. In the past few years, research on modern code reviews has gained interest among practitioners and researchers. The objective of our investigation is to observe the evolution of research related to modern code reviews, identify research gaps and serve as a basis for future research. We use a systematic mapping approach to identify and classify 177 research papers. As a preliminary result of our investigation, we present in this paper a classification scheme of the main contributions of modern code review research between 2005 and 2018. © 2019 Association for Computing Machinery.

  • 115.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Component Decision-making: In-house, OSS, COTS or Outsourcing: A Systematic Literature Review. 2016. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 121, p. 105-124. Article in journal (Refereed)
    Abstract [en]

    Component-based software systems require decisions on component origins for acquiring components. A component origin is an alternative for where to get a component from. Objective: To identify factors that could influence the decision to choose among different component origins, and solutions for decision-making (for example, optimization) in the literature. Method: A systematic review of peer-reviewed literature has been conducted. Results: In total we included 24 primary studies. The component origins compared were mainly in-house vs. COTS and COTS vs. OSS. We identified 11 factors affecting or influencing the decision to select a component origin. When component origins were compared, there was little evidence on the relative (either positive or negative) effect of a component origin on the factor. Most of the solutions were proposed for in-house vs. COTS selection, and time, cost and reliability were the factors most considered in the solutions. Optimization models were the technique most commonly proposed in the solutions. Conclusion: The topic of choosing component origins is a green field for research, in great need of empirical comparisons between the component origins, as well as of how to decide between different combinations of them.

  • 116.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Franke, Ulrik
    Swedish Institute of Computer Science, SWE.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Cicchetti, Antonio
    Mälardalens högskola, SWE.
    A decision-making process-line for selection of software asset origins and components. 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 135, p. 88-104. Article in journal (Refereed)
    Abstract [en]

    Selecting sourcing options for software assets and components is an important process that helps companies to gain and keep their competitive advantage. The sourcing options include in-house, COTS, open source and outsourcing. The objective of this paper is to further refine, extend and validate a solution presented in our previous work. The refinement includes a set of decision-making activities, described in the form of a process-line that decision-makers can use to build their specific decision-making process. We conducted five case studies in three companies to validate the coverage of the set of decision-making activities. The solution from our previous work was validated in two cases in the first two companies. In the validation, it was observed that no activity in the proposed set was perceived to be missing, although not all activities were conducted, and the activities that were conducted were not executed in a specific order. Therefore, the refinement of the solution into a process-line approach increases flexibility and is thus better at capturing the differences in the decision-making processes observed in the case studies. The applicability of the process-line was then validated in three case studies in a third company. © 2017 Elsevier Inc.

  • 117.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Bayesian Synthesis for Knowledge Translation in Software Engineering: Method and Illustration. 2016. In: 2016 42nd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), IEEE, 2016. Conference paper (Refereed)
    Abstract [en]

    Systematic literature reviews in software engineering are necessary to synthesize evidence from multiple studies to provide knowledge and decision support. However, synthesis methods are underutilized in software engineering research. Moreover, translation of synthesized data (the outcomes of a systematic review) into recommendations for practitioners is seldom practiced. The objective of this paper is to introduce the use of Bayesian synthesis in software engineering research, in particular to translate research evidence into practice by providing the possibility to combine contextualized expert opinions with research evidence. We adopted the Bayesian synthesis method from health research and customized it for use in software engineering research. The proposed method is described and illustrated using an example from the literature. Bayesian synthesis provides a systematic approach to incorporating subjective opinions in the synthesis process, thereby making the synthesis results more suitable to the context in which they will be applied and facilitating the interpretation and translation of knowledge into action. None of the synthesis methods used in software engineering allows for the integration of subjective opinions; hence, Bayesian synthesis can add a new dimension to the synthesis process in software engineering research.
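
    To make the idea concrete, a minimal Beta-Binomial sketch of combining a subjective prior with pooled study evidence; it illustrates Bayesian updating in general, not the paper's specific synthesis procedure:

    ```python
    # Illustrative sketch: Bayesian updating with a Beta prior. An expert's
    # contextualized opinion is encoded as Beta(a, b); evidence from studies
    # (successes/trials, e.g. projects where a practice helped) updates it.
    # All numbers are invented for illustration.
    a_prior, b_prior = 4, 2          # expert leans positive: mean 4/6 ~ 0.67

    successes, trials = 9, 16        # pooled evidence from reviewed studies

    a_post = a_prior + successes
    b_post = b_prior + (trials - successes)

    posterior_mean = a_post / (a_post + b_post)
    print(f"Posterior Beta({a_post}, {b_post}), mean = {posterior_mean:.2f}")
    ```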

  • 118.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Contextualizing research evidence through knowledge translation in software engineering. 2019. In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2019, p. 306-311. Conference paper (Refereed)
    Abstract [en]

    Usage of software engineering research in industrial practice is a well-known challenge. Synthesis of knowledge from multiple research studies is needed to provide evidence-based decision support for industry. The objective of this paper is to present a vision of what a knowledge translation framework might look like in software engineering research, in particular how to translate research evidence into practice by combining contextualized expert opinions with research evidence. We adopted the knowledge translation framework from health care research and adapted and combined it with a Bayesian synthesis method. The framework provided in this paper includes a description of each step of knowledge translation in software engineering. Knowledge translation using Bayesian synthesis intends to provide a systematic approach towards contextualized, collaborative and consensus-driven application of research results. In conclusion, this paper contributes towards the application of knowledge translation in software engineering through the presented framework. © 2019 Association for Computing Machinery.

  • 119.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Guidelines for Knowledge Translation in Software Engineering. Article in journal (Refereed)
  • 120.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Experiences from Using Snowballing and Database Searches in Systematic Literature Studies. 2015. Conference paper (Refereed)
    Abstract [en]

    Background: Systematic literature studies are commonly used in software engineering. There are two main ways of conducting the searches for these types of studies: snowballing and database searches. In snowballing, the reference lists (backward snowballing - BSB) and citations (forward snowballing - FSB) of relevant papers are reviewed to identify new papers, whereas in a database search, different databases are searched using predefined search strings to identify new papers. Objective: Snowballing has not been used as extensively as database search, hence it is important to evaluate its efficiency and reliability when used as a search strategy in literature studies, and to compare it to database searches. Method: In this paper, we applied snowballing in a literature study and reflected on the outcome. We also compared database search with backward and forward snowballing. Database search and snowballing were conducted independently by different researchers, and the searches of our literature study were compared with respect to the efficiency and reliability of the findings. Results: Out of the total number of papers found, snowballing identified 83% of the papers, compared to 46% for the database search. Snowballing failed to identify a few relevant papers, which potentially could have been addressed by identifying a more comprehensive start set. Conclusion: The efficiency of snowballing is comparable to database search. It can potentially be more reliable than a database search; however, the reliability is highly dependent on the creation of a suitable start set.
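
    The snowballing procedure itself can be sketched as an iteration over a citation graph; the graphs and the relevance check below are invented placeholders:

    ```python
    # Illustrative sketch of snowballing: starting from a seed set, backward
    # snowballing (BSB) follows reference lists and forward snowballing (FSB)
    # follows citations, iterating until no new relevant papers appear.
    references = {"P1": ["P2", "P3"], "P2": ["P4"], "P3": [], "P4": []}
    citations  = {"P1": ["P5"], "P2": [], "P3": ["P5"], "P4": [], "P5": []}

    def is_relevant(paper):
        return True                      # stand-in for manual screening

    def snowball(start_set):
        included, frontier = set(start_set), set(start_set)
        while frontier:
            candidates = set()
            for p in frontier:
                candidates |= set(references.get(p, []))   # BSB
                candidates |= set(citations.get(p, []))    # FSB
            frontier = {p for p in candidates - included if is_relevant(p)}
            included |= frontier
        return included

    print(sorted(snowball({"P1"})))      # -> ['P1', 'P2', 'P3', 'P4', 'P5']
    ```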

  • 121.
    Bai, Guohua
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    An Organic View of Prototyping in Information System Development. 2014. In: 2014 IEEE 17th International Conference on Computational Science and Engineering (CSE) / [ed] Liu, X; ElBaz, D; Hsu, CH; Kang, K; Chen, W, ChengDu: IEEE, 2014, Article number 07023844, p. 1814-1818. Conference paper (Refereed)
    Abstract [en]

    This paper presents an organic view of prototyping for managing the dynamic factors involved in the evolutionary design of information systems (IS). These dynamic factors can be caused by, for example, continuing suggestions from users, changes in technology, and stepwise progress related to user-designer learning. Expanding evolutionary prototyping to 'start small and grow', the organic view of prototyping proposes two prerequisites for doing so, namely 1) a sustainable and adaptive 'embryo', an organic structure of the future system, and 2) embedded learning and feedback management through which the actors of the system (users, designers, decision makers, administrators) can communicate with each other. An example of eHealth system design demonstrates how these prerequisites can be implemented.

  • 122.
    Bakhtyar, Shoaib
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Designing Electronic Waybill Solutions for Road Freight Transport. 2016. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In freight transportation, a waybill is an important document that contains essential information about a consignment. The focus of this thesis is on a multi-purpose electronic waybill (e-Waybill) service, which can provide the functions of a paper waybill and is capable of storing, at least, the information present in a paper waybill. In addition, the service can be used to support other existing Intelligent Transportation System (ITS) services by utilizing synergies with them. Additionally, information entities from the e-Waybill service are investigated for the purpose of knowledge-building concerning freight flows.

    A systematic review of the state of the art of the e-Waybill service reveals several limitations, such as a limited focus on supporting ITS services. Five different conceptual e-Waybill solutions (which can be seen as abstract system designs for implementing the e-Waybill service) are proposed. The solutions are investigated for functional and technical (non-functional) requirements, which can potentially impose constraints on a potential system for implementing the e-Waybill service. Further, the service is investigated for information and functional synergies with other ITS services. For the information synergy analysis, the required input information entities for different ITS services are identified; if at least one information entity can be provided by an e-Waybill at the right location, we regard it as a synergy. Additionally, a service design method is proposed for supporting the process of designing new ITS services, which primarily utilizes functional synergies between the e-Waybill and different existing ITS services. The suggested method is applied to designing a new ITS service, the Liability Intelligent Transport System (LITS) service, whose purpose is to support the process of identifying when and where a consignment was damaged and who was responsible when the damage occurred. Furthermore, information entities from e-Waybills are utilized for building improved knowledge concerning freight flows. A freight and route estimation method is proposed for building improved knowledge, e.g., in national road administrations, on the movement of trucks and freight.

    The results from this thesis can be used to support the choice of a practical e-Waybill service implementation, which has the possibility to provide high synergy with ITS services. This may lead to higher utilization of ITS services and more sustainable transport, e.g., in terms of reduced congestion and emissions. Furthermore, the implemented e-Waybill service can be an enabler for collecting consignment and traffic data and converting the data into useful traffic information. In particular, the service can lead to increasing amounts of digitally stored data about consignments, which can lead to improved knowledge on the movement of freight and trucks. This knowledge may be helpful when making decisions concerning road taxes, fees, and infrastructure investments.

  • 123.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ghazi, Ahmad Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    On Improving Research Methodology Course at Blekinge Institute of Technology. 2016. Conference paper (Refereed)
    Abstract [en]

    The Research Methodology in Software Engineering and Computer Science (RM) course is a compulsory course that graduate students at Blekinge Institute of Technology (BTH) must take prior to undertaking their thesis work. The course is focused on teaching research methods and techniques for data collection and analysis in the fields of Computer Science and Software Engineering. It is intended that the course should help students in practically applying appropriate research methods in different courses (in addition to the RM course), including their Master's theses. However, it is believed that deficiencies exist in the course, due to which the course implementation (learning and assessment activities) as well as the performance of the different participants (students, teachers, and evaluators) are affected negatively. In this article our aim is to investigate potential deficiencies in the RM course at BTH, in order to provide concrete evidence of the deficiencies faced by students, evaluators, and teachers in the course. Additionally, we suggest recommendations for resolving the identified deficiencies. Our findings, gathered through semi-structured interviews with students, teachers, and evaluators in the course, are presented in this article. By identifying a total of twenty-one deficiencies from different perspectives, we found that critical deficiencies exist at different levels within the course. Furthermore, in order to overcome the identified deficiencies, we suggest seven recommendations that may be implemented at different levels within the course and the study program. Our suggested recommendations, if implemented, will help in resolving deficiencies in the course, which may lead to improved teaching and learning in the RM course at BTH.

  • 124.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Electronic Waybill Solutions: A Systematic Review. In: Journal of Special Topics in Information Technology and Management, ISSN 1385-951X, E-ISSN 1573-7667. Article in journal (Other academic)
    Abstract [en]

    A critical component in freight transportation is the waybill, a transport document that contains essential information about a consignment. Actors within the supply chain handle not only the freight but also vast amounts of information, which are often unclear due to various errors. An electronic waybill (e-Waybill) solution replaces the paper waybill in a better way, e.g., by ensuring error-free storage and flow of information. In this paper, a systematic review using the snowball method is conducted to investigate the state of the art of e-Waybill solutions. After performing three iterations of the snowball process, we identified eleven studies for further evaluation and analysis due to their strong relevance. The studies are mapped in relation to each other, and a classification of the e-Waybill solutions is constructed. Most of the studies identified in our review support the benefits of electronic documents, including e-Waybills. Typically, most of the research papers reviewed support EDI (Electronic Data Interchange) for implementing e-Waybills. However, limitations exist due to high costs that make it less affordable for small organizations; recent studies point to alternative technologies, which we list in this paper. Additionally, we present from our research that most studies focus on the administrative benefits, but few studies investigate the potential of e-Waybill information for achieving services such as estimated time of arrival and real-time tracking and tracing.

  • 125.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Freight transport prediction using electronic waybills and machine learning. 2014. In: 2014 International Conference on Informative and Cybernetics for Computational Social Systems, IEEE Computer Society, 2014, p. 128-133. Conference paper (Refereed)
    Abstract [en]

    A waybill is a document that accompanies freight during transportation. The document contains essential information such as the origin and destination of the freight, the involved actors, and the type of freight being transported. We believe the information from a waybill, when presented in an electronic format, can be utilized for building knowledge about freight movement. This knowledge may be helpful for decision makers, e.g., freight transport companies and public authorities. In this paper, the results from a study of a Swedish transport company are presented, using order data from a customer ordering database, which is, to a large extent, similar to the information present in paper waybills. We have used the order data for predicting the type of freight moving between a particular origin and destination. Additionally, we have evaluated a number of different machine learning algorithms based on their prediction performance. The evaluation was based on their weighted average true-positive and false-positive rates, weighted average area under the curve, and weighted average recall values. We conclude from the results that the data from a waybill, when available in an electronic format, can be used to improve knowledge about freight transport. Additionally, we conclude that among the algorithms IBk, SMO, and LMT, IBk performed best, predicting the highest number of classes with higher weighted average values for true-positive rate, false-positive rate, and recall.

  • 126.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Holmgren, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A Data Mining Based Method for Route and Freight Estimation. 2015. In: Procedia Computer Science, Elsevier, 2015, Vol. 52, p. 396-403. Conference paper (Refereed)
    Abstract [en]

    We present a method which makes use of historical vehicle data and current vehicle observations in order to estimate 1) the route a vehicle has used and 2) the freight the vehicle carried along the estimated route. The method includes a learning phase and an estimation phase. In the learning phase, historical data about the movement of a vehicle and about the consignments allocated to the vehicle are used to build estimation models: one for route choice and one for freight allocation. In the estimation phase, the generated estimation models are used, together with a sequence of observed positions for the vehicle, as input to generate route and freight estimates. We have partly evaluated our method in an experimental study involving a medium-size Swedish transport operator. The results of the study indicate that supervised learning, in particular the algorithm Naive Bayes Multinomial Updatable, shows good route estimation performance even when a significant amount of information about where the vehicle has traveled is missing. For the freight estimation, we used a method based on averaging the consignments of the historically known trips for the estimated route. We argue that the proposed method might contribute to building improved knowledge, e.g., in national road administrations, on the movement of trucks and freight.
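
    A rough sketch of the estimation idea: an updatable multinomial Naive Bayes model is trained on historical trips, with road-segment counts as features and the chosen route as label. scikit-learn's MultinomialNB stands in for the Weka algorithm named above, and all identifiers are invented:

    ```python
    # Illustrative sketch: route estimation from observed positions with an
    # incrementally updatable multinomial Naive Bayes model. Road segments,
    # trips and routes are hypothetical.
    import numpy as np
    from sklearn.naive_bayes import MultinomialNB

    SEGMENTS = ["s1", "s2", "s3", "s4"]          # road-segment vocabulary

    def to_counts(observed):
        return np.array([[observed.count(s) for s in SEGMENTS]])

    model = MultinomialNB()
    # Learning phase: historical trips arrive incrementally (partial_fit).
    model.partial_fit(to_counts(["s1", "s2", "s4"]), ["route_A"],
                      classes=["route_A", "route_B"])
    model.partial_fit(to_counts(["s1", "s3"]), ["route_B"])

    # Estimation phase: a partial observation sequence (some positions missing).
    print(model.predict(to_counts(["s2", "s4"])))   # -> likely 'route_A'
    ```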

  • 127.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Holmgren, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Persson, Jan A.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Technical Requirements of the e-Waybill Service. 2016. In: International Journal of Computer and Communication Engineering, ISSN 2010-3743, Vol. 5, no 2, p. 130-140. Article in journal (Refereed)
    Abstract [en]

    An electronic waybill (e-Waybill) is a service whose purpose is to replace the paper waybill, the paper document that traditionally follows a consignment during transport. An important purpose of the e-Waybill is to achieve a paperless flow of information during freight transport. In this paper, we investigate five e-Waybill solutions, that is, system design specifications for the e-Waybill, regarding their non-functional (technical) requirements. In addition, we discuss how well existing technologies are able to fulfil the identified requirements. We have identified that information storage, synchronization and conflict management, access control, and communication are important categories of technical requirements of the e-Waybill service. We argue that the identified technical requirements can be used to support the process of designing and implementing the e-Waybill service.

  • 128.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Mbiydzenyuy, Gideon
    Netport Science Park, Karlshamn.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A Simulation Study of the Electronic Waybill Service. 2015. In: Proceedings - EMS 2015: UKSim-AMSS 9th IEEE European Modelling Symposium on Computer Modelling and Simulation / [ed] David Al-Dabas, Gregorio Romero, Alessandra Orsoni, Athanasios Pantelous, IEEE Computer Society, 2015, p. 307-312. Conference paper (Refereed)
    Abstract [en]

    We present results from a simulation study, which was designed for investigating the potential positive impacts, i.e., the invoicing and processing time, and financial savings, when using an electronic waybill instead of paper waybills for road-based freight transportation. The simulation model is implemented in an experiment for three different scenarios, where the processing time for waybills at the freight loading and unloading locations in a particular scenario differs from the other scenarios. The results indicate that a saving of 65%–99% in the invoicing time can be achieved when using an electronic waybill instead of paper waybills. Our study can be helpful to decision makers, e.g., managers and staff dealing with paper waybills, to estimate the potential benefits when making decisions concerning the implementation of an electronic waybill solution for replacing paper waybills.
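
    A toy sketch of the kind of comparison the simulation makes, with invented processing times; the thesis's simulation model is more detailed:

    ```python
    # Illustrative sketch: estimate invoicing-time savings by simulating the
    # per-waybill processing time for paper vs electronic handling.
    # All times are invented placeholders.
    import random

    random.seed(1)

    def invoicing_time(process_time_s, n_waybills=200):
        # total time until all waybills are processed and can be invoiced
        return sum(random.uniform(0.5, 1.5) * process_time_s
                   for _ in range(n_waybills))

    paper = invoicing_time(process_time_s=300)       # manual handling
    electronic = invoicing_time(process_time_s=15)   # automated handling

    saving = 100 * (paper - electronic) / paper
    print(f"Estimated invoicing-time saving: {saving:.0f}%")
    ```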

  • 129.
    Bala, Jaswanth
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Filtering estimated series of residential burglaries using spatio-temporal route calculations. 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. According to the Swedish National Council for Crime Prevention, residential burglary crimes in Sweden have increased by 19% over the last decade, and only 5% of the total crimes reported were actually solved by the law enforcement agencies. In order to solve these cases quickly and efficiently, the law enforcement agencies have to look into possible linked serial crimes. Many studies have suggested linking crimes based on Modus Operandi and other characteristics. Sometimes crimes between which travel is not spatially possible within the reported times, but which have a similar Modus Operandi, are also grouped as linked crimes. Investigating such crimes could waste the resources of the law enforcement agencies.

    Objectives. In this study, we investigate the possibility of using travel distance and travel duration between different crime locations when linking residential burglary crimes. A filtering method has been designed and implemented for filtering unlinked crimes out of the estimated linked crimes by utilizing the distance and duration values.

    Methods. The objectives of this study are satisfied by conducting an experiment. The travel distance and travel duration values are obtained from various online direction services. The filtering method was first validated on ground truth, represented by known linked crime series, and then used to filter out crimes from the estimated linked crimes.

    Results. The filtering method removed a total of 4% unlinked crimes from the estimated linked crime series when the travel mode was driving, and a total of 23% unlinked crimes when the travel mode was walking. It was also found that a burglar takes an average of 900 seconds (15 minutes) to commit a burglary.

    Conclusions. From this study it is evident that the use of spatial and temporal values in linking residential burglaries yields effective crime links in a series. Also, the use of Google Maps for obtaining distance and duration values can increase the overall performance of the filtering method in linking crimes.
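
    The core filtering rule can be sketched as a feasibility check: a pair of crimes can only be linked if the offender could have travelled between the scenes, and committed the first burglary, within the reported time gap. The 900-second dwell time comes from the results above; the function and its values are otherwise invented:

    ```python
    # Illustrative sketch of the spatio-temporal filter: a pair of crimes in an
    # estimated series is kept only if the offender could have travelled between
    # the scenes within the reported time window. travel_seconds would come from
    # a directions service (e.g. Google Maps); here it is a stub.
    DWELL_SECONDS = 900   # average time to commit a burglary (from the thesis)

    def travel_seconds(origin, destination, mode="driving"):
        return 1200       # stub: replace with a directions-service lookup

    def feasible_link(crime_a, crime_b):
        """crime_x = (location, reported_unix_time); True if reachable in time."""
        gap = abs(crime_b[1] - crime_a[1])
        needed = travel_seconds(crime_a[0], crime_b[0]) + DWELL_SECONDS
        return gap >= needed

    print(feasible_link(("loc1", 0), ("loc2", 3600)))   # True: 1 h gap suffices
    print(feasible_link(("loc1", 0), ("loc2", 1500)))   # False: too tight
    ```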

  • 130.
    Ballesteros, Luis Guillermo Martinez
    et al.
    KTH Royal Inst Technol, Radio Syst Lab RSLab, Stockholm, Sweden..
    Ickin, Selim
    Ericsson Res, Stockholm, Sweden..
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Markendahl, Jan
    KTH Royal Inst Technol, Radio Syst Lab RSLab, Stockholm, Sweden..
    Tollmar, Konrad
    KTH Royal Inst Technol, Radio Syst Lab RSLab, Stockholm, Sweden..
    Wac, Katarzyna
    Univ Copenhagen, DK-1168 Copenhagen, Denmark..
    Energy Saving Approaches for Video Streaming on Smartphone based on QoE Modeling2016In: 2016 13TH IEEE ANNUAL CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE (CCNC), IEEE Communications Society, 2016Conference paper (Refereed)
    Abstract [en]

In this paper, we study the influence of video stalling on QoE. We provide QoE models obtained in realistic scenarios on the smartphone, and we propose energy-saving approaches for smartphones by leveraging the proposed QoE models in relation to energy. Results show that approximately 5 J are saved in a 3-minute video clip, with an acceptable Mean Opinion Score (MOS) level, when video frames are skipped. If video frames are not skipped, it is suggested to avoid freezes during a video stream, as freezes greatly increase the energy waste on smartphones.
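
A minimal sketch of how such a QoE model might drive an energy-saving decision is given below. The MOS threshold is a common acceptability convention and the 5 J figure is taken from the abstract; the decision function itself is an illustration, not the paper's actual model.

```python
# Hypothetical policy sketch: prefer frame skipping over freezing when the
# predicted MOS stays acceptable, since skipping saves energy.

MIN_ACCEPTABLE_MOS = 3.0   # common acceptability threshold on the 1-5 MOS scale
SKIP_SAVING_J = 5.0        # approx. saving per 3-minute clip when skipping

def playback_strategy(mos_skip: float, mos_freeze: float) -> str:
    """Pick a stall-handling strategy from predicted MOS values."""
    if mos_skip >= MIN_ACCEPTABLE_MOS:
        return f"skip frames (saves ~{SKIP_SAVING_J} J per 3-min clip)"
    # Otherwise quality dominates: pick whichever strategy viewers rate higher.
    return "skip frames" if mos_skip >= mos_freeze else "allow freezes"

print(playback_strategy(mos_skip=3.4, mos_freeze=2.8))
```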

  • 131.
    Bandari Swamy Devender, Vamshi Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Adike, Sneha
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Design and Performance of an Event Handling and Analysis Platform for vSGSN-MME event using the ELK stack2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Data logging is the main activity to consider when maintaining a server or database in working condition without errors or failures. Data collection can be automated, so no human presence is necessary. Storing log data for many days and visualizing it has, however, become a significant problem. The SGSN-MME is the main component of the GPRS network and handles all packet-switched data within the mobile operator's network. A lot of log data is generated and stored in file systems on the redundant File Server Boards in the SGSN-MME node. The evolution of the SGSN-MME is taking it from dedicated, purpose-built hardware into virtual machines in the cloud, where virtual file server boards fit very badly. The purpose of this thesis is to provide a better way to store the log data and to add visualization using the ELK stack. Fetching useful information from logs is one of the most important parts of this stack and is done in Logstash using its grok filters and a set of input, filter, and output plug-ins, which help scale this functionality across various kinds of inputs (file, TCP, UDP, Gemfire, stdin, UNIX sockets, web sockets, and even IRC and Twitter, among others), filter them (using groks, grep, date filters, etc.), and finally write the output to Elasticsearch. The research methodology of this thesis is a qualitative approach: a study is carried out comparing the ELK concept with the legacy approach at Ericsson, and a suitable, well-founded solution for storing log data is proposed for the vSGSN-MME node. Performance is evaluated with multiple input providers, and the resulting graphs are analysed. To perform the tests accurately, readings are taken in defined failure scenarios. From the test cases, a plot of the CPU load in vSGSN-MME is provided, which indicates the most promising approach.
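
As a concrete illustration of the pipeline step described above, the Python sketch below parses a log line with a grok-style regular expression and indexes the result into Elasticsearch via the official client (8.x signature). The log format, field names, and index name are assumptions for illustration, not the actual vSGSN-MME log format.

```python
# Sketch of the core pipeline step: parse a log line (as a Logstash grok
# filter would) and index it into Elasticsearch.

import re
from datetime import datetime
from elasticsearch import Elasticsearch  # official Python client, 8.x

# Grok-style pattern for lines like: "2019-03-01 12:00:05 ERROR board3 restart"
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<level>\w+) (?P<board>\S+) (?P<message>.*)"
)

es = Elasticsearch("http://localhost:9200")

def ship_log_line(line: str) -> None:
    match = LOG_PATTERN.match(line)
    if match is None:
        return  # a real pipeline would tag this as a grok parse failure
    doc = match.groupdict()
    doc["@timestamp"] = datetime.strptime(
        doc.pop("timestamp"), "%Y-%m-%d %H:%M:%S"
    ).isoformat()
    es.index(index="vsgsn-mme-logs", document=doc)

ship_log_line("2019-03-01 12:00:05 ERROR board3 restart")
```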

  • 132.
    Barry, Cecilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    ROOTS: What could emerge out of thinking and acting networked roots as design?2017Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

This bachelor's thesis uses ROOTS as a method designed to engage in both thinking and acting inside networks, by creating a hydroponic gardening network. As a designer, one engages in many different fields of design. The most complicated design is designing networks with function, interlaced and embedded in everyday life. This calls for accountability: being accountable for one's decisions and acting on many perspectives when designing. Accountability is designing from somewhere, and being aware of where that somewhere stems from. ROOTS visualizes accountability in a network, since accountability entails thinking and acting inside a network, and by doing so one actively engages in thinking about futures and design as a whole. When asking oneself what could emerge out of thinking and acting networked ROOTS as design, one begins to speculate in matters of vast networked complexity. From observation using methods such as ANT, its technological extension T-ANT, and a study in messiness, information is created; from information, valuing becomes present; from valuing, knowledge grows; from knowledge comes accountability, and the network creates another cycle of ROOTS.

    Keywords: Design, Network, Accountability, Complexity

  • 133.
    Barysau, Mikalai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Developers' performance analysis based on code review data: How to perform comparisons of different groups of developers2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Nowadays more and more IT companies switch to the distributed development model. This trend has a number of advantages and disadvantages, which researchers study through different aspects of modern code development. One such aspect is code review, which is used by many companies and produces a large amount of data. A number of studies describe different data mining and data analysis approaches based on a link between code review data and performance. According to these studies, analysis of code review data can give good insight into development performance and help software companies detect a number of performance issues and improve the quality of their code.

The main goal of this thesis was to collect reported knowledge about code review data analysis and implement a solution that supports such analysis in a real industrial setting.

During the research, the author used multiple research techniques: a snowballing literature review, a case study, and semi-structured interviews.

The results of the research contain a list of code review data metrics extracted from the literature, and a software tool for collecting and visualizing data.

The literature review showed that, among the literature sources related to code review, a relatively small number are related to the topic of this thesis, which exposes a field for future research. Application of the found metrics showed that most of them can be used in the context of the studied environment. Presentation of the results and interviews with the company's representatives showed that the graphic plots are useful for observing trends and correlations in the development of the company's development sites and help the company improve its performance and decision-making process.
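
As an illustration of the kind of metric computation such a tool performs, the sketch below derives two common code-review metrics from review records; the record fields and values are hypothetical, not the metrics catalog from the thesis.

```python
# Illustrative computation of two code-review metrics: review turnaround
# time and comment density. Record fields are invented for this sketch;
# a real tool would pull such records from a code-review system.

from statistics import mean

reviews = [
    {"created_h": 0.0, "merged_h": 26.0, "comments": 4, "changed_lines": 120},
    {"created_h": 3.0, "merged_h": 7.5,  "comments": 1, "changed_lines": 30},
]

turnaround_h = mean(r["merged_h"] - r["created_h"] for r in reviews)
comment_density = mean(r["comments"] / r["changed_lines"] for r in reviews)

print(f"mean turnaround: {turnaround_h:.1f} h")
print(f"comments per changed line: {comment_density:.3f}")
```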

  • 134.
    Baskaravel, Yogaraj
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Implementation and evaluation of global router for Information-Centric Networking2014Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

Context. A huge majority of the current Internet traffic is information dissemination. Information-Centric Networking (ICN) is a future networking paradigm that focuses on global-level information dissemination. In ICN, communication is defined in terms of requesting and providing Named Data Objects (NDOs). NetInf is a future networking architecture based on ICN principles.

Objectives. In this thesis, a global routing solution for ICN has been implemented. The authority part of an NDO's name is mapped to a set of routing hints, each with a priority value. Multiple NDOs can share the same authority part, which provides the first level of aggregation. The routing hints are used to forward a request for an NDO towards a suitable copy of the NDO. The second level of aggregation is achieved by aggregating high-priority routing hints on low-priority routing hints. The performance and scalability of the routing implementation are evaluated with respect to global ICN requirements. Furthermore, some notable challenges in implementing global ICN routing are identified.

Methods. The NetInf global routing solution is implemented by extending NEC's NetInf Router Platform (NNRP). A NetInf testbed is built over the Internet using the extended NNRP implementation, and performance measurements taken from this testbed are discussed in detail in terms of routing scalability.

Results. The performance measurements show that hop-by-hop transport has a significant impact on the overall request forwarding. A notable amount of time is taken for extracting and inserting binary objects, such as routing hints, at each router.

Conclusions. A more suitable hop-by-hop transport mechanism can be evaluated and used with respect to global ICN requirements. The NetInf message structure can be redefined so that binary objects such as routing hints are transmitted more efficiently. Apart from that, the performance of the global routing implementation appears to be reasonable. As the NetInf global routing solution provides two levels of aggregation, it can also scale well.
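
A minimal sketch of the two-level hint scheme described above is given below. The table contents, priorities, and helper names are hypothetical, intended only to show how an authority lookup with prioritized routing hints could drive request forwarding.

```python
# Sketch: the authority part of an NDO name maps to prioritized routing
# hints; a request is forwarded toward the highest-priority hint. Low-
# priority hints aggregate many authorities toward a default region.

# authority -> list of (priority, next_hop) routing hints (hypothetical)
ROUTING_TABLE = {
    "example.org": [(10, "router-a.example.net"), (1, "core-router.net")],
}
DEFAULT_HINTS = [(0, "core-router.net")]

def authority_of(ndo_name: str) -> str:
    # NetInf ni-names look like ni://authority/alg;hash
    return ndo_name.removeprefix("ni://").split("/", 1)[0]

def next_hop(ndo_name: str) -> str:
    hints = ROUTING_TABLE.get(authority_of(ndo_name), DEFAULT_HINTS)
    return max(hints)[1]  # forward toward the highest-priority hint

print(next_hop("ni://example.org/sha-256;abc123"))
```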

  • 135.
    Beer, Armin
    et al.
    BVA and Beer Test Consulting, AUT.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Measuring and improving testability of system requirements in an industrial context by applying the goal question metric approach2018In: Proceedings - International Conference on Software Engineering, IEEE Computer Society , 2018, p. 25-32Conference paper (Refereed)
    Abstract [en]

Testing is subject to two basic constraints, namely cost and quality. The cost depends on the efficiency of the testing activities as well as on their quality and testability. The authors' practical experience with large-scale systems shows that if the requirements are adapted iteratively or the architecture is altered, testability decreases. However, what is often lacking is a root cause analysis of the testability degradations and the introduction of improvement measures during software development. In order to introduce agile practices into the rigid strategy of the V-model, good testability of software artifacts is vital; testability is thus also the bridgehead towards agility. In this paper, we report on a case study in which we measure and improve testability on the basis of the Goal Question Metric approach. © 2018 ACM.

  • 136.
    Begnert, Joel
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tilljander, Rasmus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Combining Regional Time Stepping With Two-Scale PCISPH Method2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Context. In computer graphics, realistic-looking fluid is often desired. Simulating realistic fluids is a time-consuming and computationally expensive task; therefore, much research has been devoted to reducing the simulation time while maintaining realism. Two of the more recent optimization algorithms within particle-based simulations are two-scale simulation and regional time stepping (RTS). Both are based on the predictive-corrective incompressible smoothed particle hydrodynamics (PCISPH) algorithm.

Objectives. These algorithms improve on two separate aspects of PCISPH: two-scale simulation reduces the number of particles, while RTS focuses computational power on the regions of the fluid where it is most needed. In this paper we develop an algorithm combining the two optimizations and investigate its performance.

    Methods. We implemented both of the base algorithms, as well as PCISPH, before combining them. Therefore we had equal conditions for all algorithms when we performed our experiments, which consisted of measuring the time it took to run each algorithm in three different scene configurations.

    Results. Results showed that our combined algorithm on average was faster than the other three algorithms. However, our implementation of two-scale simulation gave results inconsistent with the original paper, showing a slower time than even PCISPH. This invalidates the results for our combined algorithm since it utilizes the same implementation.

    Conclusions. We see that our combined algorithm has potential to speed up fluid simulations, but since the two-scale implementation was incorrect, our results are inconclusive.
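
To illustrate the regional time stepping side of the combination, the sketch below picks a per-region time step from a CFL-like condition, quantized to halvings of a base step so neighbouring regions stay in sync. All constants are hypothetical, and the sketch omits PCISPH's pressure-correction loop entirely.

```python
# Minimal sketch of regional time stepping: regions where particles move
# fast get smaller time steps, focusing computation where it is needed.

H = 0.02         # SPH smoothing radius (hypothetical)
BASE_DT = 0.004  # largest allowed time step
LEVELS = 3       # each level halves the time step

def region_time_step(max_speed: float) -> float:
    """Pick a per-region dt from a CFL-like condition, quantized to
    power-of-two fractions of BASE_DT."""
    cfl_dt = 0.4 * H / max(max_speed, 1e-6)
    dt = BASE_DT
    for _ in range(LEVELS):
        if dt <= cfl_dt:
            break
        dt /= 2.0
    return dt

for speed in (0.5, 4.0, 16.0):
    print(f"max speed {speed}: dt = {region_time_step(speed):.4f}")
```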

  • 137.
    Bengtsson, Daniel
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Melin, Johan
    Constrained procedural floor plan generation for game environments2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Background: Procedural content generation (PCG) has become an important subject as the demand for content in modern games has increased. Paradox Arctic is a game development studio that aims to be at the forefront of technological solutions and is therefore interested in furthering its knowledge of PCG. To this end, Paradox Arctic has expressed interest in a collaborative effort to further explore the subject of procedural floor plan generation.

Objective: The main goal of this work is to test whether a solution based on growth, subdivision, or a combination thereof can be used to procedurally generate believable and varied floor plans for game environments, while also conforming to predefined constraints.

Method: A solution capable of generating floor plans with the use of growth, subdivision, and a combination of both has been implemented, and a survey testing the believability and variation of the generated layouts has been conducted.

Results & Conclusions: While the results of the subdivision and combined solutions show that more work is necessary before the generated content can be considered believable, the growth-based solution presents promising results in terms of believability when generating small to medium-sized layouts. This believability does, however, come at the cost of variation.

  • 138.
    Bengtsson, Filip
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Undin, Philip
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Moderna progression system, en kooperativ regress2015Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
Abstract [en]

Digital games are one of today's largest digital media, with new games released daily. One of the largest game genres is multiplayer First-Person Shooter (FPS) games. Within multiplayer FPS games, where cooperative teamwork plays a major role, a trend has emerged in recent years in which most of these games contain progression systems that can be directly harmful to the game's cooperative experience. By examining the research question "How can we, with the help of progression systems, make modern FPS games more cooperative?" we have tried to determine how game developers can instead increase the cooperative experience in their games. By studying and discussing areas such as systems, human motivation, and cooperative game theory, we have identified how rewards influence a player's behaviour. Building on this, we have thoroughly analysed and broken down two of the market's largest titles in multiplayer FPS games, and we concluded that a well-thought-out progression system can contribute to an increased focus on the cooperative aspect of these games.

  • 139.
    Berg, Wilhelm
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Terränggenerering och dess påverkan på spelupplevelse2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
Abstract [en]

Context. In game design, terrain is often an important aspect, especially in contexts where players actively interact with it. Its shape and design can affect, both positively and negatively, how the player perceives the game.

Objectives. This thesis describes work on terrain generation and on whether terrain can affect the player in an interactive medium, in order to gain a better understanding of the subject. Does terrain influence how players perceive in-game situations and the way they play? Can it affect whether they perceive the experience as negative or positive? What influences a player the most, and how?

Methods. The thesis describes conclusions and working methods together with data collected from a practical test. The design of the game used for testing is also described. In the test, participants play a game that uses an algorithm to generate terrain. After the test, the players answer questions about it.

Results. The testing yields answers that are used to reach conclusions.

Conclusions. From the test results we conclude that terrain can indeed affect the player experience, and that its influence is greatest when it actively affects how the player interacts with the game.

  • 140.
    Bergman, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Jönsson, André
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Physically based rendering: Ur en 3D-grafikers perspektiv2014Independent thesis Basic level (degree of Bachelor)Student thesis
Abstract [en]

This bachelor thesis studies how physically based rendering can affect a 3D artist's work. The aim is to create an understanding of physically based rendering and how this technique might affect a 3D artist's work. To examine the problem area, we created a virtual environment in 3D using physically based rendering. The new workflow was then compared with the former workflow. The study describes the former workflow and how the work has changed with physically based rendering. The thesis also covers the pros, cons, and implications of working with physically based rendering.

  • 141.
    Bergman Martinkauppi, Louise
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    He, Qiuping
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance Evaluation and Comparison of Standard Cryptographic Algorithms and Chinese Cryptographic Algorithms2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Background. China regulates the import, export, sale, and use of encryption technology. If a foreign company wants to develop or release a product in China, it needs to report its use of any encryption technology to the Office of State Commercial Cryptography Administration (OSCCA) to gain approval. SM2, SM3, and SM4 are cryptographic standards published by OSCCA and authorized for use in China. To comply with Chinese cryptography laws, organizations and companies may have to replace standard cryptographic algorithms in their systems with Chinese cryptographic algorithms such as SM2, SM3, and SM4. It is important to know beforehand how the replacement of algorithms will impact performance, in order to determine future system costs.

Objectives. Perform a theoretical study and performance comparison of standard cryptographic algorithms and Chinese cryptographic algorithms. The standard cryptographic algorithms studied are RSA, ECDSA, SHA-256, and AES-128; the Chinese cryptographic algorithms studied are SM2, SM3, and SM4.

Methods. A literature analysis was conducted to gain knowledge and collect information about the selected cryptographic algorithms in order to make a theoretical comparison. An experiment was conducted to measure how the algorithms perform and to be able to rate them.

Results. The literature analysis provides a comparison that identifies design similarities and differences between the algorithms. The controlled experiment provides measurements of the algorithms on the metrics mentioned in the objectives.

Conclusions. The digital signature algorithms SM2 and ECDSA have similar designs and also similar performance. SM2 and RSA have fundamentally different designs, and SM2 performs better than RSA when generating keys and signatures; when verifying signatures, RSA shows comparable performance in some cases and worse performance in others. The hash algorithms SM3 and SHA-256 have many design similarities, but SHA-256 performs slightly better than SM3. AES-128 and SM4 have many similarities but also a few differences; in the controlled experiment, AES-128 outperforms SM4 by a significant margin.
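
For flavor, the sketch below shows a micro-benchmark in the spirit of the controlled experiment for two of the standard algorithms; the Chinese algorithms could be timed identically given an SM3/SM4 implementation (for example the third-party gmssl Python package, named here only as an assumption). Buffer size and iteration count are arbitrary and do not reproduce the thesis setup.

```python
# Micro-benchmark sketch: throughput of SHA-256 (hashlib) and AES-128-CBC
# (the `cryptography` package) on the same payload.

import hashlib
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

PAYLOAD = os.urandom(1 << 20)  # 1 MiB of random data (multiple of 16 bytes)
ITERATIONS = 100

def bench(label, fn):
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: {ITERATIONS * len(PAYLOAD) / elapsed / 1e6:.1f} MB/s")

key, iv = os.urandom(16), os.urandom(16)

def aes_128_cbc():
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    enc.update(PAYLOAD)
    enc.finalize()

bench("SHA-256", lambda: hashlib.sha256(PAYLOAD).digest())
bench("AES-128-CBC", aes_128_cbc)
```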

  • 142.
    Bergsten, John
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Öhman, Konrad
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Player Analysis in Computer Games Using Artificial Neural Networks2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Star Vault AB is a video game development company that has developed the video game Mortal Online. The company has stated that they believe players new to the game repeatedly find themselves lost in it. The objective of this study is to evaluate whether an Artificial Neural Network can be used to detect when a player is lost in Mortal Online. This is done using the free open-source library Fast Artificial Neural Network Library (FANN). People were invited to a data-collection event where they played a tweaked version of the game that facilitated data collection. Players specified whether they were lost or not, and the collected data was flagged accordingly. The collected data was then prepared with different parameters to be used when training multiple Artificial Neural Networks. When creating an Artificial Neural Network, several parameters have an impact on its performance; here, performance is defined as the balance of high prediction accuracy against a low false-positive rate. Suitable parameter values vary with the purpose of the Artificial Neural Network. A quantitative approach is followed in which these parameters are varied to investigate which values result in the Artificial Neural Network that best identifies when a player is lost. The parameters are grouped into stages, where all combinations of parameter values within each stage are evaluated, to reduce the number of Artificial Neural Networks that have to be trained; the best-performing parameter values of each stage are used in subsequent stages. The result is a set of parameter values that is as close to ideal as possible. These parameter values are then altered one at a time to verify that they are ideal. The results show that a set of parameters exists that can optimize the Artificial Neural Network model to identify when a player is lost, although not with the high performance that was hoped for. It is theorized that the ambiguity of the word "lost" and the complexity of the game are critical factors behind the low performance.
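
The staged search described above can be sketched as follows. The stages, parameter values, and the train_and_score stub (which would train a FANN network and score it on accuracy versus false positives) are hypothetical, not the thesis' actual configuration.

```python
# Sketch of a staged parameter search: within each stage, all combinations
# of that stage's parameters are evaluated, and the winners are fixed for
# later stages, shrinking the total number of networks trained.

from itertools import product

STAGES = [
    {"hidden_neurons": [8, 16, 32], "hidden_layers": [1, 2]},
    {"learning_rate": [0.1, 0.3, 0.7]},
    {"epochs": [500, 2000]},
]

def train_and_score(params: dict) -> float:
    """Stub: train an ANN with these parameters and return accuracy
    penalized by the false-positive rate (higher is better)."""
    raise NotImplementedError("train with FANN and evaluate here")

best: dict = {}
for stage in STAGES:
    names = list(stage)
    candidates = (
        {**best, **dict(zip(names, combo))}
        for combo in product(*(stage[n] for n in names))
    )
    best = max(candidates, key=train_and_score)
print("selected parameters:", best)
```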

  • 143.
    Berntsson, Fredrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Schengensamarbetet – Europas dröm2014Student thesis
Abstract [en]

This essay clarifies what the Schengen cooperation is, why it exists, and how it works. The essay goes through all parts of the cooperation, which for the most part consists of abolishing checks on persons at the borders between the member states.

  • 144.
    Berntsson Svensson, Richard
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Bengtsson, PerOlof
    Widerberg, Jonas
    Telenor, SWE.
    BAM: backlog assessment method2019In: Lect. Notes Bus. Inf. Process., Springer Verlag , 2019, Vol. 355, p. 53-68Conference paper (Refereed)
    Abstract [en]

The necessity of software, both as stand-alone products and as central parts of non-traditional software products, has changed how software products are developed. It started with the introduction of the agile manifesto and has resulted in a change in how software process improvement (SPI) is conducted. Although there are agile SPI methods and several agile practices for evaluating and improving current processes and ways of working, no method or practice for evaluating the backlog exists. To address this gap, the Backlog Assessment Method (BAM) was developed and applied in collaboration with Telenor Sweden. BAM enables agile organizations to assess backlogs and to assure that backlog items are good enough for their needs and well aligned with the decision process. The results from the validation show that BAM is feasible and relevant in an industrial environment, and they indicate that BAM is useful as a tool to perform analysis of items in a specific backlog. © The Author(s) 2019.

  • 145.
    Berntsson Svensson, Richard
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Regnell, Björn
    Lunds universitet, SWE.
    Is role playing in Requirements Engineering Education increasing learning outcome?2017In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 22, no 4, p. 475-489Article in journal (Refereed)
    Abstract [en]

Requirements Engineering has attracted a great deal of attention from researchers and practitioners in recent years. This increasing interest requires academia to provide students with a solid foundation in the subject matter. In Requirements Engineering Education (REE), it is important to cover three fundamental topics: traditional analysis and modeling skills, interviewing skills for requirements elicitation, and writing skills for specifying requirements. REE papers report on using role playing as a pedagogical tool; however, there is a surprising lack of empirical evidence on its utility. In this paper we investigate whether a higher grade in a role-playing project has an effect on students' scores in an individual written exam in a Requirements Engineering course. Data were collected from 412 students between 2007 and 2014 at Lund University and Chalmers | University of Gothenburg. The results show that students who received a higher grade in the role-playing project scored statistically significantly higher in the written exam than students with a lower role-playing project grade. © 2016 Springer-Verlag London

  • 146.
    Berntsson Svensson, Richard
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Taghavianfar, Maryam
    Selecting creativity techniques for creative requirements: An evaluation of four techniques using creativity workshops2015In: 2015 IEEE 23RD INTERNATIONAL REQUIREMENTS ENGINEERING CONFERENCE (RE), IEEE, 2015, p. 66-75Conference paper (Refereed)
    Abstract [en]

Requirements engineering is recognized as a creative process in which stakeholders jointly discover new creative ideas for innovative and novel products that are eventually expressed as requirements. This paper evaluates four different creativity techniques, namely Hall of Fame, Constraint Removal, Brainstorming, and Idea Box, using creativity workshops with students and industry practitioners. In total, 34 creativity workshops were conducted with 90 students from two universities and 86 industrial practitioners from six companies. The results from this study indicate that Brainstorming can generate by far the most ideas, while Hall of Fame generates the most creative ideas. Idea Box generates the fewest ideas and the fewest creative ideas. Finally, Hall of Fame was the technique that led to the largest number of requirements that were included in future releases of the products.

  • 147.
    Bertoni, Alessandro
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mechanical Engineering.
    Dasari, Siva Krishna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Hallstedt, Sophie
    Blekinge Institute of Technology, Faculty of Engineering, Department of Strategic Sustainable Development.
    Petter, Andersson
    GKN Aerospace Systems , SWE.
    Model-based decision support for value and sustainability assessment: Applying machine learning in aerospace product development2018In: DS92: Proceedings of the DESIGN 2018 15th International Design Conference / [ed] Marjanović D., Štorga M., Škec S., Bojčetić N., Pavković N, The Design Society, 2018, Vol. 6, p. 2585-2596Conference paper (Refereed)
    Abstract [en]

This paper presents a prescriptive approach toward the integration of value and sustainability models in an automated decision support environment enabled by machine learning (ML). The approach allows the concurrent multidimensional analysis of design cases, complementing mechanical simulation results with value and sustainability assessments. ML makes it possible to deal with both qualitative and quantitative data and to create surrogate models for quicker design-space exploration. The approach has been developed and preliminarily implemented in collaboration with a major aerospace sub-system manufacturer.
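
A minimal sketch of the surrogate-model idea is given below, using scikit-learn as an illustrative stand-in and a synthetic function in place of the mechanical simulation; nothing here reflects the authors' actual models or design variables.

```python
# Sketch of surrogate-assisted design-space exploration: fit a fast
# regressor on a handful of expensive simulation results, then explore
# the design space by querying the surrogate instead.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Synthetic stand-in for a mechanical simulation: score per design."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# A small sample of simulated designs (two design variables each).
X_train = rng.uniform(0, 1, size=(40, 2))
y_train = expensive_simulation(X_train)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

# Cheap exploration: score 10,000 candidate designs with the surrogate.
candidates = rng.uniform(0, 1, size=(10_000, 2))
best = candidates[np.argmax(surrogate.predict(candidates))]
print("most promising design variables:", best)
```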

  • 148.
    Bertoni, Alessandro
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mechanical Engineering. Blekinge Institute of Technology.
    Hallstedt, Sophie
    Blekinge Institute of Technology, Faculty of Engineering, Department of Strategic Sustainable Development. Blekinge Institute of Technology.
    Dasari, Siva Krishna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge Institute of Technology.
    Andersson, Petter
    GKN Aerospace Engine Systems, SWE.
    Integration of Value and Sustainability Assessment in Design Space Exploration by Machine Learning: An Aerospace Application2019In: Design ScienceArticle in journal (Refereed)
  • 149.
    Betz, Stefanie
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Andreas, Oberweis
    Rolf, Stephan
    Knowledge transfer in offshore outsourcing software development projects: an analysis of the challenges and solutions from German clients2014In: Expert systems (Print), ISSN 0266-4720, E-ISSN 1468-0394, Vol. 31, no 3Article in journal (Refereed)
    Abstract [en]

    Knowledge transfer is a critical factor in ensuring the success of offshore outsourcing software development projects and is, in many cases, neglected. Compared to in-house or co-located projects, however, such globally distributed projects feature far greater complexity. In addition to language barriers, factors such as cultural differences, time zone variance, distinct methods and practices, as well as unique equipment and infrastructure can all lead to problems that negatively impact knowledge transfer, and as a result, a project's overall success. In order to help minimise such risks to knowledge transfer, we conducted a research study based on expert interviews in six projects. Our study used German clients and focused on offshore outsourcing software development projects. We first identified known problems in knowledge transfer that can occur with offshore outsourcing projects. Then we collected best-practice solutions proven to overcome the types of problems described. Afterward, we conducted a follow-up study to evaluate our findings. In this subsequent stage, we presented our findings to a different group of experts in five projects and asked them to evaluate these solutions and recommendations in terms of our original goal, namely to find ways to minimise knowledge-transfer problems in offshore outsourcing software development projects. Thus, the result of our study is a catalog of evaluated solutions and associated recommendations mapped to the identified problem areas.

  • 150. Beyene, Ayne A.
    et al.
    Welemariam, Tewelle
    Persson, Marie
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Improved concept drift handling in surgery prediction and other applications2015In: Knowledge and Information Systems, ISSN 0219-1377, Vol. 44, no 1, p. 177-196Article in journal (Refereed)
    Abstract [en]

The article presents a new algorithm for handling concept drift: the Trigger-based Ensemble (TBE), which is designed to handle concept drift in surgery prediction but is shown to perform well for other classification problems as well. In primary care, queries about the need for surgical treatment are referred to a specialist surgeon. In secondary care, referrals are reviewed by a team of specialists. The possible outcomes of this review are that the referral: (i) is canceled, (ii) needs to be complemented, or (iii) is predicted to lead to surgery. In the third case, the referred patient is scheduled for an appointment with a specialist surgeon. This article focuses on the binary prediction of case three (surgery prediction). The guidelines for the referral and for the review of the referral change due to, e.g., scientific developments and clinical practices. Existing decision support is based on the expert-systems approach, which usually requires manual updates when changes in clinical practice occur. In order to automatically revise decision rules, the occurrence of concept drift (CD) must be detected and handled. Existing CD handling techniques are often specialized; it is challenging to develop a more generic technique that performs well regardless of CD type. Experiments are conducted to measure the impact of CD on prediction performance and to reduce CD impact. The experiments evaluate and compare TBE to three existing CD handling methods (AWE, Active Classifier, and Learn++) on one real-world dataset and one artificial dataset. TBE significantly outperforms the other algorithms on both datasets but is less accurate on noisy synthetic variations of the real-world dataset.
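
The general shape of a trigger-based ensemble can be sketched as below. This is not the published TBE algorithm, only an illustration of the idea that a drift trigger (here, a rolling error rate) causes a new member to be trained on recent data; window size and threshold are arbitrary.

```python
# Sketch of a trigger-based ensemble for concept drift: a rolling error
# rate acts as the drift trigger; when it fires, a new member is trained
# on recent data and added to a majority-voting ensemble.

from collections import Counter, deque

class TriggerEnsemble:
    def __init__(self, train_fn, window=200, threshold=0.3):
        self.train_fn = train_fn          # returns a fitted classifier
        self.errors = deque(maxlen=window)
        self.recent = deque(maxlen=window)
        self.threshold = threshold
        self.members = []

    def predict(self, x):
        votes = Counter(m.predict(x) for m in self.members)
        return votes.most_common(1)[0][0]

    def observe(self, x, y_true):
        """Process one labelled example from the stream."""
        if self.members:
            self.errors.append(int(self.predict(x) != y_true))
        self.recent.append((x, y_true))
        drifting = (
            len(self.errors) == self.errors.maxlen
            and sum(self.errors) / len(self.errors) > self.threshold
        )
        if drifting or not self.members:
            self.members.append(self.train_fn(list(self.recent)))
            self.errors.clear()  # reset the trigger after retraining
```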
