Results 51 - 100 of 165
  • 51.
    Gaddam, Yeshwanth Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Sales Forecasting of Truck Components using Neural Networks, 2020. Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background: Sales forecasting plays a substantial role in identifying future sales trends of products in any organization. These forecasts are also important for determining profitable retail operations to meet customer demand, maintaining storage levels, and identifying probable losses.

    Objectives: This study investigates appropriate machine learning algorithms for forecasting the sales of truck components, conducts experiments to forecast sales with the selected algorithms, and evaluates the performance of the models using metrics obtained from the literature review.

    Methods: Initially, a literature review was performed to identify machine learning methods suitable for forecasting the sales of truck components; based on the results obtained, several experiments were then conducted to evaluate the performance of the chosen models.

    Results: Based on the literature review, Multilayer Perceptron (MLP), Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) were selected for forecasting the sales of truck components, and the experiments showed that LSTM performed well compared to MLP and RNN for predicting sales.

    Conclusions: From this research, it can be stated that LSTM models complex nonlinear functions better than MLP and RNN for the chosen dataset. Hence, LSTM is chosen as the ideal model for predicting sales of truck components.
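
    Whatever model is chosen, series-based forecasters of this kind share one preprocessing step: framing the sales series as supervised (window, next value) pairs. A minimal sketch of that step, with invented data; the thesis' own dataset and models are not reproduced.

```python
# Hypothetical illustration: frame a sales series as supervised learning
# pairs, the preprocessing step shared by MLP, RNN and LSTM forecasters.

def make_windows(series, window=3):
    """Return (inputs, targets): each input is `window` consecutive values,
    the target is the value that follows them."""
    inputs, targets = [], []
    for i in range(len(series) - window):
        inputs.append(series[i:i + window])
        targets.append(series[i + window])
    return inputs, targets

sales = [112, 118, 132, 129, 121, 135, 148, 148]  # invented monthly sales
X, y = make_windows(sales, window=3)
print(X[0], y[0])  # -> [112, 118, 132] 129
```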

    Download full text (pdf)
    Sales Forecasting of ...
  • 52.
    García Martín, Eva
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Energy Efficiency in Machine Learning: Approaches to Sustainable Data Stream Mining, 2020. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Energy efficiency in machine learning explores how to build machine learning algorithms and models with low computational and power requirements. Although energy consumption is starting to gain interest in the field of machine learning, the majority of solutions still focus on obtaining the highest predictive accuracy, without a clear focus on sustainability.

    This thesis explores green machine learning, which builds on green computing and computer architecture to design sustainable and energy efficient machine learning algorithms. In particular, we investigate how to design machine learning algorithms that automatically learn from streaming data in an energy efficient manner.

    We first illustrate how energy can be measured in the context of machine learning, in the form of a literature review and a procedure to create theoretical energy models. We use this knowledge to analyze the energy footprint of Hoeffding trees, presenting an energy model that maps the number of computations and memory accesses to the main functionalities of the algorithm. We also analyze the hardware events correlated with the execution of the algorithm, its functions and its hyperparameters.

    The final contribution of the thesis is showcased by two novel extensions of Hoeffding tree algorithms, the Hoeffding tree with nmin adaptation and the Green Accelerated Hoeffding Tree. These solutions reduce energy consumption by twenty and thirty percent, respectively, with minimal effect on accuracy. This is achieved by setting an individual splitting criterion for each branch of the decision tree, spending more energy on the fast-growing branches and saving energy on the rest.

    This thesis shows the importance of evaluating energy consumption when designing machine learning algorithms, proving that we can design more energy efficient algorithms and still achieve competitive accuracy results.
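
    The "energy model that maps computations and memory accesses" idea above can be sketched in a few lines. The cost constants here are invented placeholders, not values from the thesis; real models calibrate them per platform.

```python
# Toy version of a theoretical energy model: total energy as computations
# plus memory accesses, each weighted by a per-operation cost. The constants
# are invented placeholders; memory accesses are assumed roughly an order of
# magnitude more expensive than arithmetic operations.

def energy_estimate(n_computations, n_mem_accesses,
                    e_comp=1e-9, e_mem=1e-8):
    """Estimated energy in joules for a given operation mix."""
    return n_computations * e_comp + n_mem_accesses * e_mem

# One million computations plus one hundred thousand memory accesses:
estimate = energy_estimate(1_000_000, 100_000)
```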

    Download full text (pdf)
    Spikblad
    Download full text (pdf)
    fulltext
  • 53.
    García Martín, Eva
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Rodrigues, Crefeda Faviola
    University of Manchester, GBR.
    Riley, Graham
    University of Manchester, GBR.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Estimation of energy consumption in machine learning, 2019. In: Journal of Parallel and Distributed Computing, ISSN 0743-7315, E-ISSN 1096-0848, Vol. 134, p. 75-88. Article in journal (Refereed).
    Abstract [en]

    Energy consumption has been widely studied in the computer architecture field for decades. While the adoption of energy as a metric in machine learning is emerging, the majority of research is still primarily focused on obtaining high levels of accuracy without any computational constraint. We believe that one reason for this lack of interest is researchers' limited familiarity with approaches to evaluating energy consumption. To address this challenge, we present a review of the different approaches to estimating energy consumption in general, and in machine learning applications in particular. Our goal is to provide useful guidelines to the machine learning community, giving them the fundamental knowledge to use and build specific energy estimation methods for machine learning algorithms. We also present the latest software tools that give energy estimation values, together with two use cases that enhance the study of energy consumption in machine learning.

    Download full text (pdf)
    fulltext
  • 54.
    García-Martín, Eva
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Bifet, Albert
    Télécom ParisTech.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Energy Modeling of Hoeffding Tree Ensembles. In: Intelligent Data Analysis, ISSN 1088-467X, E-ISSN 1571-4128. Article in journal (Refereed).
    Abstract [en]

    Energy consumption reduction has been an increasing trend in machine learning over the past few years due to its socio-ecological importance. In new challenging areas such as edge computing, energy consumption and predictive accuracy are key variables during algorithm design and implementation. State-of-the-art ensemble stream mining algorithms are able to create highly accurate predictions at a substantial energy cost. This paper introduces the nmin adaptation method to ensembles of Hoeffding tree algorithms, to further reduce their energy consumption without sacrificing accuracy. We also present extensive theoretical energy models of such algorithms, detailing their energy patterns and how nmin adaptation affects their energy consumption. We have evaluated the energy efficiency and accuracy of the nmin adaptation method on five different ensembles of Hoeffding trees, using 11 publicly available datasets. The results show that we are able to reduce energy consumption significantly, by 21% on average, while affecting accuracy by less than one percent on average.

  • 55.
    García-Martín, Eva
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Bifet, Albert
    Télécom Paris.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Green Accelerated Hoeffding Tree. Manuscript (preprint) (Other academic).
    Abstract [en]

    For the past years, the main concern in machine learning has been to create highly accurate models, without considering the high computational requirements involved. Stream mining algorithms are able to produce highly accurate models in real time without strong computational demands. This is the case of the Hoeffding tree algorithm. Recent extensions to this algorithm, such as the Extremely Fast Decision Tree (EFDT), focus on increasing predictive accuracy, but at the cost of higher energy consumption. This paper presents the Green Accelerated Hoeffding Tree (GAHT) algorithm, which achieves the same levels of accuracy as the latest EFDT while reducing its energy consumption by 27 percent.

  • 56.
    García-Martín, Eva
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Energy-Aware Very Fast Decision Tree. In: Journal of Data Science and Analytics, ISSN 2364-415X. Article in journal (Refereed).
    Abstract [en]

    Recently, machine learning researchers have been designing algorithms that can run in embedded and mobile devices, which introduces additional constraints compared to traditional algorithm design approaches. One of these constraints is energy consumption, which directly translates to battery capacity for these devices. Streaming algorithms, such as the Very Fast Decision Tree (VFDT), are designed to run in such devices due to their high velocity and low memory requirements. However, they have not been designed with a focus on energy efficiency. This paper addresses this challenge by presenting the nmin adaptation method, which reduces the energy consumption of the VFDT algorithm with only minor effects on accuracy. nmin adaptation allows the algorithm to grow faster in those branches where there is more confidence to create a split, and delays the split on the less confident branches. This removes unnecessary computations related to checking for splits but maintains similar levels of accuracy. We have conducted extensive experiments on 29 public datasets, showing that the VFDT with nmin adaptation consumes up to 31% less energy than the original VFDT, and up to 96% less energy than the CVFDT (VFDT adapted for concept drift scenarios), trading off up to 1.7 percent of accuracy.
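
    The mechanism described above rests on the Hoeffding bound used by the VFDT family: a split is statistically safe once the bound falls below the gap between the best and second-best split gains, so the split check can be scheduled around the n where that happens. The sketch below is a simplified reading of that idea, not the paper's implementation.

```python
import math

# Hoeffding bound for a statistic with range R, confidence 1 - delta, after
# n examples; and the nmin-style calculation: the smallest n at which the
# bound drops below an observed gain gap, so split checks before that n can
# be skipped. Simplified illustration, not the paper's exact method.

def hoeffding_bound(R, delta, n):
    """epsilon = sqrt(R^2 * ln(1/delta) / (2 * n))"""
    return math.sqrt(R * R * math.log(1.0 / delta) / (2.0 * n))

def nmin_for_gap(R, delta, gap):
    """Smallest n with hoeffding_bound(R, delta, n) < gap."""
    return math.floor(R * R * math.log(1.0 / delta) / (2.0 * gap * gap)) + 1

n = nmin_for_gap(R=1.0, delta=1e-7, gap=0.1)
assert hoeffding_bound(1.0, 1e-7, n) < 0.1       # safe to split at n
assert hoeffding_bound(1.0, 1e-7, n - 1) >= 0.1  # not one example earlier
```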

  • 57.
    Ginka, Anusha
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Salapu, Venkata Satya Sameer
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Optimization of Packet Throughput in Docker Containers, 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Container technology has gained popularity in recent years, mainly because it enables a fast and easy way to package, distribute and deploy applications and services. Latency and throughput have a high impact on user satisfaction in many real-time, critical and large-scale online services. Although the use of microservices architecture in cloud-native applications has enabled advantages in terms of application resilience, scalability, fast software delivery and the use of minimal resources, the packet processing rates are not correspondingly higher. This is mainly due to the overhead imposed by the design and architecture of the network stack. Packet processing rates can be improved by making changes to the network stack and without necessarily adding more powerful hardware.

    In this research, a study of various high-speed packet processing frameworks is presented, and a software high-speed packet I/O solution, i.e., one that is as hardware-agnostic as possible, is identified to improve packet throughput in container technology. The proposed solution is selected based on whether it requires changes to the underlying hardware. It is then evaluated in terms of packet throughput for different container networking modes, and compared with a simple UDP client-server application across those modes. From the results obtained, it is concluded that the packet mmap client-server application has higher performance than the simple UDP client-server application.
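
    As a minimal stand-in for the simple UDP client-server baseline mentioned above, the sketch below counts how many of N datagrams sent over the loopback interface arrive. Ports and counts are arbitrary; the thesis' actual testbed (Docker networking modes, the packet mmap solution) is not reproduced here.

```python
import socket

# Loopback UDP sketch: send n_packets datagrams to a local server socket and
# count how many are received. UDP gives no delivery guarantee, so the count
# may be lower than what was sent; a throughput test would time this loop.

def udp_loopback(n_packets=100, payload=b"x" * 64):
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))      # OS-assigned free port
    server.settimeout(1.0)
    addr = server.getsockname()
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(n_packets):
        client.sendto(payload, addr)
    received = 0
    try:
        for _ in range(n_packets):
            server.recv(2048)
            received += 1
    except socket.timeout:
        pass                           # UDP is lossy; count what arrived
    client.close()
    server.close()
    return received
```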

    Download full text (pdf)
    BTH2019Ginka
  • 58.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. BTH.
    Interactive animation of single-layer cumulus clouds using cloud map, 2019. In: Eurographics Proceedings STAG: Smart Tools and Applications in Graphics (2019) / [ed] M. Agus, M. Corsini and R. Pintus, Eurographics - European Association for Computer Graphics, 2019. Conference paper (Refereed).
    Abstract [en]

    In this paper, we present a physics-driven procedural method for the interactive animation of realistic, single-layered cumulus clouds at landscape scale. Our method employs coarse units called parcels for the physics simulation and achieves procedural micro-level volumetric amplification based on the macro physics parameters. However, contrary to previous methods, which achieve amplification directly inside the parcels, we make use of a two-dimensional texture called a cloud map to this end. This not only improves the shape and distribution of the cloud cover over the landscape but also boosts the animation efficiency significantly, allowing the overall approach to run at high frame rates, as verified by the experiments presented in the paper.

    Download full text (pdf)
    fulltext
  • 59.
    Goswami, Prashant
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. BTH.
    Markowicz, Christian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Hassan, Ali
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Real-time particle-based snow simulation on the GPU, 2019. In: Eurographics Symposium on Parallel Graphics and Visualization / [ed] Hank Childs and Stefan Frey, Porto: Eurographics - European Association for Computer Graphics, 2019. Conference paper (Refereed).
    Abstract [en]

    This paper presents a novel real-time particle-based method for simulating snow on the GPU. Our method captures compression and bonding between snow particles, and incorporates thermodynamics to model the realistic behavior of snow. The presented technique is computationally inexpensive and is capable of supporting rendering in addition to physics simulation at high frame rates. The method is completely parallel and is implemented using CUDA. Its high efficiency and simplicity make our method an ideal candidate for integration in existing game SDK frameworks.

    Download full text (pdf)
    fulltext
  • 60.
    Gummesson, Simon
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Johnson, Mikael
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Parallel Construction of Local Clearance Triangulations, 2019. Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The usage of navigation meshes for path planning in games and other domains is a common approach. One type of navigation mesh that has recently been developed is the Local Clearance Triangulation (LCT). The overall aim of the LCT is to construct a triangulation in such a way that a property called the Local Clearance can be used to calculate a path more efficiently and cheaply. At the time of writing the thesis, only one solution exists that creates an LCT, and it uses only the CPU. Since the process of creating an LCT involves the insertion of many points and edge flips that only affect a local area, it is interesting to investigate the potential performance gain of using the GPU.

    The objective of the thesis is to develop a GPU version based on the current CPU LCT solution and to investigate in which cases the proposed GPU algorithm performs better.

    A GPU version and a CPU version of the proposed algorithm have been developed to measure the performance gain of using the GPU; there are no algorithmic differences between these versions. To measure the performance of the algorithm, two tests have been constructed: the first, called the Object Insertion test, measures the time it takes to build an LCT using generated test maps; the second, called the Internal test, measures the internal performance of the algorithm. A comparison of the GPU algorithm with an LCT library called Triplanner was also performed.

    The proposed algorithm performed better on larger maps when implemented on a GPU compared to a CPU implementation. Compared to the Triplanner, the GPU implementation was faster on some of the larger maps.

    An algorithm that builds an LCT from scratch is presented. The results show that running the proposed algorithm on the GPU substantially increases its performance compared to a CPU implementation.
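
    The point insertions and edge flips mentioned above hinge on a circumcircle test: an edge is flipped when the opposite vertex lies inside a triangle's circumcircle. Below is the standard incircle predicate as a building block; it is generic triangulation machinery, not the thesis' GPU implementation.

```python
# Standard incircle predicate: for a counter-clockwise triangle (a, b, c),
# the 3x3 determinant below is positive when point d lies inside the
# circumcircle (an edge-flip trigger in Delaunay-style triangulations),
# negative when outside, and zero on the circle.

def incircle(a, b, c, d):
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return (
        (ax * ax + ay * ay) * (bx * cy - cx * by)
        - (bx * bx + by * by) * (ax * cy - cx * ay)
        + (cx * cx + cy * cy) * (ax * by - bx * ay)
    )

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))   # ccw right triangle
assert incircle(*tri, (0.5, 0.5)) > 0         # inside its circumcircle
assert incircle(*tri, (2.0, 2.0)) < 0         # outside
```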

    Download full text (pdf)
    fulltext
  • 63.
    Guo, Yang
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Heterogeneous Knowledge Sharing in eHealth: Modeling, Validation and Application, 2019. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Knowledge sharing has become an important issue in the eHealth field for improving the quality of healthcare service. However, since eHealth is a multidisciplinary and cross-organizational field, knowledge sharing is a serious challenge when it comes to developing eHealth systems. Thus, this thesis studies heterogeneous knowledge sharing in eHealth and proposes a knowledge sharing ontology. The study consists of three main parts: modeling, validation and application.

    In the modeling part, knowledge sharing in eHealth is studied from two main aspects: the first is the heterogeneous knowledge of different healthcare actors, and the second is the interactions among various healthcare actors. In this part, the contribution is to propose an Activity Theory based Ontology (ATO) model to highlight and represent these two aspects of eHealth knowledge sharing, which is helpful for designing efficient eHealth systems.

    In the validation part, a questionnaire-based survey is conducted to practically validate the feasibility of the proposed ATO model. The survey results are analyzed to explore the effectiveness of the proposed model for designing efficient knowledge sharing in eHealth. Further, a web-based software prototype is constructed to validate the applicability of the ATO model for practical eHealth systems. In this part, the contribution is to explore and show how the proposed ATO model can be validated.

    In the application part, the importance and usefulness of applying the proposed ATO model to solve two real problems are addressed. These two problems are healthcare decision making and appointment scheduling. Both share a similar basic challenge: a healthcare provider (e.g., a doctor) needs to provide optimal healthcare service (e.g., suitable medicine or fast treatment) to a healthcare receiver (e.g., a patient). Here, the optimization of the healthcare service needs to be achieved in accordance with eHealth knowledge which is distributed in the system and needs to be shared, such as the doctor’s competence, the patient’s health status, and priority control on patients’ diseases. In this part, the contribution is to propose a smart system called the eHealth Appointment Scheduling System (eHASS), based on the ATO model.

    This research work has been presented in eight conference and journal papers, which, along with an introductory chapter, are included in this compilation thesis.

    Download full text (pdf)
    fulltext
  • 64.
    Guo, Yang
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Yao, Yong
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    On Performance of Prioritized Appointment Scheduling for Healthcare, 2019. In: Journal of Service Science and Management, ISSN 1940-9893, E-ISSN 1940-9907, Vol. 12, p. 589-604. Article in journal (Refereed).
    Abstract [en]

    Designing appointment scheduling is a challenging task in the development of healthcare systems. An efficient solution approach can provide high-quality healthcare service between care providers (CPs) and care receivers (CRs). In this paper, we consider a healthcare system with heterogeneous CRs, in terms of urgent and routine CRs. Our suggested model assumes that the system gives service priority to the urgent CRs by allowing them to interrupt ongoing routine appointments. An appointment handoff scheme is suggested for the interrupted routine appointments, so that the routine CRs can attempt to re-establish the appointment scheduling with other available CPs. With these considerations, we study the scheduling performance of the system using a Markov chain based modeling approach. Numerical analysis is reported, and a simulation experiment is conducted to validate the numerical results.
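
    To illustrate the Markov chain style of analysis used above, the sketch below computes the stationary distribution of a tiny discrete-time chain whose states loosely stand for a CP being idle, serving a routine CR, or serving an urgent CR. The transition probabilities are invented for illustration only, not taken from the paper.

```python
# Tiny discrete-time Markov chain: states 0=idle, 1=serving routine CR,
# 2=serving urgent CR. Row i holds the transition probabilities out of
# state i (invented numbers; note routine service can be interrupted).
P = [
    [0.5, 0.4, 0.1],
    [0.3, 0.5, 0.2],
    [0.6, 0.0, 0.4],
]

def stationary(P, iterations=1000):
    """Approximate the stationary distribution by power iteration:
    repeatedly apply pi <- pi * P from a uniform start."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(P)  # long-run fraction of time in each state
```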

    Download full text (pdf)
    fulltext
  • 65.
    Gurram, Karthik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Chappidi, Maheshwar Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A Search-Based Approach for Robustness Testing of Web Applications, 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context: This thesis deals with the robustness testing of web applications on different web browsers, using a Selenium WebDriver to automate the browser. To increase the efficiency of this automated testing, we use a robustness method. Robustness testing exercises a system implementation under exceptional execution conditions to check whether it still fulfils its robustness requirements. Such robustness tests often apply random algorithms to select the actions to be executed on web applications. A search-based technique was used to automatically generate effective test cases, consisting of initial conditions and fault sequences. The success criterion in most cases is: "if it does not crash or hang the application, then it is robust".

    Problem: Software testing consumes a lot of time, is labour-intensive for writing test cases, and is expensive in the software development life cycle, so there has always been a need to decrease testing time. Manual testing requires a lot of effort and hard work when measured in person-months [1]. To overcome this problem, we use a search-based approach for robustness testing of web applications, which can dramatically reduce the human effort, time and costs related to testing.

    Objective: The purpose of this thesis is to develop an automated approach to carry out robustness testing of web applications, focusing on revealing defects related to a sequence of events triggered by a web system. To do so, we employ search-based techniques (e.g., the NSGA-II algorithm [1]). The main focus is on Ericsson Digital BSS systems, with special attention to robustness testing, and on investigating how automated robustness testing can be done so that the effort of keeping the tests up to date is minimized when the functionality of the application changes. This kind of automated testing depends heavily on the structure of the product being tested. In this thesis, the test object was structured in a way that made the testing method simple for fault revelation and less time-consuming.

    Method: For this approach, a meta-heuristic search-based genetic algorithm is used to make robustness testing of web applications efficient. In order to evaluate the effectiveness of the proposed approach, an experimental procedure is adopted, for which an experimental testbed is set up. The effectiveness of the proposed approach is measured by two objectives: fault revelation and test sequence length. The effectiveness is also measured by evaluating the feasible, cost-effective output test cases.

    Results: The results collected from our approach show that by reducing the test sequence length we can reduce the time consumed, and by using the NSGA-II algorithm we found as many faults as possible when testing web applications at Ericsson.

    Conclusion: The attempt at robustness testing of web applications partly succeeded. This kind of robustness testing in our approach depends strongly on the algorithm used. We can conclude that by using these two objectives, we can reduce both the cost of testing and the time consumed.
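
    NSGA-II ranks candidates by Pareto dominance over the two objectives named above: faults revealed (maximize) and test sequence length (minimize). A minimal sketch of extracting the first non-dominated front; the (faults, length) tuples are invented examples, not data from the thesis.

```python
# Pareto dominance for two objectives: maximize faults (index 0),
# minimize sequence length (index 1).

def dominates(a, b):
    """a dominates b: no worse in both objectives, better in at least one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def first_front(candidates):
    """Candidates not dominated by any other candidate (NSGA-II rank 0)."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

tests = [(5, 10), (3, 4), (5, 7), (2, 12), (4, 4)]
print(first_front(tests))  # -> [(5, 7), (4, 4)]
```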

    Download full text (pdf)
    fulltext
  • 66.
    Gustafsson, Jacob
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Törnkvist, Adam
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Secure handling of encryption keys for small businesses: A comparative study of key management systems, 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background: A recent study shows that key management in the corporate world is very painful due to, among other reasons, a lack of knowledge and resources. Instead, some companies embed encryption keys and other software secrets directly in the source code of the application that uses them, introducing the risk of exposing the secrets. Today, there are multiple systems for managing keys, but it can be hard to pick a suitable one.

    Objectives: The objectives of the thesis are to identify available key management systems for securing secrets in software, evaluate their eligibility to be used by small businesses based on various attributes and recommend a best practice to configure the most suited system for managing software secrets.

    Methods: Key management systems are identified through an extensive search, using both scientific and non-scientific search engines. Identified key management systems were compared against a set of requirements created from a small business perspective. The systems that fulfilled the requirements were implemented and comprehensively evaluated through SWOT analyses based on various attributes. Each system was then scored and compared against each other based on these attributes. Lastly, a best practice guide for the most suitable key management system was established.

    Results: During the thesis, a total of 54 key management systems with various features and purposes were identified. Out of these 54 systems, five key management systems were comprehensively compared: Pinterest Knox, HashiCorp Vault, Square Keywhiz, OpenStack Barbican, and CyberArk Conjur. Out of these five, HashiCorp Vault was deemed to be the most suitable system for small businesses.

    Conclusions: There is currently a broad selection of key management systems available. Their quality, price, and intended use vary, which makes it time-consuming to identify the system best suited to one's needs. The thesis concludes that HashiCorp Vault is the most suitable system based on the needs presented. However, the thesis can also be used by businesses with other needs as a guideline to aid the problem of choosing a key management system.
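    The risk the background describes, embedding secrets directly in source code, is commonly avoided even without a full key management system by loading secrets from the environment (or fetching them from a system such as HashiCorp Vault at startup). A minimal sketch; the variable name and demo value are hypothetical:

```python
import os

def load_secret(name):
    """Load a secret from the environment instead of hardcoding it in source.
    Raises if the secret is missing, so misconfiguration fails loudly."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} not set; configure it outside the code base")
    return value

# Demo only: a real deployment would set this in the service environment,
# or fetch it from a key management system such as HashiCorp Vault.
os.environ.setdefault("APP_DB_PASSWORD", "example-not-a-real-secret")
print(load_secret("APP_DB_PASSWORD"))
```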

    Download full text (pdf)
    fulltext
  • 67.
    Heiding, John
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Increasing Phenotype Diversity In Terrain Generation Using Fourier Transform: Implementation of Fourier transform as an intermediate phenotype for genetic algorithms2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Creating resources for games and 3D environments is an effort-consuming process. Some are looking to procedural algorithms to aid in this endeavour, but the effort to configure the algorithms can be time-consuming in itself. This paper continues from a set of papers by Frade et al., where the process of configuration is surrendered to the algorithm by using genetic optimization together with a set of fitness functions. This is then tested on procedural generation of height maps.

    Objectives. The original algorithm utilizes a tree of functions that generates height maps using genetic optimization and a set of fitness functions. The output of the original algorithm is highly dependent on a specific noise function. This paper investigates whether the inverse Fourier transform can be used as an intermediate phenotype in order to reduce the dependence between the set of functions in the algorithm and the types of output.

    Methods. A reference implementation was first produced and verified. The Fourier transform was then added to the algorithm as an intermediate phenotype, together with improvements on the original algorithm. The new algorithm was then put to the test via five experiments, where the output was compared with the reference implementation using manual review.

    Results. The implementation of the Fourier transform attempted in this paper exclusively produced noisy output.

    Conclusions. The modified algorithm did not produce viable output. This is most likely due to the behaviour of the Fourier transform in itself and in relation to the implementation of the fitness calculation.
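    The idea of the inverse Fourier transform as an intermediate phenotype can be sketched with a naive inverse DFT: the genotype is a list of frequency coefficients, and the phenotype is the height profile synthesized from them. The sketch below is 1-D for brevity and the coefficients are hypothetical, not taken from the thesis:

```python
import cmath

def inverse_dft(coeffs, n):
    """Naive inverse DFT: build n real height samples from complex frequency
    coefficients. This mirrors the intermediate-phenotype idea: the genotype
    is the coefficient list, the phenotype the synthesized height profile."""
    heights = []
    for x in range(n):
        s = sum(c * cmath.exp(2j * cmath.pi * k * x / n)
                for k, c in enumerate(coeffs))
        heights.append(s.real / n)
    return heights

# Hypothetical genotype: energy only in the lowest frequencies,
# which yields a smooth terrain profile rather than noise.
genotype = [0, 4 + 0j, 2 + 1j] + [0] * 13
profile = inverse_dft(genotype, 16)
print(min(profile), max(profile))
```

    Concentrating genotype energy at high frequencies instead would produce exactly the kind of noisy output the results describe.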

    Download full text (pdf)
    Increasing Phenotype Diversity
  • 68.
    Henesey, Lawrence
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lizneva, Yulia
    Student.
    Anwar, Mahwish
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A multi-agent system with blockchain for container stacking and dispatching.2019In: 21st International Conference on Harbor, Maritime and Multimodal Logistics Modeling and Simulation, HMS 2019, Dime University of Genoa , 2019, p. 79-87Conference paper (Refereed)
    Abstract [en]

    Port logistical supply chains play a very important role in society. Their complex and adaptive behaviours motivate the suggested application of combining a multi-agent system with blockchain for solving complex problems. Several technologies have been proven to work well in logistics; however, the concept of combining converging technologies such as blockchain with deep reinforcement learning multi-agent systems is viewed as a novel approach to addressing the complexity associated with many facets of logistics. A simulator was developed and tested for the problem of container stacking. The simulation results indicate a more robust approach than currently used tools and methods. © Harbor, Maritime and Multimodal Logistics Modeling and Simulation, HMS 2019. All Rights Reserved.

  • 69.
    Iqbal, Muhammad Imran
    et al.
    Axis Communications AB, SWE.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Error sensitivity analysis of DMB transport streams2019In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 154424-154434, article id 8876649Article in journal (Refereed)
    Abstract [en]

    In this paper, we examine the sensitivity of the digital multimedia broadcasting (DMB) MPEG-2 transport stream (TS) format to transmission errors. To find the sensitivity of different parts of TS packets to transmission errors, each TS packet is divided into four cells: the first three cells comprise 48 bytes each, and the last cell is 44 bytes long. Bit errors are then introduced into these different parts of the TS packets. The sensitivity of DMB videos to transmission errors and their locations is assessed in terms of the following measures: 1) Number of decoder crashes; 2) Number of decodable videos; 3) Total number of decodable frames; and 4) Objective perceptual video quality of the decoded videos. The structural similarity index and visual information fidelity criterion are used as objective perceptual quality metrics. Simulations are performed on seven different DMB videos using various bit error rates. The results show that the first cell of the TS packets is highly sensitive to bit errors compared to the subsequent three cells, both in terms of spatial and temporal video quality. Further, the sensitivity decreases from Cell 1 to Cell 4 of a DMB TS packet. The error sensitivity analysis reported in this paper may guide the development of more reliable transmission systems for future DMB systems and services. Specifically, the insights gained from this study may support designing better error control schemes that take the sensitivity of different parts of DMB TS packets into consideration.
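    The cell split and bit-error injection described above can be sketched as follows; the packet contents and error positions are hypothetical, but the 48/48/48/44-byte split matches the paper's description of a 188-byte TS packet:

```python
import random

CELL_SIZES = [48, 48, 48, 44]  # the four cells of a 188-byte MPEG-2 TS packet

def split_cells(packet):
    """Split a 188-byte TS packet into the four cells used in the study."""
    assert len(packet) == 188
    cells, offset = [], 0
    for size in CELL_SIZES:
        cells.append(packet[offset:offset + size])
        offset += size
    return cells

def flip_bits(cell, n_errors, rng):
    """Introduce n_errors random bit errors into one cell (returns new bytes)."""
    data = bytearray(cell)
    for _ in range(n_errors):
        pos = rng.randrange(len(data) * 8)
        data[pos // 8] ^= 1 << (pos % 8)
    return bytes(data)

packet = bytes(range(188))       # dummy packet payload
cells = split_cells(packet)
corrupted = flip_bits(cells[0], 3, random.Random(42))
print([len(c) for c in cells], corrupted != cells[0])
```

    In the study, the corrupted packets would then be fed to the decoder and the four sensitivity measures recorded per cell.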

  • 70.
    Isenstierna, Tobias
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Popovic, Stefan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Computer systems in airborne radar: Virtualization and load balancing of nodes2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Introduction. For hardware used in the radar systems of today, technology is evolving at an increasing rate. For existing software in radar systems that relies on specific drivers or hardware, this quickly becomes a problem. When the required hardware is no longer produced or becomes outdated, compatibility problems emerge between the new hardware and existing software. This research focuses on exploring whether virtualization technology can help solve this problem. Would it be possible to address the compatibility problem with the help of hypervisor solutions, while also maintaining high performance?

    Objectives. The aim of this research is to explore virtualization technology, with a focus on hypervisors, to improve the way that hardware and software cooperate within a radar system. The research will investigate whether it is possible to solve compatibility problems between new hardware and already existing software, while also analysing the performance of virtual solutions compared to non-virtualized ones.

    Methods. The proposed method is an experiment where the two hypervisors Xen and KVM will be analysed. The hypervisors will be running on two different systems. A native environment with similarities to a radar system will be built and then compared with the same system, but with hypervisor solutions applied. Research around the area of virtualization will be conducted with a focus on security, hypervisor features and compatibility.

    Results. The results will present a proposed virtual environment setup with the hypervisors installed. To address the compatibility issue, an old operating system has been used to prove that implemented virtualization works. Finally performance results are presented for the native environment compared against a virtual environment.

    Conclusions. From the results gathered with benchmarks, we can see that individual performance may vary, which is to be expected on different hardware. A virtual setup has been built, including the Xen and KVM hypervisors, together with NAS communication. Running an old operating system as a virtual guest, compatibility has been shown to exist between software and hardware using KVM as the virtual solution. From the results gathered, KVM seems like a good solution to investigate further.

    Download full text (pdf)
    fulltext
  • 71.
    Jerčić, Petar
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arousal Measurement Reflected in the Pupil Diameter for a Decision-Making Performance in Serious Games2019In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer , 2019, Vol. 11863, p. 287-298Conference paper (Refereed)
    Abstract [en]

    This paper sets out to investigate the potential of using the pupil diameter measure as a contactless biofeedback method. The investigation examined how the interdependent and competing activation of the autonomic nervous system is reflected in the pupil diameter, and how it affects performance on a decision-making task in serious games. On-line biofeedback based on physiological measurements of arousal was integrated into a serious game set in the financial context. The pupil diameter was validated against heart rate data measuring arousal, and the effects of such arousal were investigated. Physiological arousal was observable in both the heart and pupil data. Furthermore, the participants with lower arousal took less time to reach their decisions, and those decisions were more successful, in comparison to the participants with higher arousal. Moreover, such participants were able to get a higher total score and finish the game. This study validated the potential usage of pupil diameter as an unobtrusive measure of biofeedback, which would be beneficial for the investigation of arousal in human decision-making inside serious games. © IFIP International Federation for Information Processing, 2019.

  • 72.
    Jerčić, Petar
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    What could the baseline measurements predict about decision-making performance in serious games set in the financial context2019In: 2019 11th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2019 - Proceedings, Institute of Electrical and Electronics Engineers Inc. , 2019Conference paper (Refereed)
    Abstract [en]

    This paper sets out to investigate how the basal activation of the parasympathetic and sympathetic nervous systems may affect and predict the decision-making performance of players in serious games. In order to investigate the basal activation of both branches of the autonomic nervous system, pupil diameter and heart rate were recorded during baseline and analyzed in regards to performance scores in the serious game. It was found that the balance between the parasympathetic and sympathetic activation was responsible for beneficial decision-making performance, while lower sympathetic activation was found to be associated with the higher level reached in the game. It is suggested that the balance between the basal activation of both branches of the autonomic nervous system recorded during the baseline may predict the decision-making performance of players on the subsequent tasks in serious games. © 2019 IEEE.

  • 73.
    Jerčić, Petar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Hagelbäck, Johan
    Linnéuniversitetet, SWE.
    Lindley, Craig
    Computational Modelling Group, Data61, CSIRO, AUS.
    An affective serious game for collaboration between humans and robots2019In: Entertainment Computing, ISSN 1875-9521, E-ISSN 1875-953X, Vol. 32, article id 100319Article in journal (Refereed)
    Abstract [en]

    Elicited physiological affect in humans collaborating with their robot partners was investigated to determine its influence on decision-making performance in serious games. A turn-taking version of the Tower of Hanoi game was used, where physiological arousal and valence underlying such human-robot proximate collaboration were investigated. A comparable decision performance in the serious game was found between human and non-humanoid robot arm collaborator conditions, while higher physiological affect was found in humans collaborating with such robot collaborators. It is suggested that serious games which are carefully designed to take into consideration the elicited physiological arousal might witness a better decision-making performance and more positive valence using non-humanoid robot partners instead of human ones. © 2019 The Authors

    Download full text (pdf)
    fulltext
  • 74.
    Jiahui, Yu
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Research on collaborative filtering algorithm based on knowledge graph and long tail2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background: With the popularization of the Internet and the development of information technology, network information data has shown explosive growth, and the problem of information overload [1] has been highlighted. Recommendation systems came into being to help users find the information they are interested in among a large amount of information, and to help information producers get their information noticed by more users.

    Objectives: However, the sparseness problem, the neglect of semantic information, and the failure to consider coverage limit the effect of traditional recommendation systems to some extent. This paper aims to address these problems.

    Methods: This paper improves the performance of the recommendation system by constructing a knowledge graph in the domain and using knowledge embedding technology (OpenKE), combined with a collaborative filtering algorithm based on the long tail theory. Three experiments are used to verify the proposed approach's recommendation performance and its ability to mine long-tail information, comparing it with other collaborative filtering algorithms.

    Results: The results show that the proposed approach improves precision, recall and coverage, and has a better ability to mine long-tail information.

    Conclusion: The proposed method improves recommendation performance by reducing the sparsity of the matrix and mining the semantic information between items. At the same time, the long tail theory is considered, so that users can be recommended more items that may be of interest.
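    One common way to let long-tail items surface, blending predicted relevance with an item-novelty bonus, can be sketched as below. This illustrates the long-tail idea in general, not the thesis's exact algorithm, and all data is hypothetical:

```python
from collections import Counter

def long_tail_scorer(ratings, alpha=0.5):
    """Return a scoring function that blends predicted relevance with an
    item-novelty bonus, so rarely rated (long-tail) items can surface.
    `ratings` maps user -> set of rated items; `alpha` weighs novelty.
    All data here is hypothetical."""
    popularity = Counter(item for items in ratings.values() for item in items)
    total_users = len(ratings)

    def score(item, relevance):
        novelty = 1 - popularity[item] / total_users  # rare items score higher
        return (1 - alpha) * relevance + alpha * novelty

    return score

ratings = {"u1": {"a", "b"}, "u2": {"a"}, "u3": {"a", "c"}}
score = long_tail_scorer(ratings)
# "a" is rated by every user (no novelty); "c" only once (high novelty)
print(score("a", 0.9), score("c", 0.9))
```

    A coverage-oriented system would then rank candidates by this blended score instead of raw predicted relevance.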

    Download full text (pdf)
    Research on collaborative
  • 75.
    Josyula, Sai Prashanth
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Parallel algorithms for real-time railway rescheduling2019Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In railway traffic systems, it is essential to achieve a high punctuality to satisfy the goals of the involved stakeholders. Thus, whenever disturbances occur, it is important to effectively reschedule trains while considering the perspectives of various stakeholders. The train rescheduling problem is a complex task to solve, both from a practical and a computational perspective. From the latter perspective, a reason for the complexity is that the rescheduling solution(s) of interest may be dispersed across a large solution space. This space needs to be navigated fast while avoiding portions leading to undesirable solutions and exploring portions leading to potentially desirable solutions. The use of parallel computing enables such a fast navigation of the search tree. Though competitive algorithmic approaches for train rescheduling are a widespread topic of research, limited research has been conducted to explore the opportunities and challenges in parallelizing them.

    This thesis presents research studies on how trains can be effectively rescheduled while considering the perspectives of passengers along with that of other stakeholders. Parallel computing is employed, with the aim of advancing knowledge about parallel algorithms for solving the problem under consideration.

    The presented research contributes with parallel algorithms that reschedule a train timetable during disturbances and studies the incorporation of passenger perspectives during rescheduling. Results show that the use of parallel algorithms for train rescheduling improves the speed of solution space navigation and the quality of the obtained solution(s) within the computational time limit.

    This thesis consists of an introduction and overview of the work, followed by four research papers which present: (1) A literature review of studies that propose and apply computational support for train rescheduling with a passenger-oriented objective; (2) A parallel heuristic algorithm to solve the train rescheduling problem on a multi-core parallel architecture; (3) A conflict detection module for train rescheduling, which performs its computations on a graphics processing unit; and (4) A redesigned parallel algorithm that considers multiple objectives while rescheduling.
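    The core idea of navigating the rescheduling solution space in parallel can be sketched by evaluating candidate train orderings concurrently. The toy scenario below (three trains sharing one track segment) is hypothetical and far simpler than the real problem:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import permutations

# Toy disturbance scenario: trains with (arrival, run_time) on one shared
# track segment; a candidate schedule is an ordering of the trains.
# All numbers are hypothetical.
TRAINS = {"T1": (0, 4), "T2": (1, 2), "T3": (2, 3)}

def total_delay(order):
    """Sum of departure delays if trains enter the segment in the given order."""
    time, delay = 0, 0
    for t in order:
        arrival, run = TRAINS[t]
        start = max(time, arrival)
        delay += start - arrival
        time = start + run
    return delay

# Evaluate all candidate orderings in parallel and pick the best.
with ThreadPoolExecutor(max_workers=4) as pool:
    candidates = list(permutations(TRAINS))
    delays = list(pool.map(total_delay, candidates))
best = min(zip(delays, candidates))
print(best)
```

    A real parallel rescheduler prunes the search tree rather than enumerating permutations, and weighs multiple objectives (e.g. passenger delay) instead of a single delay sum.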

    Download full text (pdf)
    fulltext
  • 76.
    Josyula, Sai Prashanth
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Törnquist Krasemann, Johanna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A parallel algorithm for multi-objective train reschedulingManuscript (preprint) (Other academic)
  • 77.
    Josyula, Sai Prashanth
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Törnquist Krasemann, Johanna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Exploring the Potential of GPU Computing in Train Rescheduling2019In: Proceedings of the 8th International Conference on Railway Operations Modelling and Analysis, Norrköping, 2019., 2019Conference paper (Refereed)
    Download full text (pdf)
    fulltext
  • 78.
    Kabra, Amit
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Clustering of Driver Data based on Driving Patterns2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Data analysis methods are important for analyzing the ever-growing, enormous quantity of high-dimensional data. Cluster analysis separates or partitions the data into disjoint groups such that data in the same group are similar while data between groups are dissimilar. The focus of this thesis study is to identify natural groups or clusters of drivers using data on driving style. In finding such groups of drivers, combinations of dimensionality reduction and clustering algorithms are evaluated. The dimensionality reduction algorithms used in this thesis are Principal Component Analysis (PCA) and t-distributed stochastic neighbour embedding (t-SNE). The clustering algorithms, K-means clustering and hierarchical clustering, were selected after performing a literature review. In this thesis, PCA with K-means, PCA with hierarchical clustering, t-SNE with K-means and t-SNE with hierarchical clustering are evaluated on the Volvo Cars drivers dataset based on their driving styles. The dataset is normalized first, and a Markov chain of driving styles is calculated. This Markov chain dataset has very high dimensionality, and hence dimensionality reduction algorithms are applied to reduce the dimensions. The reduced dataset is used as input to the selected clustering algorithms. The combinations of algorithms are evaluated using performance metrics like the Silhouette Coefficient, the Calinski-Harabasz Index and the Davies-Bouldin Index. Based on experiments and analysis, the combination of t-SNE and K-means is found to be the best of the compared combinations in terms of all performance metrics and is chosen to cluster the drivers based on their driving styles.
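    The clustering step that follows dimensionality reduction can be sketched with a minimal K-means; this is an illustration, not the thesis's implementation, and the 2-D "driving style" points are hypothetical. Initial centres are spread deterministically across the input to keep the sketch reproducible:

```python
def kmeans(points, k, iters=10):
    """Minimal K-means on 2-D points, a stand-in for the clustering step that
    follows PCA/t-SNE dimensionality reduction. Initial centres are spread
    across the input instead of sampled randomly, for determinism."""
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2
                                      + (p[1] - centers[c][1]) ** 2)
            clusters[nearest].append(p)
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated hypothetical "driving style" groups in a reduced 2-D space.
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(points, 2)
print(sorted(len(c) for c in clusters))
```

    Metrics such as the Silhouette Coefficient would then be computed on `clusters` to compare algorithm combinations.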

    Download full text (pdf)
    BTH2019Kabra
  • 79.
    Karlsson, Emelia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lidmark, Joel
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Frequency and encryption usage, investigation of the wireless landscape.: A study of access points in Karlskrona2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Wireless connectivity is simple and convenient for the user, which is why it is predominantly used today for local networks at home. However, the potential drawbacks of this technology are unknown to many of its users. This study examines some of these issues in the context of what is used today.

    Objectives. This study intends to research what types of security features and frequency settings are in use today. It also aims to evaluate what this means for the security and usability affecting the user.

    Methods. The approach of this study is to gather networks in different geographical areas. To do this, a Raspberry Pi with an external antenna is used. When the data collection is complete, the networks are broken down into categories.

    Results. The results show significant frequency overlap on the most commonly used channels. There is vastly more overlap in areas with apartment buildings than in other residential areas. The results also show that most networks use secure encryption settings.

    Conclusions. Careful selection of channels is required to minimise interference, but methods for doing so are specific to each environment. Security-wise there are no big concerns, except when it comes to password selection.
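    The channel-overlap issue in the 2.4 GHz band can be sketched with the usual rule of thumb that 20 MHz channels fewer than five channel numbers apart overlap. The scan result below is hypothetical:

```python
def overlapping(ch_a, ch_b):
    """Two 2.4 GHz Wi-Fi channels overlap when their centre frequencies are
    closer than the ~20 MHz channel width, i.e. fewer than 5 channel numbers
    apart (hence the classic non-overlapping set 1, 6, 11)."""
    return abs(ch_a - ch_b) < 5

def interference_count(channel, observed):
    """How many observed access points overlap a candidate channel."""
    return sum(overlapping(channel, other) for other in observed)

# Hypothetical scan result: the common pattern of crowded channels 1 and 6,
# with one neighbour on channel 3 and one on 11.
observed = [1, 1, 6, 6, 6, 11, 3]
best = min(range(1, 12), key=lambda ch: interference_count(ch, observed))
print(best, interference_count(best, observed))
```

    As the conclusions note, the best channel is environment-specific: it depends entirely on what a scan of the neighbourhood actually observes.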

    Download full text (pdf)
    BTH2019LidmarkKarlsson
  • 80.
    Klotins, Eriks
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A collaborative method for identification and prioritization of data sources in MDREManuscript (preprint) (Other academic)
    Abstract [en]

    Requirements engineering (RE) literature acknowledges the importance of stakeholder identification early in the software engineering activities. However, literature overlooks the challenge of identifying and selecting the right stakeholders and the potential of using other inanimate requirements sources for RE activities for market-driven products.

    Market-driven products are influenced by a large number of stakeholders. Consulting all stakeholders directly is impractical, and companies utilize indirect data sources, e.g. documents and representatives of larger groups of stakeholders. However, without a systematic approach, companies often use easy-to-access or hard-to-ignore data sources for RE activities. As a consequence, companies waste resources on collecting irrelevant data or develop the product based on input from a few sources, thus missing market opportunities.

    We propose a collaborative and structured method to support analysts in the identification and selection of the most relevant data sources for market-driven product engineering. The method consists of four steps, aims to build consensus between different perspectives in an organization, and facilitates the identification of the most relevant data sources. We demonstrate the use of the method with two industrial case studies.

    Our results show that the method can support market-driven requirements engineering in two ways: (1) by providing systematic steps to identify and prioritize data sources for RE, and (2) by highlighting and resolving discrepancies between different perspectives in an organization.
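    A consensus step of the kind the method aims for can be sketched as a Borda-style aggregation of per-perspective rankings. The perspectives and data sources below are hypothetical, and this is an illustration rather than the paper's exact four-step procedure:

```python
from collections import defaultdict

def aggregate_rankings(rankings):
    """Borda-style aggregation of per-perspective rankings of data sources.
    Each ranking lists sources from most to least relevant; an item's total
    is the sum of its positions, so a lower total means higher consensus
    relevance. Perspectives and sources are hypothetical."""
    totals = defaultdict(int)
    for ranking in rankings.values():
        for position, source in enumerate(ranking):
            totals[source] += position
    return sorted(totals, key=lambda s: totals[s])

rankings = {
    "product_manager": ["support tickets", "sales reps", "app reviews"],
    "developer": ["app reviews", "support tickets", "sales reps"],
    "ux": ["support tickets", "app reviews", "sales reps"],
}
print(aggregate_rankings(rankings))
```

    Large disagreements between individual rankings and the aggregate are exactly the discrepancies the method surfaces for discussion.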

  • 81.
    Kohstall, Jan
    et al.
    acs Plus GmbH, DEU.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Angelova, Milena
    Technical University Sofia, BGR.
    Ensembles of Cluster Validation Indices for Label Noise Filtering2020In: Studies in Computational Intelligence, Springer, 2020, 864, p. 71-98Chapter in book (Refereed)
    Abstract [en]

    Cluster validation measures are designed to find the partitioning that best fits the underlying data. In this study, we show that these measures can be used for identifying mislabeled instances or class outliers prior to training in supervised learning problems. We introduce an ensemble technique, entitled CVI-based Outlier Filtering, which identifies and eliminates mislabeled instances from the training set, and then builds a classification hypothesis from the set of remaining instances. Our approach assigns to each instance in the training set several cluster validation scores representing its potential of being a class outlier with respect to the clustering properties that the used validation measures assess. In this respect, the proposed approach may be referred to as a multi-criteria outlier filtering measure. In this work, we specifically study and evaluate value-based ensembles of cluster validation indices. The added value of this approach in comparison to the logical and rank-based ensemble solutions is discussed and further demonstrated. © 2020, Springer Nature Switzerland AG.
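    The per-instance scoring idea can be sketched with a single silhouette-style index on 1-D toy data; the chapter's ensemble combines several such indices, so this is a simplified illustration with hypothetical values:

```python
def silhouette(instance, same_class, other_class):
    """Silhouette-style score of one labelled instance: high when it sits
    close to its own class and far from the other. 1-D toy data, with one
    validation index standing in for the ensemble of indices."""
    a = sum(abs(instance - x) for x in same_class) / len(same_class)
    b = sum(abs(instance - x) for x in other_class) / len(other_class)
    return (b - a) / max(a, b)

def filter_outliers(class_a, class_b, threshold=0.0):
    """Drop instances whose score falls below the threshold: they sit closer
    to the other class and are likely mislabelled."""
    return [x for x in class_a
            if silhouette(x, [y for y in class_a if y != x], class_b) >= threshold]

class_a = [1.0, 1.2, 0.9, 9.5]   # 9.5 looks mislabelled: it sits inside class_b
class_b = [10.0, 9.8, 10.2]
print(filter_outliers(class_a, class_b))
```

    A classifier would then be trained on the filtered set; the ensemble variants differ in how scores from multiple indices are combined (by value, logically, or by rank).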

  • 82.
    Kola, Ramya Sree
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Generation of synthetic plant images using deep learning architecture2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background:

    Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are the current state-of-the-art machine learning systems for data generation, designed with two neural networks in the initial architecture proposal: a generator and a discriminator. These neural networks compete in a zero-sum game to generate data with realistic properties indistinguishable from those of the original datasets. GANs have interesting applications in various domains like image synthesis, 3D object generation in the gaming industry, fake music generation (Dong et al.), text-to-image synthesis and many more. Despite their widespread application domains, GANs are most popular for image data synthesis. Various architectures have been developed for image synthesis, evolving from fuzzy images of digits to photorealistic images.

    Objectives:

    In this research work, we study various literature on different GAN architectures to understand the significant works done to improve them. The primary objective of this research work is the synthesis of plant images using the Style GAN (Karras, Laine and Aila, 2018) variant of GAN based on style transfer. The research also focuses on identifying various machine learning performance evaluation metrics that can be used to measure the Style GAN model on the generated image datasets.

    Methods:

    A mixed-method approach is used in this research. We review various literature on GANs and elaborate in detail how each GAN network is designed and how it evolved over the base architecture. We then study the Style GAN (Karras, Laine and Aila, 2018a) design details, as well as related literature on GAN model performance evaluation and measuring the quality of generated image datasets. We conduct an experiment to implement the Style-based GAN on a leaf dataset (Kumar et al., 2012) to generate leaf images that are similar to the ground truth. We describe in detail the various steps in the experiment, such as data collection, preprocessing, training and configuration. We also evaluate the performance of the Style GAN training model on the leaf dataset.

    Results:

    We present the results of the literature review and the conducted experiment to address the research questions. We review and elaborate various GAN architectures and their key contributions, along with numerous qualitative and quantitative evaluation metrics for measuring the performance of a GAN architecture. We then present the synthetic data samples generated by the Style-based GAN learning model at various training GPU hours, including the latest synthetic data sample after training for around ~8 GPU days on the leafsnap dataset (Kumar et al., 2012). The results are of decent enough quality to expand the dataset for most of the tested samples. We visualize the model performance with TensorBoard graphs and an overall computational graph for the learning model. We calculate the Fréchet Inception Distance score for our leaf Style GAN, observed to be 26.4268 (the lower the better).

    Conclusion:

    We conclude the research work with an overall review of the sections in the paper. The generated fake samples are very similar to the input ground truth and appear convincingly realistic to human visual judgement. However, the calculated FID score measuring the performance of the leaf Style GAN is large compared to that of the original Style GAN's celebrity HD faces dataset. We attempt to analyze the reasons for this large score.
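    For intuition about why a lower FID is better: the Fréchet distance between two 1-D Gaussians has a simple closed form, and FID applies the multivariate version of it to Inception feature statistics. The sketch below illustrates the metric's behaviour, not the high-dimensional feature statistics used in practice:

```python
import math

def fid_1d(mu1, var1, mu2, var2):
    """Fréchet distance between two 1-D Gaussians -- the scalar case of the
    formula FID applies to Inception feature statistics:
    d^2 = |mu1 - mu2|^2 + var1 + var2 - 2*sqrt(var1*var2)."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2 * math.sqrt(var1 * var2)

# Identical statistics give distance 0; diverging statistics raise the score,
# which is why a lower FID indicates generated images closer to the real set.
print(fid_1d(0.0, 1.0, 0.0, 1.0), fid_1d(0.0, 1.0, 2.0, 4.0))
```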

    Download full text (pdf)
    fulltext
  • 83.
    Kondepati, Divya Naga Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Mallidi, Satish Kumar Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance Testing and Assessment of Various Network-Based Applications2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Performance Testing is one of the crucial parts of any software cycle process. In today’s world, there are numerous network-based applications. Manual Testing and Automated Testing are the two important ways to test any type of application. For Manual Testing, a mobile application known as BlekingeTrafiken is used. For Automated Testing, a web application known as Edmodo is used. Selenium is the automated tool included for automated testing. However, each application has many users, and the performance of the application may decrease as the number of users increases. Performance of an application also depends on response times, mean, stability, speed, capacity, and accuracy. The performance also depends on the device (memory consumption, battery, software variation), on the server/API (fewer calls), and on the network (jitter, packet loss, network speed). There are several tools for performance testing. By using these tools, we can get accurate performance results for each request.

    In this thesis, we performed manual testing of a mobile application by increasing the number of users under similar network conditions, automated testing of a web application under various test cases, and performance testing of an iPad application (PLANETJAKTEN), a real-time gaming application used by children to learn mathematics. Apache JMeter is the tool used for performance testing. The interaction between the JMeter tool and the iPad is done through the HTTP Proxy method. When any user starts using the application, we can measure the performance of each request sent by the user. Nagios is the tool used to monitor the various environments. Results show that for manual testing, the time taken for connecting to Wi-Fi is low compared to opening and using the application. For automated testing, it is found that the time taken to run each test case for the first time is high compared to the remaining trials. For performance testing, the experimental results show that the error percentage (the percentage of failed requests) is higher for logging into the application than for using the application. 
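    The two aggregate figures the abstract relies on, mean response time and error percentage, are what a JMeter summary report computes per request label. A minimal sketch of that aggregation (the `RequestResult` type is hypothetical, introduced only for illustration):

    ```python
    from dataclasses import dataclass

    @dataclass
    class RequestResult:
        elapsed_ms: float   # response time for one request
        success: bool       # whether the request completed without error

    def summarize(results):
        """Aggregate per-request samples the way a JMeter summary report does:
        mean response time in ms and error percentage (share of failed requests)."""
        n = len(results)
        mean_ms = sum(r.elapsed_ms for r in results) / n
        error_pct = 100.0 * sum(1 for r in results if not r.success) / n
        return mean_ms, error_pct
    ```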

    Download full text (pdf)
    fulltext
  • 84.
    Korsbakke, Andreas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ringsell, Robin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Promestra Security compared with other random number generators2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Being able to trust cryptographic algorithms is a crucial part of society today, because of all the information that is gathered by companies all over the world. With this thesis, we want to help both Promestra AB and potential future customers to evaluate if you can trust their random number generator.

    Objectives. The main objective of the study is to compare the random number generator in Promestra security with other random number generators, such as Mersenne Twister and Blum Blum Shub, with the help of the test suite made by the National Institute of Standards and Technology.

    Methods. The selected method in this study was to gather a total of 100 million bits from each random number generator and use these in the National Institute of Standards and Technology test suite for 100 tests to get a fair evaluation of the algorithms. The test suite provides a statistical summary which was then analyzed.

    Results. The results show how many iterations out of 100 passed, as well as the distribution of the results. The obtained results show that some of the tested random number generators clearly struggle in many of the tests. They also show that half of the tested generators passed all of the tests.

    Conclusions. Promestra security and Blum Blum Shub are close to passing all the tests, but in the end, they cannot be considered preferable random number generators. The five that passed and seem to have no clear limitations are: Random.org, Micali-Schnorr, Linear-Congruential, CryptGenRandom, and Mersenne Twister.
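    The simplest test in the NIST SP 800-22 suite used here is the frequency (monobit) test, which checks whether ones and zeros are roughly equally likely. A self-contained sketch (the suite applies many further tests; this is only the first):

    ```python
    import math

    def monobit_frequency_test(bits):
        """NIST SP 800-22 frequency (monobit) test: returns the p-value for
        the hypothesis that the bit sequence is balanced. By the suite's
        convention, p >= 0.01 means the sequence passes this test."""
        n = len(bits)
        s = sum(1 if b else -1 for b in bits)   # +1 per one, -1 per zero
        s_obs = abs(s) / math.sqrt(n)
        return math.erfc(s_obs / math.sqrt(2))
    ```

    A perfectly balanced sequence yields p = 1.0, while a constant stream of ones fails decisively.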

    Download full text (pdf)
    BTH2019RingsellKorsbakke
  • 85.
    Krantz, Amandus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cluster-based Sample Selection for Document Image Binarization2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The current state-of-the-art, in terms of performance, for solving document image binarization is training artificial neural networks on pre-labelled ground truth data. As such, it faces the same issues as other, more conventional, classification problems; requiring a large amount of training data. However, unlike those conventional classification problems, document image binarization involves having to either manually craft or estimate the binarized ground truth data, which can be error-prone and time-consuming. This is where sample selection, the act of selecting training samples based on some method or metric, might help. By reducing the size of the training dataset in such a way that the binarization performance is not impacted, the required time spent creating the ground truth is also reduced. This thesis proposes a cluster-based sample selection method, based on previous work, that uses image similarity metrics and the relative neighbourhood graph to reduce the underlying redundancy of the dataset. The method is implemented with different clustering methods and similarity metrics for comparison, with the best implementation being based on affinity propagation and the structural similarity index. This implementation manages to reduce the training dataset by 46% while maintaining a performance that is equal to that of the complete dataset. The performance of this method is shown to not be significantly different from randomly selecting the same number of samples. However, due to limitations in the random method, such as unpredictable performance and uncertainty in how many samples to select, the use of sample selection in document image binarization still shows great promise.
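    The thesis clusters with affinity propagation over structural-similarity scores; as a much simpler stand-in for the same idea (keep one exemplar per group of near-duplicates, drop the redundant rest), a greedy threshold-based selection over a pairwise similarity matrix can be sketched like this. This is illustrative only, not the thesis's method:

    ```python
    def select_exemplars(sim, threshold):
        """Greedy similarity-based sample selection: repeatedly keep the
        remaining sample with the highest total similarity to the rest
        (the "exemplar"), then drop every remaining sample whose similarity
        to it exceeds `threshold`, i.e. its near-duplicates.
        `sim` is a symmetric matrix of pairwise similarities in [0, 1]."""
        remaining = list(range(len(sim)))
        keep = []
        while remaining:
            ex = max(remaining, key=lambda i: sum(sim[i][j] for j in remaining))
            keep.append(ex)
            remaining = [j for j in remaining if j != ex and sim[ex][j] <= threshold]
        return sorted(keep)
    ```

    With two near-duplicate samples and one distinct sample, only one of the duplicates survives, shrinking the training set while preserving its diversity.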

    Download full text (pdf)
    fulltext
  • 86.
    Krantz, Amandus
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Westphal, Florian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cluster-based Sample Selection for Document Image Binarization2019In: 2019 INTERNATIONAL CONFERENCE ON DOCUMENT ANALYSIS AND RECOGNITION WORKSHOPS (ICDARW), VOL 5, IEEE , 2019, p. 47-52Conference paper (Refereed)
    Abstract [en]

    The current state-of-the-art, in terms of performance, for solving document image binarization is training artificial neural networks on pre-labelled ground truth data. As such, it faces the same issues as other, more conventional, classification problems; requiring a large amount of training data. However, unlike those conventional classification problems, document image binarization involves having to either manually craft or estimate the binarized ground truth data, which can be error-prone and time-consuming. This is where sample selection, the act of selecting training samples based on some method or metric, might help. By reducing the size of the training dataset in such a way that the binarization performance is not impacted, the required time spent creating the ground truth is also reduced. This paper proposes a cluster-based sample selection method that uses image similarity metrics and the relative neighbourhood graph to reduce the underlying redundancy of the dataset. The method, implemented with affinity propagation and the structural similarity index, reduces the training dataset on average by 49.57% while reducing the binarization performance only by 0.55%.

    Download full text (pdf)
    fulltext
  • 87.
    Kuzminykh, Ievgeniia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Yevdokymenko, Maryna
    Kharkiv National University of Radio Electronics, UKR.
    A performance evaluation of sensor nodes in the home automation system based on arduino2019In: 2019 IEEE International Scientific-Practical Conference: Problems of Infocommunications Science and Technology, PIC S and T 2019 - Proceedings, Institute of Electrical and Electronics Engineers Inc. , 2019, p. 511-516, article id 9061442Conference paper (Refereed)
    Abstract [en]

    In this paper, a home automation system model was constructed with the purpose of investigating the correlation between the performance of wireless communication, the power consumption of constrained IoT devices, and security. A series of experiments was conducted using sensor nodes connected to an Arduino microcontroller and an RF 433 MHz wireless communication module. Measurements of the execution time and power consumption of the Arduino during data transfer with different security levels, as well as the analysis of the experimental results, were performed. The results show that the lifetime of the IoT device is determined by the communication speed and sleep mode management, and depends on encryption. The obtained results can be used to minimize the power consumption of the device and improve communication efficiency. The results also show that the applied security reduces the productivity and lifetime of the sensor node only insignificantly. © 2019 IEEE.

  • 88.
    Kuzminykh, Ievgeniia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Yevdokymenko, Maryna
    Kharkiv National University of Radio, UKR.
    Sokolov, V. Yu
    Borys Grinchenko Kyiv University, UKR.
    Investigation of the IoT Device Lifetime with Secure Data Transmission2019In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Galinina O.,Andreev S.,Koucheryavy Y.,Balandin S., Springer Verlag , 2019, Vol. 11660, p. 16-27Conference paper (Refereed)
    Abstract [en]

    This paper presents an approach for estimating the lifetime of IoT end devices. The novelty of this approach lies in taking into account not only the energy consumption for data transmission but also that for ensuring security by using encryption algorithms. The results of the study show the effect of using data encryption during transmission on the device lifetime, depending on the key length and the principles of the algorithm used. © 2019, Springer Nature Switzerland AG.
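    The kind of estimate described above usually starts from a duty-cycle model: average current is a weighted mix of the (transmit + encrypt) active current and the sleep current. The following is a minimal sketch of that model, not the paper's estimation approach; all parameter names are illustrative.

    ```python
    def device_lifetime_hours(battery_mah, sleep_ma, active_ma, duty_cycle):
        """Rough IoT node lifetime: battery capacity divided by average draw.
        `duty_cycle` is the fraction of time the node is active (transmitting
        and encrypting); stronger encryption lengthens the active phase and
        thus raises the effective duty cycle."""
        avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
        return battery_mah / avg_ma
    ```

    For example, a 1000 mAh battery with 10 mA active draw at a 10% duty cycle and negligible sleep current gives roughly 1000 hours.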

  • 89.
    Lang, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Secure Automotive Ethernet: Balancing Security and Safety in Time Sensitive Systems2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. As a result of the digital era, vehicles are being digitalised at a rapid pace. Autonomous vehicles and powerful infotainment systems are just parts of what is evolving within the vehicles. These systems require more information to be transferred within the vehicle networks. As a solution for this, Ethernet was suggested. However, Ethernet is a ’best effort’ protocol which cannot be considered reliable. To solve that issue, specific implementations were done to create Automotive Ethernet. However, the out-of-the-box vulnerabilities from Ethernet persist and need to be mitigated in a way that is suitable to the automotive domain.

    Objectives. This thesis investigates the vulnerabilities of Ethernet out-of-the-box and identifies which vulnerabilities cause the largest threat in regard to the safety of human lives and property. When such vulnerabilities are identified, possible mitigation methods using security measures are investigated. Once two security measures are selected, an experiment is conducted to see if those can manage the latency requirements.

    Methods. To achieve the goals of this thesis, literature studies were conducted to learn of any vulnerabilities and possible mitigations. Then, those results were used in an OMNeT++ experiment, making it possible to record latency in a simple automotive topology and then add the selected security measures to get a total latency. This latency must be less than 10 ms to be considered safe in cars.

    Results. In the simulation, the baseline communication is found to take 1.14957 ± 0.02053 ms. When adding a security measure latency, the total duration is found. For Hash-based Message Authentication Code (HMAC)-Secure Hash Algorithm (SHA)-512 the total duration is 1.192274 ms using the upper confidence interval. Elliptic Curve Digital Signature Algorithm (ECDSA)-ED25519 has a total latency of 3.108424 ms using the upper confidence interval.

    Conclusions. According to the results, both HMAC-SHA-512 and ECDSA-ED25519 are valid choices to implement as an integrity and authenticity security measure. However, these results are based on a simulation and should be verified using physical hardware to ensure that these measures are valid.
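    The per-message HMAC-SHA-512 overhead compared in the thesis can be approximated on any host with the standard library, as a sketch of the kind of measurement involved. This runs on the host machine, not in the thesis's OMNeT++ simulation, so absolute numbers will differ.

    ```python
    import hashlib
    import hmac
    import os
    import time

    def hmac_sha512_latency_ms(payload, iterations=1000):
        """Average time in ms to authenticate one message with HMAC-SHA-512,
        i.e. the per-frame overhead added on top of the base network latency."""
        key = os.urandom(32)  # illustrative shared symmetric key
        start = time.perf_counter()
        for _ in range(iterations):
            hmac.new(key, payload, hashlib.sha512).digest()
        return (time.perf_counter() - start) / iterations * 1000.0
    ```

    On modern hardware a single HMAC-SHA-512 over a short frame takes on the order of microseconds, consistent with it adding far less than the 10 ms budget.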

    Download full text (pdf)
    fulltext
  • 90.
    Li, Bing
    et al.
    Kunming Univ, CHN.
    Cheng, Wei
    Kunming Univ, CHN.
    Bie, Yiming
    Harbin Inst Technol, CHN.
    Sun, Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Capacity of Advance Right-Turn Motorized Vehicles at Signalized Intersections for Mixed Traffic Conditions2019In: Mathematical problems in engineering (Print), ISSN 1024-123X, E-ISSN 1563-5147, article id 3854604Article in journal (Refereed)
    Abstract [en]

    Right-turn motorized vehicles turn right using channelized islands, which are used to improve the capacity of intersections. For ease of description, these kinds of right-turn motorized vehicles are called advance right-turn motorized vehicles (ARTMVs) in this paper. The authors analyzed four aspects of traffic conflict involving ARTMVs with other forms of traffic flow. A capacity model of ARTMVs is presented here using shockwave theory and gap acceptance theory. The proposed capacity model was validated by comparison to the results of the observations based on data collected at a single intersection with channelized islands in Kunming, the Highway Capacity Manual (HCM) model and the VISSIM simulation model. To facilitate engineering applications, the relationship describing the capacity of the ARTMVs with reference to the distance between the conflict zone and the stop line and the relationship describing the capacity of the ARTMVs with reference to the effective red time of the nonmotorized vehicles moving in the same direction were analyzed. The authors compared these results to the capacity of no advance right-turn motorized vehicles (NARTMVs). The results show that the capacity of the ARTMVs is more sensitive to changes in the arrival rate of nonmotorized vehicles when that arrival rate is between 500 veh/h and 2000 veh/h than when it takes other values. In addition, the capacity of NARTMVs is greater than the capacity of ARTMVs when the nonmotorized vehicles have a higher arrival rate.

  • 91.
    Linné, Andreas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Evaluating the Impact of V-Ray Rendering Engine Settings on Perceived Visual Quality and Render Time: A Perceptual Study2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. In computer graphics, it can be a time-consuming process to render photorealistic images. This rendering process, called “physically based rendering”, uses complex algorithms to calculate the behavior of light. Fortunately, most renderers offer the possibility to alter the render settings, allowing for a decrease in render time, but this usually comes at the cost of a lower quality image.

    Objectives. This study aims to identify what setting has the highest impact on the rendering process in the V-Ray renderer. It also examines if a perceived difference can be seen when reducing this setting.

    Methods. To achieve this, an experiment was done where 22 participants would indicate their preference for rendered images. The images were rendered in V-Ray with different settings, which affected their respective render time differently. Additionally, an objective image metric was used to analyze the images and try to form a correlation with the subjective results.

    Results. The results show that the anti-aliasing setting had the highest impact on render time as well as user preference. It was found that participants preferred images with at least 25% to 50% anti-aliasing depending on the scene. The objective results also coincided well enough with the subjective results that it could be used as a faster analytical tool to measure the quality of a computer-generated image. Prior knowledge of rendering was also taken into account but did not give conclusive results about user preferences.

    Conclusions. From the results it can be concluded that anti-aliasing is the most important setting for achieving good subjective image quality in V-Ray. Additionally, the use of an objective image assessment tool can drastically speed up the process for targeting a specific visual quality goal.

    Download full text (pdf)
    Evaluating the Impact of V-Ray
  • 92.
    LIU, YUAN
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Machine Learning Approach to Forecasting Empty Container Volumes2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]
    • Background With the development of global trade, the volume of goods transported around the world is growing. Over 90% of world trade is carried by the shipping industry, with container shipping the most important mode. With the growth of trade imbalances, however, the repositioning of empty containers has become an important issue for shipping. Accurately predicting the volume of empty containers will greatly assist empty container repositioning plans.
    • Objectives The main aim of this study is to explore the effectiveness of machine learning in predicting empty container volumes, and to compare and analyze its performance against existing empirical methods and mathematical statistics methods.
    • Methods The main method of this study is an experiment. In this study I chose appropriate algorithm models and then trained and tested them. This study uses the same data sources as the industrial approach, using the same metrics to evaluate and compare the performance of machine learning methods and industrial methods. 
    • Results Through experiments, this study obtained the forecasting performance results of five machine learning algorithms, including the LASSO regression algorithm, on the Los Angeles Port and Long Beach Port datasets. The metrics are Mean Square Error (MSE) and Mean Absolute Error (MAE).
    • Conclusions LASSO Regression and Ridge Regression are the best machine learning algorithms for predicting the volume of empty containers. Compared to empirical methods, a single machine learning algorithm performs better and has better accuracy. However, compared with mature statistical methods such as time series analysis, a single machine learning algorithm performs worse. Machine learning needs to combine multiple models or select more highly correlated features to improve performance on this prediction problem. 
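    The two evaluation metrics named in the Results can be computed as follows (a minimal sketch; the study's actual evaluation pipeline is not specified here):

    ```python
    def mse(y_true, y_pred):
        """Mean Square Error: average of squared forecast errors."""
        return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

    def mae(y_true, y_pred):
        """Mean Absolute Error: average of absolute forecast errors."""
        return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
    ```

    MSE punishes large errors more heavily than MAE, which is why reporting both gives a fuller picture of a forecasting model's behavior.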
    Download full text (pdf)
    Machine Learning Approach
  • 93.
    Lundberg, Lars
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lennerstad, Håkan
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    García Martín, Eva
    Handling non-linear relations in support vector machines through hyperplane folding2019In: ACM International Conference Proceeding Series, Association for Computing Machinery , 2019, p. 137-141Conference paper (Refereed)
    Abstract [en]

    We present a new method, called hyperplane folding, that increases the margin in Support Vector Machines (SVMs). Based on the location of the support vectors, the method splits the dataset into two parts, rotates one part of the dataset and then merges the two parts again. This procedure increases the margin as long as the margin is smaller than half of the shortest distance between any pair of data points from the two different classes. We provide an algorithm for the general case with n-dimensional data points. A small experiment with three folding iterations on 3-dimensional data points with non-linear relations shows that the margin does indeed increase and that the accuracy improves with a larger margin. The method can use any standard SVM implementation plus some basic manipulation of the data points, i.e., splitting, rotating and merging. Hyperplane folding also increases the interpretability of the data. © 2019 Association for Computing Machinery.
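    The core manipulation described above, splitting the dataset, rotating one part, and merging, can be illustrated in 2-D. This is only a sketch of the geometric operation: the paper's algorithm works in n dimensions and chooses the split and rotation from the support-vector locations, which this toy version does not do.

    ```python
    import numpy as np

    def fold(points, split_x, angle):
        """One illustrative 2-D folding step: split the data at the vertical
        line x = split_x, rotate the right-hand part by `angle` radians about
        a pivot on the split line, and merge the two parts again."""
        pts = np.asarray(points, dtype=float)
        right = pts[:, 0] > split_x          # which points belong to the rotated part
        pivot = np.array([split_x, 0.0])
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s], [s, c]])    # standard 2-D rotation matrix
        pts[right] = (pts[right] - pivot) @ rot.T + pivot
        return pts
    ```

    Rotating the point (2, 0) by 90° about (1, 0) moves it to (1, 1) while points left of the split line stay fixed, which is exactly the split-rotate-merge behavior the method relies on to widen the margin.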

  • 94.
    Luro, Francisco Lopez
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Navarro, Diego
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ethical considerations for the use of virtual reality: An evaluation of practices in academia and industry2017In: Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments, ACM Digital Library, 2017, p. 141-148Conference paper (Refereed)
    Abstract [en]

    The following article offers a set of recommendations that are considered relevant for designing and executing experiences with Virtual Reality (VR) technology. It presents a brief review of the history and evolution of VR, along with the physiological issues related to its use. Additionally, typical practices in VR, used by both academia and industry are discussed and contrasted. These were further analysed from an ethical perspective, guided by legal and Corporate Social Responsibility (CSR) frameworks, to understand their motivation and goals, and the rights and responsibilities related to the exposure of research participants and final consumers to VR. Our results showed that there is a significant disparity between practices in academia and industry, and for industry specifically, there can be breaches of user protection regulations and poor ethical practices. The differences found are mainly in regards to the type of content presented, the overall setup of VR experiences, and the amount of information provided to participants or consumers respectively. To contribute to this issue, this study highlights some ethical aspects and also offers practical considerations that aim, not only to have more appropriate practices with VR in public spaces but also to motivate a discussion and reflection to ease the adoption of this technology in the consumer market.

  • 95.
    Luro, Francisco Lopez
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A comparative study of eye tracking and hand controller for aiming tasks in virtual reality2019In: Eye Tracking Research and Applications Symposium (ETRA), Association for Computing Machinery , 2019Conference paper (Refereed)
    Abstract [en]

    Aiming is key for virtual reality (VR) interaction, and it is often done using VR controllers. Recent eye-tracking integrations in commercial VR head-mounted displays (HMDs) call for further research on usability and performance aspects to better determine possibilities and limitations. This paper presents a user study exploring gaze aiming in VR compared to a traditional controller in an “aim and shoot” task. Different speeds of targets and trajectories were studied. Qualitative data was gathered using the system usability scale (SUS) and cognitive load (NASA TLX) questionnaires. Results show a lower perceived cognitive load using gaze aiming and an on-par usability score. Gaze aiming produced on-par task duration but lower accuracy in most conditions. Lastly, the trajectory of the target significantly affected the orientation of the HMD in relation to the target’s location. The results show potential for using gaze aiming in VR and motivate further research. © 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.

  • 96.
    Maksimov, Yulian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    AI Competition: Textual Tree2019Other (Refereed)
    Download full text (pdf)
    AI Competition: Textual Tree
  • 97.
    Maksimov, Yulian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Consortium Project: Textual Tree2019Other (Refereed)
    Download full text (pdf)
    Consortium Project - Textual Tree-v3
  • 98.
    Maksimov, Yuliyan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. FHNW University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Artifact Compatibility for Enabling Collaboration in the Artificial Intelligence Ecosystem2018In: Lecture Notes in Business Information Processing, Springer, 2018, Vol. 336, p. 56-71Conference paper (Refereed)
    Abstract [en]

    Different types of software components and data have to be combined to solve an artificial intelligence challenge. An emerging marketplace for these components will allow for their exchange and distribution. To facilitate and boost the collaboration on the marketplace a solution for finding compatible artifacts is needed. We propose a concept to define compatibility on such a marketplace and suggest appropriate scenarios on how users can interact with it to support the different types of required compatibility. We also propose an initial architecture that derives from and implements the compatibility principles and makes the scenarios feasible. We matured our concept in focus group workshops and interviews with potential marketplace users from industry and academia. The results demonstrate the applicability of the concept in a real-world scenario.

    Download full text (pdf)
    fulltext
  • 99.
    Maksimov, Yuliyan V.
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Framework for Analysis of Multi-Party Collaboration2019In: Proceedings - 2019 IEEE 27th International Requirements Engineering Conference Workshops, REW 2019, IEEE Computer Society Digital Library, 2019, p. 44-53, article id 8933635Conference paper (Refereed)
    Abstract [en]

    In recent years, platforms have become important for allowing ecosystems to emerge that allow users to collaborate and create unprecedented forms of innovation. For the platform provider, the ecosystem represents a massive business opportunity if the platform succeeds to make the collaborations among the users value-creating and to facilitate trust. While the requirements flow for evolving existing ecosystems is understood, it is unclear how to analyse an ecosystem that is to be. In this paper, we draw on recent work on collaboration modelling in requirements engineering and propose an integrated framework for the analysis of multi-party collaboration that is to be supported by a platform. Drawing on a real-world case, we describe how the framework is applied and the results that have been obtained with it. The results indicate that the framework was useful to understand the ecosystem context for a planned platform in the domain of artificial intelligence, allowed identification of platform requirements and offered a basis to plan validation.

    Download full text (pdf)
    fulltext
  • 100.
    Manchala, Sai Srivatsava
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Machine Learning Techniques To Analyze Operator’s Behavior2020Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background: With savvier management teams, airlines are becoming more stable, more productive, and more profitable. The problems plaguing the aviation industry, however, have not gone away and have instead become more complicated. Schedule recovery is the process of recovering from these issues (also known as operating disturbances). The recovery solver from Jeppesen is a software tool that produces a set of solutions to solve these operational disruptions.

    Objectives: In this research work, we review the literature related to disruptions in airlines to understand the state of the art of applying machine learning to decrease the recovery time. The primary goal of this research work is to extensively analyze the Jeppesen airline system and the recovery solver, which plays an important role and is used when disturbances occur. In the case of a disruption, the recovery solver provides several solutions. The operator can either solve it manually, use a solution created by the recovery solver, or use a combination to solve a disturbance. The research also focuses on identifying various machine learning algorithms that can be used to answer two questions: "Will the operator use the solver?" and "If the operator uses the solver, which solution will he prefer?"

    Methods: First, a literature review is performed to identify effective machine learning algorithms; then, based on those findings, an experiment is conducted to test the chosen machine learning algorithms. Due to unbalanced classes in the dataset, an experiment is performed to generate a synthetic dataset that is similar to the ground truth. The various steps of the experimentation phase, such as data collection, preprocessing and training, are described in detail. We also test the performance of various machine learning algorithms.

    Results: The results are presented in conjunction with the literature review and the experiments performed to answer research questions. The performance of the models is then measured using different performance metrics.

    Conclusions: We finish the research work with an overall review of the sections in the paper. By evaluating the obtained results in the real-world scenario this study targets, it can be inferred that the neural network models and the SVM model do not significantly improve predictive performance compared to the XGBoost model.
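    For the class-imbalance problem mentioned in the Methods, a common way to generate synthetic minority samples is SMOTE-style interpolation between existing ones. The thesis does not specify its generation method, so the following is only a hedged illustration of the general technique:

    ```python
    import random

    def smote_like_oversample(minority, n_new, seed=0):
        """SMOTE-style sketch: create `n_new` synthetic minority-class samples
        by linearly interpolating between random pairs of existing samples.
        `minority` is a list of feature tuples from the under-represented class."""
        rng = random.Random(seed)
        synthetic = []
        for _ in range(n_new):
            a, b = rng.sample(minority, 2)       # pick two distinct real samples
            t = rng.random()                     # interpolation factor in [0, 1)
            synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
        return synthetic
    ```

    Every synthetic point lies on a segment between two real minority samples, so the new data stays inside the region the minority class already occupies.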

    Download full text (pdf)
    Machine Learning Techniques ...