1 - 50 of 151 results
  • 1. Abghari, Shahrooz
    et al.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Brage, Jens
    NODA Intelligent Systems AB, SWE.
    Johansson, Christian
    NODA Intelligent Systems AB, SWE.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lavesson, Niklas
    Jönköping University, SWE.
    Higher order mining for monitoring district heating substations. 2019. In: Proceedings - 2019 IEEE International Conference on Data Science and Advanced Analytics, DSAA 2019, Institute of Electrical and Electronics Engineers Inc., 2019, p. 382-391. Conference paper (Refereed)
    Abstract [en]

    We propose a higher order mining (HOM) approach for modelling, monitoring and analyzing district heating (DH) substations' operational behaviour and performance. HOM is concerned with mining over patterns rather than primary or raw data. The proposed approach uses a combination of different data analysis techniques, such as sequential pattern mining, clustering analysis, consensus clustering and minimum spanning trees (MST). Initially, a substation's operational behaviour is modelled by extracting weekly patterns and performing clustering analysis. The substation's performance is monitored by assessing its modelled behaviour for every two consecutive weeks. If a significant difference is observed, further analysis is performed by integrating the built models into a consensus clustering and applying an MST to identify deviating behaviours. The results of the study show that our method is robust for detecting deviating and sub-optimal behaviours of DH substations. In addition, the proposed method can facilitate domain experts' interpretation and understanding of the substations' behaviour and performance by providing different data analysis and visualization techniques. © 2019 IEEE.
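The deviation-identification step described above, a minimum spanning tree over clustered behaviour models, can be illustrated with a minimal pure-Python sketch. The weight matrix below is an invented toy example, not the paper's substation data:

```python
import heapq

def prim_mst(weights):
    """Compute an MST over a dense symmetric weight matrix (Prim's algorithm).

    weights[i][j] is the distance between behaviour prototypes i and j.
    Returns a list of (weight, u, v) edges forming the tree.
    """
    n = len(weights)
    visited = [False] * n
    visited[0] = True
    edges = []
    # Candidate edges (weight, from, to), seeded from vertex 0.
    heap = [(weights[0][j], 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    while heap and len(edges) < n - 1:
        w, u, v = heapq.heappop(heap)
        if visited[v]:
            continue
        visited[v] = True
        edges.append((w, u, v))
        for j in range(n):
            if not visited[j]:
                heapq.heappush(heap, (weights[v][j], v, j))
    return edges

# Toy distances: prototypes 0-2 form a tight cluster, prototype 3 is far away.
W = [
    [0, 1, 2, 9],
    [1, 0, 1, 8],
    [2, 1, 0, 9],
    [9, 8, 9, 0],
]
mst = prim_mst(W)
longest = max(mst)  # the heaviest MST edge points at the outlier
```

In the toy matrix the heaviest tree edge is the one reaching prototype 3, which is how a long MST edge singles out a deviating behaviour.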

  • 2.
    Adamov, Alexander
    et al.
    NioGuard Security Lab, UKR.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Surmacz, Tomasz
    Wrocław University of Science and Technology, POL.
    An analysis of LockerGoga ransomware. 2019. In: 2019 IEEE East-West Design and Test Symposium, EWDTS 2019, Institute of Electrical and Electronics Engineers Inc., 2019. Conference paper (Refereed)
    Abstract [en]

    This paper contains an analysis of the LockerGoga ransomware that was used in a range of targeted cyberattacks in the first half of 2019 against Norsk Hydro, a world top-5 aluminum manufacturer, as well as the US chemical enterprises Hexion and Momentive; those companies are only the tip of the iceberg among those that reported the attack to the public. The ransomware was executed by attackers from inside a corporate network to encrypt the data on enterprise servers and thus take down the information control systems. The intruders asked for a ransom to release a master key and decryption tool that can be used to decrypt the affected files. The purpose of the analysis is to find out the tactics and techniques used by the LockerGoga ransomware during the cryptolocker attack, as well as its encryption model, to answer the question of whether the encrypted files can be decrypted with or without paying a ransom. The scientific novelty of the paper lies in an analysis methodology that is based on various reverse engineering techniques, such as multi-process debugging and using the open source code of a cryptographic library, to find out a ransomware's encryption model. © 2019 IEEE.

  • 3.
    Ahlgren, Filip
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Local And Network Ransomware Detection Comparison. 2019. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. Ransomware is a malicious application that encrypts important files on a victim's computer. The ransomware will ask the victim for a ransom to be paid through cryptocurrency. After the system is encrypted, there is virtually no way to decrypt the files other than using the encryption key that is bought from the attacker.

    Objectives. In this practical experiment, we will examine how machine learning can be used to detect ransomware on a local and network level. The results will be compared to see which one has a better performance.

    Methods. Data is collected through malware and goodware databases and then analyzed in a virtual environment to extract system information and network logs. Different machine learning classifiers will be built from the extracted features in order to detect the ransomware. The classifiers will go through a performance evaluation and be compared with each other to find which one has the best performance.

    Results. According to the tests, local detection was both more accurate and stable than network detection. The local classifiers had an average accuracy of 96% while the best network classifier had an average accuracy of 89.6%.

    Conclusions. In this case the results show that local detection has better performance than network detection. However, this may be because the network features were not specific enough for a network classifier. The network performance could have been better if the ransomware samples had consisted of fewer families, so that better features could have been selected.

  • 4.
    Ahmadi Mehri, Vida
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards Secure Collaborative AI Service Chains. 2019. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    At present, Artificial Intelligence (AI) systems have been adopted in many different domains, such as healthcare, robotics, automotive, telecommunication systems, security, and finance, for integrating intelligence into their services and applications. Intelligent personal assistants such as Siri and Alexa are examples of AI systems making an impact on our daily lives. Since many AI systems are data-driven, they require large volumes of data for training and validation, advanced algorithms, and computing power and storage in their development process. Collaboration in the AI development process (the AI engineering process) will reduce the cost and time to market for AI applications. However, collaboration introduces concerns about privacy and piracy of intellectual property, which can be caused by the actors who collaborate in the engineering process. This work investigates the non-functional requirements, such as privacy and security, for enabling collaboration in AI service chains. It proposes an architectural design approach for collaborative AI engineering and explores the concept of the pipeline (service chain) for chaining AI functions. In order to enable controlled collaboration between AI artefacts in a pipeline, this work makes use of virtualisation technology to define and implement Virtual Premises (VPs), which act as protection wrappers for AI pipelines. A VP is a virtual policy enforcement point for a pipeline and requires access permission and authenticity for each element in a pipeline before the pipeline can be used. Furthermore, the proposed architecture is evaluated in a use-case approach that enables quick detection of design flaws during the initial stage of implementation. To evaluate the security level and compliance with security requirements, threat modelling was used to identify potential threats and vulnerabilities of the system and analyse their possible effects. The output of threat modelling was used to define countermeasures to threats related to unauthorised access and execution of AI artefacts.
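The Virtual Premise idea, a policy enforcement point that checks access permission and authenticity for each pipeline element before anything runs, can be sketched roughly as follows. The class, pipeline steps and checksum scheme below are invented for illustration and are not the thesis's actual design:

```python
import hashlib

class VirtualPremise:
    """Toy policy-enforcement wrapper for an AI pipeline: every element
    must be permitted, and its artefact bytes must match a known SHA-256
    checksum, before it is allowed to execute."""

    def __init__(self, permitted, checksums):
        self.permitted = set(permitted)  # element names allowed to run
        self.checksums = checksums       # name -> expected SHA-256 hex digest

    def _authentic(self, name, artefact_bytes):
        return self.checksums.get(name) == hashlib.sha256(artefact_bytes).hexdigest()

    def run(self, pipeline, data):
        for name, artefact_bytes, fn in pipeline:
            if name not in self.permitted:
                raise PermissionError(f"{name} is not permitted in this premise")
            if not self._authentic(name, artefact_bytes):
                raise ValueError(f"{name} failed the authenticity check")
            data = fn(data)
        return data

# Hypothetical two-step pipeline; b"v1" stands in for the artefact bytes.
step1 = ("normalise", b"v1", lambda xs: [x / 10 for x in xs])
step2 = ("threshold", b"v1", lambda xs: [x > 0.5 for x in xs])
vp = VirtualPremise(
    permitted=["normalise", "threshold"],
    checksums={
        "normalise": hashlib.sha256(b"v1").hexdigest(),
        "threshold": hashlib.sha256(b"v1").hexdigest(),
    },
)
out = vp.run([step1, step2], [3, 7])
```

An element with a tampered artefact or a missing permission would raise before executing, which is the gatekeeping behaviour the VP concept describes.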

  • 5.
    Andres, Bustamante
    et al.
    Tecnológico de Monterrey, MEX.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Rodriguez-Garcia, Alejandro
    Tecnológico de Monterrey, MEX.
    Digital Image Processing and Development of Machine Learning Models for the Discrimination of Corneal Pathology: An Experimental Model. 2019. Conference paper (Refereed)
  • 6.
    Angelova, Milena
    et al.
    Technical University of Sofia, BUL.
    Vishnu Manasa, Devagiri
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Linde, Peter
    Blekinge Institute of Technology, The Library.
    Lavesson, Niklas
    An Expertise Recommender System based on Data from an Institutional Repository (DiVA). 2019. In: Connecting the Knowledge Common from Projects to sustainable Infrastructure: The 22nd International conference on Electronic Publishing - Revised Selected Papers / [ed] Leslie Chan, Pierre Mounier, OpenEdition Press, 2019, p. 135-149. Chapter in book (Refereed)
    Abstract [en]

    Finding experts in academia is an important practical problem, e.g. recruiting reviewers for reviewing conference, journal or project submissions, partner matching for research proposals, finding relevant M.Sc. or Ph.D. supervisors etc. In this work, we discuss an expertise recommender system that is built on data extracted from the Blekinge Institute of Technology (BTH) instance of the institutional repository system DiVA (Digital Scientific Archive). DiVA is a publication and archiving platform for research publications and student essays used by 46 publicly funded universities and authorities in Sweden and the rest of the Nordic countries (www.diva-portal.org). The DiVA classification system is based on the Swedish Higher Education Authority (UKÄ) and Statistics Sweden's (SCB) three-level classification system. Using the classification terms associated with student M.Sc. and B.Sc. theses published in the DiVA platform, we have developed a prototype system which can be used to identify and recommend subject thesis supervisors in academia.
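The matching step the abstract describes, pairing a thesis's classification terms against supervisors' accumulated term profiles, can be sketched as simple set-overlap scoring. The supervisor names, profiles and terms below are invented examples, not DiVA data:

```python
def recommend_supervisors(thesis_terms, profiles, top_n=2):
    """Rank supervisors by Jaccard overlap between their accumulated
    classification terms and the terms of a new thesis."""
    thesis = set(thesis_terms)
    scored = []
    for name, terms in profiles.items():
        score = len(thesis & terms) / len(thesis | terms)
        scored.append((score, name))
    scored.sort(reverse=True)  # highest overlap first
    return [name for _, name in scored[:top_n]]

# Hypothetical supervisor profiles built from previously supervised theses.
profiles = {
    "Supervisor A": {"machine learning", "clustering", "data mining"},
    "Supervisor B": {"network security", "cryptography"},
    "Supervisor C": {"machine learning", "computer vision"},
}
ranked = recommend_supervisors({"machine learning", "clustering"}, profiles)
```

A real system would weight terms (e.g. by recency or thesis count) rather than treat them as flat sets, but the ranking principle is the same.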

  • 7.
    Anwar, Mahwish
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Connect2SmallPorts: An EU Project on Digitalizing Small Ports of the South Baltic Region. Poster ID: P20-20620. 2020. Other (Other (popular science, discussion, etc.))
    Abstract [en]

    Ports play a pivotal role in the global supply chain network. To strengthen the ports’ business and to keep up with the overall economic development of the country, port stakeholders have started to invest in digital technologies. Ports, as well as individual municipalities in Europe, are in competition with other ports within the region. One of the competing factors is the port’s technological development. Unlike large ports, small ports within Europe, for example the Port of Karlskrona, the Port of Wismar or the Port of Klaipeda, which also serve as crucial nodes within the trade flow for Sweden, Germany and Lithuania respectively, lack the knowledge and tools to leverage the potential of digital technologies. The digital disruption at ports is inevitable!

    With that established, the scope of the project, "South Baltic Small Ports as Gateways towards Integrated Sustainable European Transport System and Blue Growth by Smart Connectivity Solutions" (Connect2SmallPorts), is to understand how to facilitate small and medium ports of the South Baltic region with the digital technologies Blockchain and Internet of Things. During the project lifetime (2018 to 2021), the main goals are to perform a digital audit of small ports of the South Baltic region; to prepare an implementation strategy for Blockchain and Internet of Things specifically for the small ports in the South Baltic region; and to conduct an evaluation of the proposed strategies. The project correspondingly aims to disseminate the knowledge and experiences through research publications, industrial conferences and international tradeshows.

  • 8.
    Anwar, Mahwish
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Connect2smallports Project: South Baltic Small Ports – Gateway to Integrated and Sustainable European Transport System: Project brief and updates on the project activities: Digital Audit. Blockchain Design Strategy. Call for Collaboration. Reports and scientific publications. 2019. Other (Other (popular science, discussion, etc.))
    Abstract [en]

    Ports play a pivotal role in the global supply chain network. To strengthen the ports’ business and to keep up with the overall economic development of the country, port stakeholders have started to invest in digital technologies. Ports, as well as individual municipalities in Europe, are in competition with other ports within the region. One of the competing factors is the port’s technological development. Unlike large ports, small ports within Europe, for example the Port of Karlskrona, the Port of Wismar or the Port of Klaipeda, which also serve as crucial nodes within the trade flow for Sweden, Germany and Lithuania respectively, lack the knowledge and tools to leverage the potential of digital technologies. The digital disruption at ports is inevitable!

    With that established, the scope of the project, "South Baltic Small Ports as Gateways towards Integrated Sustainable European Transport System and Blue Growth by Smart Connectivity Solutions" (Connect2SmallPorts), is to understand how to facilitate small and medium ports of the South Baltic region with the digital technologies Blockchain and Internet of Things. During the project lifetime (2018 to 2021), the main goals are to perform a digital audit of small ports of the South Baltic region; to prepare an implementation strategy for Blockchain and Internet of Things specifically for the small ports in the South Baltic region; and to conduct an evaluation of the proposed strategies. The project correspondingly aims to disseminate the knowledge and experiences through research publications, industrial conferences and international tradeshows.

  • 9.
    Anwar, Mahwish
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Digitalization in Container Terminal Logistics: A Literature Review. 2019. In: 27th Annual Conference of International Association of Maritime Economists (IAME), 2019, p. 1-25, article id 141. Conference paper (Refereed)
    Abstract [en]

    Many terminals located in large ports, such as the Port of Rotterdam, the Port of Singapore and the Port of Hamburg, employ various emerging digital technologies to handle containers and information. Some technologies deemed attractive by large ports are Artificial Intelligence (AI), Cloud Computing, Blockchain and the Internet of Things (IoT). The objective of this paper is to review the state of the art of scientific literature on digital technologies that facilitate operations management for container terminal logistics. The studies are synthesized in the form of a classification matrix, and an analysis is performed. The primary studies consisted of 57 papers, out of an initial pool of over 2100 findings. Over 94% of the identified publications focused on AI, while 29% exploited IoT and Cloud Computing technologies combined. Research on Blockchain within the context of container terminals was nonexistent. The majority of the publications utilized numerical experiments and simulation for validation. A large amount of the scientific literature was dedicated to resource management and the scheduling of intra-logistic equipment, vessels, berths or container storage in the yard. Results drawn from the literature survey indicate that various research gaps exist. A discussion and an analysis of the review are presented, which could be of benefit for stakeholders of small and medium-sized container terminals.

  • 10.
    Anwar, Mahwish
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    The feasibility of Blockchain solutions in the maritime industry. 2019. Conference paper (Other academic)
    Abstract [en]

    Purpose / Value

    The concept of Blockchain technology in supply chain management is well discussed, yet inadequately theorized in terms of its applicability, especially within the maritime industry, which forms a fundamental node of the entire supply chain network. More so, the assumptive grounds associated with the technology have not been openly articulated, leading to unclear ideas about its applicability.

    Design/methodology/approach

    The research is designed in two stages. This paper (Stage One) uses an enhanced literature review for data collection in order to gauge the properties of Blockchain technology, and to understand and map those characteristics onto the Bill of Lading process within the maritime industry. In Stage Two, an online questionnaire is conducted to assess the feasibility of Blockchain technology for different maritime use-cases.

    Findings

    The research that was collected and analysed, partly from a deliverable in the Connect2SmallPort Project and partly from other literature, suggests that Blockchain can be an enabler for improving the maritime supply chain. The use-case presented in this paper highlights the practicality of the technology. It was identified that Blockchain possesses characteristics suitable to mitigate the risks and issues pertaining to the paper-based Bill of Lading process.

    Research limitations

    The study will mature further after the execution of Stage Two. By the end of both stages, a framework for Blockchain adoption with a focus on the maritime industry will be proposed.

    Practical implications

    The proposed outcome indicates the practicality of the technology, which could be beneficial for port stakeholders that wish to use Blockchain in processing Bills of Lading or contracts.

    Social implications

    The study may influence decision makers to consider the benefits of using Blockchain technology, thereby creating opportunities for the maritime industry to leverage the technology with government support.

  • 11.
    Arredal, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Eye Tracking’s Impact on Player Performance and Experience in a 2D Space Shooter Video Game. 2018. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. Although a growing market, most of the commercially available games today that feature eye tracking support are rendered in a 3D perspective. Games rendered in 2D have seen little support for eye trackers from developers. By comparing the differences in player performance and experience between an eye tracker and a computer mouse when playing a classic 2D genre, the space shooter, this thesis aims to make an argument for the implementation of eye tracking in 2D video games.

    Objectives. Create a 2D space shooter video game where movement is handled through a keyboard, but the input method for aiming alternates between a computer mouse and an eye tracker.

    Methods. Using a Tobii EyeX eye tracker, an experiment was conducted with fifteen participants. To measure their performance, three variables were used: accuracy, completion time and collisions. The participants played two modes of a 2D space shooter video game in a controlled environment. Depending on which mode was played, the input method for aiming was either an eye tracker or a computer mouse. The movement was handled using a keyboard in both modes. When the modes had been completed, a questionnaire was presented where the participants rated their experience playing the game with each input method.

    Results. The computer mouse had better performance in two out of three performance variables. On average the computer mouse had better accuracy and completion time but more collisions. However, the data gathered from the questionnaire shows that the participants had, on average, a better experience when playing with an eye tracker.

    Conclusions. The results from the experiment show better performance for participants using the computer mouse, but participants felt more immersed with the eye tracker and gave it a better score in all experience categories. With these results, this study hopes to encourage developers to implement eye tracking as an interaction method for 2D video games. However, future work is necessary to determine whether the experience and performance increase or decrease as the playtime gets longer.

  • 12.
    Avdic, Adnan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ekholm, Albin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Anomaly Detection in an e-Transaction System using Data Driven Machine Learning Models: An unsupervised learning approach in time-series data. 2019. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background: Detecting anomalies in time-series data is a task that can be done with the help of data-driven machine learning models. This thesis investigates if, and how well, different machine learning models with an unsupervised approach can detect anomalies in the e-Transaction system Ericsson Wallet Platform. The anomalies in our domain context are delays in the system.

    Objectives: The objective of this thesis work is to compare four different machine learning models in order to find the most relevant one. The best performing models are decided by the evaluation metric F1-score. An intersection of the best models is also evaluated in order to decrease the number of false positives and make the model more precise.

    Methods: A relevant time-series data sample with 10-minute-interval data points from the Ericsson Wallet Platform was used. A number of steps were taken, such as data handling, pre-processing, normalization, training and evaluation. Two relevant features were trained separately as one-dimensional data sets. The two features that are relevant when finding delays in the system, and which were used in this thesis, are Mean wait (ms) and Mean * N, where N is equal to the number of calls to the system. The evaluation metrics that were used are true positives, true negatives, false positives, false negatives, accuracy, precision, recall, F1-score and the Jaccard index. The Jaccard index is a metric which reveals how similar the algorithms are in their detections. Since the detection is binary, it classifies each data point in the time-series data.

    Results: The results reveal the two best performing models with regard to the F1-score. The intersection evaluation reveals if, and how well, a combination of the two best performing models can reduce the number of false positives.

    Conclusions: The conclusion of this work is that some algorithms perform better than others. It is a proof of concept that such classification algorithms can separate normal from non-normal behavior in the domain of the Ericsson Wallet Platform.
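The point-wise evaluation metrics this thesis relies on, the F1-score and the Jaccard index over binary anomaly labels, follow directly from the confusion counts. A minimal sketch with invented label sequences (not Ericsson Wallet Platform data):

```python
def binary_metrics(y_true, y_pred):
    """F1-score and Jaccard index for point-wise binary anomaly labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return f1, jaccard

# Hypothetical ground truth and one detector's output over eight windows:
truth = [0, 0, 1, 1, 0, 1, 0, 0]
model_a = [0, 0, 1, 1, 0, 0, 0, 1]  # misses one delay, raises one false alarm
f1, jac = binary_metrics(truth, model_a)
```

Intersecting two detectors (flagging a point only when both agree) can only remove positives, which is why the thesis uses it to cut false positives at the cost of possible extra false negatives.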

  • 13.
    Bandari Swamy Devender, Vamshi Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Adike, Sneha
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Design and Performance of an Event Handling and Analysis Platform for vSGSN-MME event using the ELK stack. 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Data logging is the main activity to be considered in keeping a server or database in working condition without errors or failures. Data collection can be automatic, so no human presence is necessary. Storing log data for many days and visualizing it has become a huge problem in recent times. As for the SGSN-MME node, which is the main component of the GPRS network, it handles all packet-switched data within the mobile operator's network. A lot of log data is generated and stored in file systems on the redundant File Server Boards in the SGSN-MME node. The evolution of the SGSN-MME is taking it from dedicated, purpose-built hardware into virtual machines in the cloud, where virtual file server boards fit very badly. The purpose of this thesis is to give a better way to store the log data and add visualization using the ELK stack. Fetching useful information from logs is one of the most important parts of this stack and is done in Logstash using its grok filters and a set of input, filter and output plug-ins, which help scale this functionality to take various kinds of inputs (file, TCP, UDP, gemfire, stdin, UNIX, web sockets, and even IRC and Twitter, among others), filter them (using groks, grep, date filters etc.) and finally write the output to Elasticsearch. The research methodology involved in carrying out this thesis work is a qualitative approach. A study is carried out comparing the ELK concept with the legacy approach at Ericsson. A suitable approach and the best possible solution for storing the log data are given for the vSGSN-MME node, and the performance under multiple input providers is analysed using graphs of the results. To perform the tests accurately, readings are taken in defined failure scenarios. From the test cases, a plot of the CPU load in vSGSN-MME is provided, which points out the most suitable and promising way.
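Logstash grok filters are, in essence, named regular expressions applied to log lines. A rough Python analogue of that extraction step is sketched below; the log line format and field names are invented for illustration and are not the actual vSGSN-MME log format:

```python
import re

# A grok pattern such as "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} ..."
# compiles down to a named-group regex; this mimics that for an invented format.
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+) "
    r"board=(?P<board>\S+) "
    r"(?P<message>.*)"
)

def parse_line(line):
    """Extract structured fields from one log line, or None on mismatch."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

event = parse_line("2019-05-04 12:30:01 ERROR board=fsb-2 disk quota exceeded")
```

In the real pipeline the resulting field dictionary is what Logstash ships to Elasticsearch, where Kibana can then aggregate and visualize it.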

  • 14.
    Bengtsson, Daniel
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Jursenaite, Giedre
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A user study to analyse the experience of augmented reality board games. 2019. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. Augmented reality (AR) is a variant of virtual reality (VR), but where VR replaces reality with a virtual one, AR expands it, allowing the user to see the real world and virtual information at the same time. Many have tried to adapt this technology for video and board games, and although there are plenty of AR video games and mobile game applications, no one is selling AR board games. Studies appear from time to time trying to enhance board games with AR through graphics and extra information about player statistics, but not many adapt game logic in AR games.

    Objectives. A literature review was conducted on related topics, building the theoretical background. Then a multiplayer board game that could be played both with and without AR was created. The game was created in the Unity Engine using the Vuforia Engine for the AR, and assets were created for the AR. Design game logic with Unity and player interaction with AR. Create the analog assets. Conduct a user study for participants to rate the experience, and analyse the data gathered from the user study.

    Methods. A user study was conducted with twelve participants who played two versions of the same board game within a controlled environment. One version was analog, and the other featured AR. After each version, the participants answered a questionnaire about the experience, as described in the Game Experience Questionnaire.

    Results. The results show that participants thought the AR board game was a fun and interesting take on traditional board games. However, participants also thought that the AR instability and the discomfort of holding up a mobile phone while playing made for a worse experience. The statistical results also concluded that there was no significant difference between the AR and non-AR board game versions.

    Conclusions. With the results gathered, the experience was more or less the same. Participants thought the AR version of the board game was fun and interesting because it improved their sense of discovery and imagination. However, because the AR felt unstable and uncomfortable, it disrupted the game flow and player immersion. With better implementation and a more suited device, AR could be enjoyable.

  • 15.
    Bergenholtz, Erik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Moss, Andrew
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Finding a needle in a haystack: A comparative study of IPv6 scanning methods. 2019. In: 2019 International Symposium on Networks, Computers and Communications (ISNCC 2019), IEEE, 2019. Conference paper (Refereed)
    Abstract [en]

    It has previously been assumed that the size of an IPv6 network would make it impossible to scan the network for vulnerable hosts. Recent work has shown this to be false, and several methods for scanning IPv6 networks have been suggested. However, most of these are based on external information like DNS, or on pattern inference, which requires large amounts of known IP addresses. In this paper, DeHCP, a novel approach based on delimiting IP ranges with closely clustered hosts, is presented and compared to three previously known scanning methods. The method is shown to work in an experimental setting with results comparable to those of the previously suggested methods, and is also shown to have the advantage of not being limited to a specific protocol or probing method. Finally, we show that the scan can be executed across multiple VLANs.
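The core idea behind cluster-based IPv6 scanning, probing a small window around known-alive addresses instead of sweeping the 2^64 host space of a /64, can be sketched with Python's standard ipaddress module. This illustrates the general idea only, not DeHCP's actual delimiting algorithm:

```python
import ipaddress

def neighbour_candidates(seed, radius=4):
    """Yield addresses near a known-alive IPv6 address, as probe candidates.

    Dense-cluster scanners exploit the fact that administrators often
    assign sequential interface IDs, so a small window around one hit is
    far more likely to contain hosts than a random 64-bit interface ID.
    """
    base = int(ipaddress.IPv6Address(seed))
    for offset in range(-radius, radius + 1):
        if offset != 0:
            yield str(ipaddress.IPv6Address(base + offset))

candidates = list(neighbour_candidates("2001:db8::10", radius=2))
```

A scanner would probe these candidates, and each new response would seed a further window, growing the delimited range as long as the cluster stays dense.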

  • 16.
    Bergman Martinkauppi, Louise
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    He, Qiuping
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance Evaluation and Comparison of Standard Cryptographic Algorithms and Chinese Cryptographic Algorithms. 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background. China is regulating the import, export, sale, and use of encryption technology in China. If any foreign company wants to develop or release a product in China, they need to report their use of any encryption technology to the Office of State Commercial Cryptography Administration (OSCCA) to gain approval. SM2, SM3, and SM4 are cryptographic standards published by OSCCA and are authorized to be used in China. To comply with Chinese cryptography laws, organizations and companies may have to replace standard cryptographic algorithms in their systems with Chinese cryptographic algorithms, such as SM2, SM3, and SM4. It is important to know beforehand how the replacement of algorithms will impact performance in order to determine future system costs. Objectives. Perform a theoretical study and performance comparison of the standard cryptographic algorithms and the Chinese cryptographic algorithms. The standard cryptographic algorithms studied are RSA, ECDSA, SHA-256, and AES-128, and the Chinese cryptographic algorithms studied are SM2, SM3, and SM4. Methods. A literature analysis was conducted to gain knowledge and collect information about the selected cryptographic algorithms in order to make a theoretical comparison of the algorithms. An experiment was conducted to measure how the algorithms perform and to be able to rate them. Results. The literature analysis provides a comparison that identifies design similarities and differences between the algorithms. The controlled experiment provides measurements of the metrics of the algorithms mentioned in the objectives. Conclusions. The conclusions are that the digital signature algorithms SM2 and ECDSA have similar designs and also similar performance. SM2 and RSA have fundamentally different designs, and SM2 performs better than RSA when generating keys and signatures. When verifying signatures, RSA shows comparable performance in some cases and worse performance in others.
    Hash algorithms SM3 and SHA-256 have many design similarities, but SHA-256 performs slightly better than SM3. AES-128 and SM4 have many similarities but also a few differences. In the controlled experiment, AES-128 outperforms SM4 by a significant margin.
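
The experimental method described (measuring how the algorithms perform in order to rate them) can be sketched as a micro-benchmark; the snippet below times SHA-256 throughput with Python's hashlib as an illustration (SM3 and SM4 would require an additional cryptography library and are omitted here).

```python
# Minimal sketch of a hash-throughput micro-benchmark, shown with
# hashlib's SHA-256 only; SM3 is not portably available in the
# standard library and is therefore omitted.
import hashlib
import time

def hash_throughput(algorithm, size_bytes, repetitions=50):
    data = b"\x00" * size_bytes
    start = time.perf_counter()
    for _ in range(repetitions):
        hashlib.new(algorithm, data).digest()
    elapsed = time.perf_counter() - start
    return size_bytes * repetitions / elapsed / 1e6  # MB/s

for size in (1024, 65536, 1048576):
    print(f"sha256 {size:>8} B: {hash_throughput('sha256', size):.1f} MB/s")
```

Running the same loop for each algorithm and message size gives the comparable per-algorithm measurements the thesis's controlled experiment relies on.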

  • 17.
    Bertoni, Alessandro
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mechanical Engineering. Blekinge Institute of Technology.
    Hallstedt, Sophie
    Blekinge Institute of Technology, Faculty of Engineering, Department of Strategic Sustainable Development. Blekinge Institute of Technology.
    Dasari, Siva Krishna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge Institute of Technology.
    Andersson, Petter
    GKN Aerospace Engine Systems, SWE.
    Integration of Value and Sustainability Assessment in Design Space Exploration by Machine Learning: An Aerospace Application2020In: Design ScienceArticle in journal (Refereed)
    Abstract [en]

    The use of decision-making models in the early stages of the development of complex products and technologies is a well-established practice in industry. Engineers rely on well-established statistical and mathematical models to explore the feasible design space and make early decisions on future design configurations. At the same time, researchers in both the value-driven design and sustainable product development areas have stressed the need to expand design space exploration by encompassing value and sustainability-related considerations. A portfolio of methods and tools for decision support regarding value and sustainability integration has been proposed in the literature, but very few have been integrated into engineering practice. This paper proposes an approach, developed and tested in collaboration with an aerospace subsystem manufacturer, featuring the integration of value-driven design and sustainable product development models in the established practices for design space exploration. The proposed approach uses early simulation results as input for value and sustainability models, automatically computing value and sustainability criteria as an integral part of the design space exploration. Machine learning is applied to deal with the different levels of granularity and maturity of information among early simulations, value models, and sustainability models, as well as for the creation of reliable surrogate models for multidimensional design analysis. The paper describes the logic and rationale of the proposed approach and its application to the case of a turbine rear structure for commercial aircraft engines. Finally, the paper discusses the challenges of implementing the approach and highlights relevant research directions across the value-driven design, sustainable product development, and machine learning research fields.

  • 18.
    Bisen, Pradeep Siddhartha Singh
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Predicting Operator’s Choice During Airline Disruption Using Machine Learning Methods2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This master thesis is a collaboration with Jeppesen, a Boeing company, attempting to apply machine learning techniques to predict "When does the operator manually solve the disruption? If he chooses to use the Optimiser, then which option would he choose? And why?". Over the course of this project, various techniques are employed to study, analyze, and understand historical labeled airline data consisting of alerts during disruptions, and to classify each data point into one of the categories: manual or optimizer option. This is done using various supervised machine learning classification methods.

  • 19.
    Boeva, Veselka
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Angelova, Milena
    Technical University of Sofia, BUL.
    Vishnu Manasa, Devagiri
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Tsiporkova, Elena
    EluciDATA Lab, Sirris, BEL.
    Bipartite Split-Merge Evolutionary Clustering2019In: Lect. Notes Comput. Sci., Springer , 2019, p. 204-223Conference paper (Refereed)
    Abstract [en]

    We propose a split-merge framework for evolutionary clustering. The proposed clustering technique, entitled Split-Merge Evolutionary Clustering, is intended to be more robust to concept drift scenarios by providing the flexibility to consider at each step a portion of the data and derive clusters from it, to be used subsequently to update the existing clustering solution. The proposed framework is built around the idea of modeling two clustering solutions as a bipartite graph, which guides the update of the existing clustering solution: some clusters are merged with ones from the newly constructed clustering, while others are transformed by splitting their elements among several new clusters. We have evaluated and compared the discussed evolutionary clustering technique with two other state-of-the-art algorithms: a bipartite correlation clustering (PivotBiCluster) and an incremental evolving clustering (Dynamic split-and-merge). © Springer Nature Switzerland AG 2019.

  • 20.
    Boeva, Veselka
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Nordahl, Christian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Modeling Evolving User Behavior via Sequential Clustering2019Conference paper (Refereed)
    Abstract [en]

    In this paper we address the problem of modeling the evolution of clusters over time by applying sequential clustering. We propose a sequential partitioning algorithm that can be applied for grouping distinct snapshots of streaming data so that a clustering model is built on each data snapshot. The algorithm is initialized by a clustering solution built on available historical data. Then a new clustering solution is generated on each data snapshot by applying a partitioning algorithm seeded with the centroids of the clustering model obtained at the previous time interval. At each step the algorithm also conducts model-adapting operations in order to reflect the evolution in the clustering structure. In that way, it can deal with both the incremental and the dynamic aspects of modeling evolving behavior problems. In addition, the proposed approach is able to trace back evolution through the detection of clusters' transitions, such as splits and merges. We have illustrated and initially evaluated our ideas on household electricity consumption data. The results have shown that the proposed sequential clustering algorithm is robust in modeling evolving behavior, being able to both mine changes and update the model.
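
The seeding step described above can be sketched on 1-D toy data: cluster each snapshot with k-means initialized from the previous snapshot's centroids (the paper's model-adapting split/merge operations are omitted).

```python
# Sketch of the seeding idea: cluster each data snapshot with k-means
# initialized from the centroids of the previous snapshot's model.
# 1-D toy data; the model-adapting split/merge steps are omitted.

def kmeans(points, centroids, iters=20):
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
            groups[i].append(p)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return centroids

snapshot1 = [1.0, 1.2, 0.8, 9.0, 9.5, 8.8]
snapshot2 = [1.5, 1.7, 1.4, 10.2, 10.5, 9.9]   # behavior drifts upward

model = kmeans(snapshot1, centroids=[0.0, 5.0])   # initialized on history
model = kmeans(snapshot2, centroids=model)        # seeded update
print(model)
```

Seeding with the previous centroids lets each cluster track its counterpart across snapshots, which is what makes transitions such as splits and merges traceable.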

  • 21.
    Boinapally, Kashyap
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Security Certificate Renewal Management2019Independent thesis Advanced level (degree of Master (Two Years)), 80 credits / 120 HE creditsStudent thesis
    Abstract [en]

    Context. An SSL-encrypted client-server communication is necessary to maintain the security and privacy of the communication. For SSL encryption to work, there must be a security certificate, which has a certain expiry period. Periodic renewal of the certificate after its expiry is a waste of time and effort on the part of the company.

    Objectives. In this study, a new system has been developed and implemented, which sends a certificate during prior communication and does not wait for the certificate to expire. Automating the process to a certain extent was done to not compromise the security of the system and to speed up the process and reduce the downtime.

    Methods. Experiments have been conducted to test the new system and compare it to the old system. The experiments were conducted to analyze the packets and the downtime occurring from certificate renewal.

    Results. The results of the experiments show that there is a significant reduction in downtime. This was achieved due to the implementation of the new system and semi-automation.

    Conclusions. The system has been implemented, and it greatly reduces the downtime occurring due to the expiry of the security certificates. Semi-Automation has been done to not hamper the security and make the system robust.
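
A proactive renewal check of the kind the thesis motivates might, hypothetically, look like this: flag a certificate for renewal once its remaining validity drops below a safety margin instead of waiting for expiry.

```python
# Hypothetical sketch of a proactive renewal check: rather than waiting
# for the certificate to expire, flag it for renewal once its remaining
# validity drops below a safety margin.
from datetime import datetime, timedelta, timezone

def needs_renewal(not_after, margin_days=30, now=None):
    """True if the certificate should be renewed now."""
    now = now or datetime.now(timezone.utc)
    return not_after - now <= timedelta(days=margin_days)

expiry = datetime(2026, 1, 1, tzinfo=timezone.utc)
print(needs_renewal(expiry, now=datetime(2025, 12, 15, tzinfo=timezone.utc)))  # True
print(needs_renewal(expiry, now=datetime(2025, 6, 1, tzinfo=timezone.utc)))    # False
```

Running such a check periodically, and distributing the new certificate during ordinary communication, is what removes the expiry-driven downtime.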

  • 22.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Multi-expert estimations of burglars' risk exposure and level of pre-crime preparation using coded crime scene data: Work in progress2018In: Proceedings - 2018 European Intelligence and Security Informatics Conference, EISIC 2018 / [ed] Brynielsson, J, Institute of Electrical and Electronics Engineers Inc. , 2018, p. 77-80Conference paper (Refereed)
    Abstract [en]

    Law enforcement agencies strive to link crimes perpetrated by the same offenders into crime series in order to improve investigation efficiency. Such crime linkage can be done using both physical traces (e.g., DNA or fingerprints) or 'soft evidence' in the form of offenders' modus operandi (MO), i.e. their behaviors during crimes. However, physical traces are only present for a fraction of crimes, unlike behavioral evidence. This work-in-progress paper presents a method for aggregating multiple criminal profilers' ratings of offenders' behavioral characteristics based on feature-rich crime scene descriptions. The method calculates consensus ratings from individual experts' ratings, which are then used as a basis for classification algorithms. The classification algorithms can automatically generalize offenders' behavioral characteristics from cues in the crime scene data. Models trained on the consensus ratings are evaluated against models trained on individual profilers' ratings, to determine whether the consensus model shows improved performance over the individual models. © 2018 IEEE.

  • 23.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ickin, Selim
    Ericsson Research, SWE.
    Gustafsson, Jörgen
    Ericsson Research, SWE.
    Anomaly detection of event sequences using multiple temporal resolutions and Markov chains2020In: Knowledge and Information Systems, ISSN 0219-1377, E-ISSN 0219-3116, Vol. 62, p. 669-686Article in journal (Refereed)
    Abstract [en]

    Streaming data services, such as video-on-demand, are getting increasingly more popular, and they are expected to account for more than 80% of all Internet traffic in 2020. In this context, it is important for streaming service providers to detect deviations in service requests due to issues or changing end-user behaviors in order to ensure that end-users experience high quality in the provided service. Therefore, in this study we investigate to what extent sequence-based Markov models can be used for anomaly detection by means of the end-users’ control sequences in the video streams, i.e., event sequences such as play, pause, resume and stop. This anomaly detection approach is further investigated over three different temporal resolutions in the data, more specifically: 1 h, 1 day and 3 days. The proposed anomaly detection approach supports anomaly detection in ongoing streaming sessions as it recalculates the probability for a specific session to be anomalous for each new streaming control event that is received. Two experiments are used for measuring the potential of the approach, which gives promising results in terms of precision, recall, F1-score and Jaccard index when compared to k-means clustering of the sessions. © 2019, The Author(s).
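
The session-scoring idea can be sketched as follows: estimate first-order transition probabilities from normal sessions, then score a session by its average log-likelihood per transition (a simplified illustration; the study's temporal resolutions and evaluation setup are not reproduced).

```python
# Sketch of first-order Markov anomaly scoring for control-event
# sequences: fit transition probabilities on normal sessions, then
# score a session by its average log-likelihood per transition.
import math
from collections import Counter, defaultdict

def fit_markov(sessions, smoothing=1e-3):
    counts = defaultdict(Counter)
    for s in sessions:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    events = {e for s in sessions for e in s}
    probs = {}
    for a in events:
        total = sum(counts[a].values()) + smoothing * len(events)
        probs[a] = {b: (counts[a][b] + smoothing) / total for b in events}
    return probs

def score(session, probs):
    ll = [math.log(probs[a][b]) for a, b in zip(session, session[1:])]
    return sum(ll) / len(ll)   # closer to 0 = more normal

normal = [["play", "pause", "resume", "stop"]] * 20
model = fit_markov(normal)
print(score(["play", "pause", "resume", "stop"], model))  # high likelihood
print(score(["play", "stop", "play", "stop"], model))     # low: anomalous
```

Because the score is an average over transitions, it can be recalculated after every new control event, which is what enables scoring ongoing sessions.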

  • 24.
    Bond, David
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Nyblom, Madelein
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Evaluation of four different virtual locomotion techniques in an interactive environment2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background: Virtual Reality (VR) devices are becoming more and more common as game systems. Even though modern VR Head Mounted Displays (HMD) allow the user to walk in real life, it still limits the user to the space of the room they are playing in and the player will need virtual locomotion in games where the environment size exceeds that of the real life play area. Evaluations of multiple VR locomotion techniques have already been done, usually evaluating motion sickness or usability. A common theme in many of these is that the task is search based, in an environment with low focus on interaction. Therefore in this thesis, four VR locomotion techniques are evaluated in an environment with focus on interaction, to see if a difference exists and whether one technique is optimal. The VR locomotion techniques are: Arm-Swinging, Point-Tugging, Teleportation, and Trackpad.

    Objectives: A VR environment is created with focus on interaction in this thesis. In this environment the user has to grab and hold onto objects while using a locomotion technique. This study then evaluates which VR locomotion technique is preferred in the environment. This study also evaluates whether there is a difference in preference and motion sickness, in an environment with high focus in interaction compared to one with low focus.

    Methods: A user study was conducted with 15 participants. Every participant performed a task with every VR locomotion technique, which involved interaction. After each technique, the participant answered a simulator sickness questionnaire, and an overall usability questionnaire.

    Results: The results achieved in this thesis indicated that Arm-Swinging was the most enjoyed locomotion technique in the overall usability questionnaire. However, it also showed that Teleportation had the best ratings for tiredness and feeling overwhelmed. Teleportation also did not cause motion sickness, while the rest of the locomotion techniques did.

    Conclusions: In conclusion, a difference can be seen for VR locomotion techniques between an environment with low focus on interaction and an environment with high focus. This difference was seen in both the overall usability questionnaire and the motion sickness questionnaire. It was concluded that Arm-Swinging could be the most fitting VR locomotion technique for an interactive environment; however, Teleportation could be more optimal for longer sessions.

  • 25.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Svensson, Johan
    Telenor Sverige AB, SWE.
    Using conformal prediction for multi-label document classification in e-Mail support systems2019In: ADVANCES AND TRENDS IN ARTIFICIAL INTELLIGENCE: FROM THEORY TO PRACTICE / [ed] Wotawa, F; Friedrich, G; Pill, I; KoitzHristov, R; Ali, M, Springer Verlag , 2019, Vol. 11536, p. 308-322Conference paper (Refereed)
    Abstract [en]

    For any corporation the interaction with its customers is an important business process. This is especially the case for resolving various business-related issues that customers encounter. Classifying the type of such customer service e-mails to provide improved customer service is thus important. The classification of e-mails makes it possible to direct them to the most suitable handler within customer service. We have investigated the following two aspects of customer e-mail classification within a large Swedish corporation. First, whether a multi-label classifier can be introduced that performs similarly to an already existing multi-class classifier. Second, whether conformal prediction can be used to quantify the certainty of the predictions without loss in classification performance. Experiments were used to investigate these aspects using several evaluation metrics. The results show that for most evaluation metrics, there is no significant difference between multi-class and multi-label classifiers, except for Hamming loss where the multi-label approach performed with a lower loss. Further, the use of conformal prediction did not introduce any significant difference in classification performance for neither the multi-class nor the multi-label approach. As such, the results indicate that conformal prediction is a useful addition that quantifies the certainty of predictions without negative effects on the classification performance, which in turn allows detection of statistically significant predictions. © Springer Nature Switzerland AG 2019.
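
The certainty-quantification mechanism can be illustrated with a minimal inductive conformal sketch: a label enters the prediction set when its p-value, computed from calibration nonconformity scores, exceeds the significance level (the scores and labels below are hypothetical, e.g. 1 minus the classifier's probability for the label).

```python
# Sketch of (inductive) conformal prediction for certainty
# quantification: a label is included in the prediction set if its
# p-value, computed from calibration nonconformity scores, exceeds
# the significance level. Scores and labels here are hypothetical.

def p_value(calibration_scores, new_score):
    higher = sum(1 for s in calibration_scores if s >= new_score)
    return (higher + 1) / (len(calibration_scores) + 1)

def prediction_set(calib_by_label, new_scores, significance=0.2):
    return [label for label, score in new_scores.items()
            if p_value(calib_by_label[label], score) > significance]

calib = {"billing": [0.1, 0.2, 0.15, 0.3, 0.25],
         "technical": [0.2, 0.35, 0.3, 0.4, 0.25]}
new = {"billing": 0.18, "technical": 0.9}   # low score = conforming
print(prediction_set(calib, new))           # only "billing" is credible
```

A prediction set with exactly one label is a statistically significant single prediction; larger or empty sets signal uncertainty without changing the underlying classifier.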

  • 26.
    Brodd, Adam
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Eriksson, Andreas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    User perception on procedurally generated cities affected with a heightmapped terrain parameter2019Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context: Procedural content generation (PCG) is a way of letting the computer algorithmically generate data with little input from programmers. It is a useful tool for developers to create game worlds, content, and much more, which can be tedious and time-consuming to do by hand.

    Objectives: This thesis explores the procedural generation of both a city and a height-mapped terrain parameter using Perlin noise, and the terrain parameter's effect on the city. The objective is to find out whether a procedurally generated city with a height-map parameter based on Perlin noise is viable for use in games.

    Methods: An implementation generating both a height-mapped terrain parameter and a city using Perlin noise has been created, along with a user survey to test the generated city and terrain parameter's viability in games.

    Results: This work successfully implemented an application that can generate cities affected by a height-mapped terrain parameter that are viable for use in games.

    Conclusions: This work concludes that it is possible to generate cities affected by a height-mapped terrain parameter by utilizing the Perlin noise algorithm. The generated cities and terrains are both viable and believable for use in games.
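
A simplified noise-based heightmap can be sketched as follows; this uses value noise with bilinear interpolation rather than true Perlin gradient noise, and the "buildable band" heuristic is a hypothetical stand-in for the thesis's city placement logic.

```python
# Simplified illustration of a noise-based heightmap (value noise with
# bilinear interpolation, not true Perlin gradient noise). Grid cells
# whose height falls within a "buildable" band could host city blocks.
import random

def value_noise(width, height, cell=4, seed=42):
    rng = random.Random(seed)
    gw, gh = width // cell + 2, height // cell + 2
    grid = [[rng.random() for _ in range(gw)] for _ in range(gh)]
    def lerp(a, b, t):
        return a + (b - a) * t
    out = []
    for y in range(height):
        gy, ty = divmod(y, cell)
        ty /= cell
        row = []
        for x in range(width):
            gx, tx = divmod(x, cell)
            tx /= cell
            top = lerp(grid[gy][gx], grid[gy][gx + 1], tx)
            bot = lerp(grid[gy + 1][gx], grid[gy + 1][gx + 1], tx)
            row.append(lerp(top, bot, ty))
        out.append(row)
    return out

terrain = value_noise(16, 16)
buildable = sum(1 for row in terrain for h in row if 0.3 < h < 0.7)
print(f"{buildable} of 256 cells in the buildable height band")
```

True Perlin noise interpolates gradients rather than values, giving smoother terrain, but the pipeline — noise field in, placement decisions out — is the same.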

  • 27.
    Carlsson, Anders
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Kuzminykh, Ievgeniia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Gustavsson, Rune
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Virtual Security Labs Supporting Distance Education in ReSeLa Framework2019In: Advances in Intelligent Systems and Computing / [ed] Auer M.E.,Tsiatsos T., Springer Verlag , 2019, Vol. 917, p. 577-587Conference paper (Refereed)
    Abstract [en]

    To meet the high demand for educating the next generation of MSc students in Cyber security, we propose a well-composed curriculum and a configurable cloud-based learning support environment, ReSeLa. The proposed system is a result of the EU TEMPUS project ENGENSEC and has been extensively validated and tested. © 2019, Springer Nature Switzerland AG.

  • 28.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Iannucci, Stefano
    Mississippi State University, USA.
    The state-of-the-art in container technologies: Application, orchestration and security2020In: Concurrency and Computation, ISSN 1532-0626, E-ISSN 1532-0634, article id e5668Article in journal (Refereed)
    Abstract [en]

    Containerization is a lightweight virtualization technology enabling the deployment and execution of distributed applications on cloud, edge/fog, and Internet-of-Things platforms. Container technologies are evolving at the speed of light, and there are many open research challenges. In this paper, an extensive literature review is presented that identifies the challenges related to the adoption of container technologies in High Performance Computing, Big Data analytics, and geo-distributed (Edge, Fog, Internet-of-Things) applications. From our study, it emerges that performance, orchestration, and cyber-security are the main issues. For each challenge, the state-of-the-art solutions are then analyzed. Performance is related to the assessment of the performance footprint of containers and its comparison with the footprint of virtual machines and bare metal deployments, as well as monitoring, performance prediction, and I/O throughput improvement. Orchestration is related to the selection, the deployment, and the dynamic control of the configuration of multi-container packaged applications on distributed platforms. The focus of this work is on run-time adaptation. Cyber-security is about container isolation, confidentiality of containerized data, and network security. From the analysis of 97 papers, it emerged that the state-of-the-art is more mature in the areas of performance evaluation and run-time adaptation than in security solutions. However, the main unsolved challenges are I/O throughput optimization, performance prediction, multilayer monitoring, isolation, and data confidentiality (at rest and in transit). © 2020 John Wiley & Sons, Ltd.

  • 29.
    Cavallin, Fritjof
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Pettersson, Timmie
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Real-time View-dependent Triangulation of Infinite Ray Cast Terrain2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. Ray marching is a technique that can be used to render images of infinite terrains defined by a height field by sampling consecutive points along a ray until the terrain surface is intersected. However, this technique can be expensive, and does not generate a mesh representation, which may be useful in certain use cases.

    Objectives. The aim of the thesis is to implement an algorithm for view-dependent triangulation of infinite terrains in real-time without making use of any preprocessed data, and compare the performance and visual quality of the implementation with that of a ray marched solution.

    Methods. Performance metrics for both implementations are gathered and compared. Rendered images from both methods are compared using an image quality assessment algorithm.

    Results. In all tests performed, the proposed method performs better in terms of frame rate than the ray marched version. The visual similarity between the two methods depends highly on the quality setting of the triangulation.

    Conclusions. The proposed method can perform better than a ray marched version, but is more reliant on CPU processing, and can suffer from visual popping artifacts as the terrain is refined.
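
The ray marching baseline can be sketched in a few lines: step along the ray until the sample point drops below the terrain height (a toy height field is used here; no mesh or triangulation is produced).

```python
# Minimal sketch of ray marching a procedural height field: step along
# the ray until the sample point drops below the terrain surface, then
# report the approximate intersection distance. No mesh is produced.
import math

def terrain_height(x, z):
    return 2.0 * math.sin(0.3 * x) * math.cos(0.3 * z)  # toy height field

def ray_march(origin, direction, step=0.1, max_dist=100.0):
    t = 0.0
    while t < max_dist:
        x = origin[0] + direction[0] * t
        y = origin[1] + direction[1] * t
        z = origin[2] + direction[2] * t
        if y <= terrain_height(x, z):
            return t            # hit: distance along the ray
        t += step
    return None                 # no intersection within range

hit = ray_march(origin=(0.0, 5.0, 0.0), direction=(0.6, -0.4, 0.6))
print(f"terrain hit at t = {hit:.1f}")
```

Because every pixel repeats this stepping loop, the per-frame cost grows with step count, which is the expense the thesis's view-dependent triangulation avoids.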

  • 30.
    Chadalawada, Sai Krishna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Real Time Detection and Recognition of Construction Vehicles: Using Deep Learning Methods2020Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. The driving conditions of construction vehicles and their surrounding environment are different from those of traditional transportation vehicles. As a result, they face unique challenges while operating in construction/evacuation sites. Therefore, research needs to be carried out to address these challenges while implementing autonomous driving, although the learning approach for construction vehicles is the same as for traditional transportation vehicles such as cars.

    Objectives. The following objectives have been identified to fulfil the aim of this thesis work: to identify suitable and highly efficient CNN models for real-time object recognition and tracking of construction vehicles, to evaluate the classification performance of these CNN models, and to compare the results among one another and present them.

    Methods. To answer the research questions, a literature review and an experiment were identified as the appropriate research methodologies. The literature review was performed to identify suitable object detection models for real-time object recognition and tracking. Following this, experiments were conducted to evaluate the performance of the selected object detection models.

    Results. Faster R-CNN model, YOLOv3 and Tiny-YOLOv3 have been identified from the literature review as the most suitable and efficient algorithms for detecting and tracking scaled construction vehicles in real-time. The classification performance of these algorithms has been calculated and compared with each other. The results have been presented.

    Conclusions. The F1 score and accuracy of YOLOv3 has been found to be better amongst the algorithms, followed by Faster R-CNN. Therefore, it has been concluded that YOLOv3 is the best algorithm in the real-time detection and tracking of scaled construction vehicles. The results are similar to the classification performance comparison of these three algorithms provided in the literature.
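
The classification-performance comparison relies on metrics such as F1; below is a minimal sketch of computing F1 for a detector via greedy IoU matching of predicted boxes against ground truth (a simplification of standard detection evaluation; boxes are (x1, y1, x2, y2)).

```python
# Sketch of detector evaluation: match predicted boxes to ground truth
# by IoU (greedy one-to-one matching), then derive precision, recall
# and F1. Boxes are (x1, y1, x2, y2).

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def f1_score(predictions, ground_truth, threshold=0.5):
    unmatched = list(ground_truth)
    tp = 0
    for p in predictions:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= threshold:
            tp += 1
            unmatched.remove(best)
    precision = tp / len(predictions)
    recall = tp / len(ground_truth)
    return 2 * precision * recall / (precision + recall) if tp else 0.0

truth = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 11, 11), (50, 50, 60, 60)]
print(f1_score(preds, truth))  # one true positive out of two each side
```

With one matched box and one miss on each side, precision and recall are both 0.5, so F1 is 0.5; published comparisons of YOLOv3 and Faster R-CNN are computed on the same principle.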

  • 31.
    Chapala, Usha Kiran
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Peteti, Sridhar
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Continuous Video Quality of Experience Modelling using Machine Learning Model Trees1996Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Adaptive video streaming is perpetually influenced by unpredictable network conditions, which cause playback interruptions like stalling, rebuffering, and video bit rate fluctuations. This leads to potential degradation of end-user Quality of Experience (QoE) and may make users churn from the service. Video QoE modelling that precisely predicts the end-user QoE under these unstable conditions is therefore of immediate interest. Service providers also require a root cause analysis for these degradations. Such sudden changes in trend are not visible from monitoring the data of the underlying network service, which makes it challenging to detect the change and model the instantaneous QoE. For this modelling, continuous-time QoE ratings are taken into consideration rather than the overall end QoE rating per video. To reduce the risk of users churning, network providers should give the best quality to the users.

    In this thesis, we propose QoE modelling to analyze how user reactions change over time using machine learning models. The machine learning models are used to predict the QoE ratings and the patterns of change in the ratings. We test the model on a publicly available video quality dataset which contains subjective user QoE ratings for network distortions. The M5P model tree algorithm is used for the prediction of user ratings over time. The M5P model yields mathematical equations, and these equations lead to further insights. Results show that the model tree is a good approach for predicting continuous QoE and for detecting change points in the ratings. It is also shown to what extent these algorithms can be used to estimate changes. The analysis of the model provides valuable insights by analyzing exponential transitions between different levels of predicted ratings. The outcome of the analysis explains the user behavior: when quality decreases, user ratings decrease faster over time than they increase when quality improves. The earlier work on exponential transitions of instantaneous QoE over time is supported by the model tree with respect to user reactions to sudden changes such as video freezes.

  • 32.
    Chen, Hangdong
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Using Blockchain for improving communication efficiency and cooperation: the case of port logistics2020Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 33.
    Chen, Xiaoran
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Image enhancement effect on the performance of convolutional neural networks2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Image enhancement algorithms can be used to enhance the visual effect of images for human vision. Can they also be used in the field of computer vision? The convolutional neural network, as the most powerful image classifier at present, has excellent performance in the field of image recognition. This paper explores whether image enhancement algorithms can be used to improve the performance of convolutional neural networks.

    Objectives. The purpose of this paper is to explore the effect of image enhancement algorithms on the performance of CNN models in deep learning and transfer learning, respectively. Five image enhancement algorithms were selected: contrast limited adaptive histogram equalization (CLAHE), the successive mean quantization transform (SMQT), adaptive gamma correction, the wavelet transform, and the Laplace operator.

    Methods. In this paper, experiments are used as the research method. Three groups of experiments are designed; they explore whether enhancing grayscale images can improve the performance of CNNs in deep learning, whether enhancing color images can improve the performance of CNNs in deep learning, and whether enhancing RGB images can improve the performance of CNNs in transfer learning.

    Results. In deep learning, when training a complete CNN model, using the Laplace operator to enhance grayscale images can improve the recall of the CNN. However, the remaining image enhancement algorithms improve the performance of the CNN on neither the grayscale nor the color image datasets. In addition, in transfer learning, when fine-tuning the pre-trained CNN model, using contrast limited adaptive histogram equalization (CLAHE), the successive mean quantization transform (SMQT), the wavelet transform, or the Laplace operator reduces the performance of the CNN.

    Conclusions. The experiments show that in deep learning, image enhancement algorithms may improve CNN performance when training complete CNN models, but not all of them do; in transfer learning, when fine-tuning a pre-trained CNN model, image enhancement algorithms may reduce CNN performance.
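
    Of the algorithms studied above, CLAHE is usually taken from a library (e.g. OpenCV's `cv2.createCLAHE`). As a dependency-free sketch of the underlying idea, the following implements plain global histogram equalization in NumPy; CLAHE adds clip-limited, tile-wise equalization on top of this. The toy image is an assumption for illustration.

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()  # CDF value of the darkest present level
    # Remap gray levels so the output CDF is approximately uniform.
    lut = np.clip(
        np.round((cdf - cdf_min) / (img.size - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]

# A low-contrast image with levels in [100, 120] spreads over [0, 255].
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(32, 32), dtype=np.uint8)
out = equalize_histogram(img)
```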

  • 34.
    Chitturi, Gayatri
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Building Detection in Deformed Satellite Images Using Mask R-CNN2020Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background:

    Recent research on automatic building detection uses aerial and satellite images. Automatic building detection from satellite images is useful for urban planning and, after natural disasters, for identifying voids. Detecting buildings in satellite images by human effort is time-consuming and inefficient, so a deep-learning-based Mask R-CNN (Mask Regional-Convolutional Neural Network) is used to detect and segment buildings in satellite images. To evaluate the performance of the model, different augmentations are applied to the test dataset. A TTA (Test Time Augmentation) wrapper is used to evaluate the trained model on the test dataset and state how accurately buildings are detected under each augmentation.

    Objectives:

    The main goal of this research is to formulate a model that detects and segments every building using the data provided by Sony, and that is also scalable to identify buildings across different countries in satellite images. The model should also deliver results with an Average Precision in the range of 0.5 to 1 even after each augmentation is applied to the test dataset.

    Methods:

    To obtain results with an Average Precision within the desired range, a systematic literature review was conducted to choose a suitable algorithm for detecting and segmenting buildings. The review found Mask R-CNN to be an effective algorithm for detection as well as segmentation. Generally, augmentation is applied during training to increase the dataset size and the model's performance. Here, an experiment was conducted in which a building-detection model was trained with Mask R-CNN without augmentation, since the provided dataset is already large. The aim of the research is to evaluate the performance of the trained model, not to improve it, so TTA, an application of augmentation, is applied to the test dataset to evaluate the trained model.

    Results:

    After the literature review, the Mask R-CNN algorithm was used to formulate a model for building detection and segmentation. TTA is applied to the test dataset to calculate an Average Precision for each augmentation and a mAP (mean Average Precision) over all augmented images. The Average Precision values for the different augmentations are found to be in the range of 0.5 to 1, except for the noise augmentation, which falls below the desired range.

    Conclusions:

    The Mask R-CNN model performed well for prediction. The Average Precision value for each augmentation is calculated with TTA. The best augmentations for detecting and segmenting buildings are horizontal flip, vertical flip, brightness, and contrast, all of which perform well. The noise augmentation performs poorly, so it can be excluded from the best combination of augmentations.
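
    The Average Precision figures above rest on matching predicted and ground-truth buildings by overlap. As a minimal sketch (axis-aligned boxes instead of Mask R-CNN's masks, with an assumed 0.5 threshold), the core IoU computation looks like this:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2).

    AP is built on this matching score: a detection counts as a true
    positive when its IoU with a ground-truth building exceeds a
    threshold, commonly 0.5.
    """
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    inter_w = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    inter_h = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = inter_w * inter_h
    union = (xa2 - xa1) * (ya2 - ya1) + (xb2 - xb1) * (yb2 - yb1) - inter
    return inter / union if union > 0 else 0.0

# Two partially overlapping building detections: IoU = 50 / 150.
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```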

  • 35.
    Dan, Sjödahl
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cascaded Voxel Cone-Tracing Shadows: A Computational Performance Study2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Real-time shadows in 3D applications have for decades been implemented with a solution called Shadow Mapping or some variant of it. This solution is easy to implement and has good computational performance; nevertheless, it suffers from some problems and limitations. Newer alternatives exist, and one of them is based on a technique called Voxel Cone-Tracing, which can be combined with a technique called Cascading to create Cascaded Voxel Cone-Tracing Shadows (CVCTS).

    Objectives. To measure the computational performance of CVCTS to gain better insight into it, to provide data and findings that help developers make an informed decision on whether this technique is worth exploring, and to identify where the performance problems with the solution lie.

    Methods. A simple version of CVCTS was implemented in OpenGL, aimed at simulating a solution that could be used for outdoor scenes in 3D applications. It had several parameters that could be changed, and computational performance measurements were made with these parameters at different settings.

    Results. The data was collected and analyzed before drawing conclusions. The results showed several parts of the implementation that could potentially be very slow and why this was the case.

    Conclusions. The slowest parts of the CVCTS implementation were the voxelization and cone-tracing steps. It might be possible to use the CVCTS solution in, for example, a game if the settings are not too high, but that is a stretch. Little time could be spent during the thesis on optimizing the solution, so it is likely that its performance could be increased.

  • 36.
    Dasari, Siva Krishna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge Institute of Technology.
    Tree Models for Design Space Exploration in Aerospace Engineering2019Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    A crucial issue in the design of aircraft components is the evaluation of a large number of potential design alternatives. This evaluation involves expensive procedures and consequently slows down the search for optimal design samples. As a result, a scarce or small number of design samples, combined with a high-dimensional parameter space and high non-linearity, poses issues for learning surrogate models. Furthermore, surrogate models have more issues in handling qualitative (discrete) data than quantitative (continuous) data. These issues motivate investigations into surrogate modelling methods that make the most effective use of available data.

     The goal of the thesis is to support engineers in the early design phase of developing new aircraft engines, specifically a component of the engine known as the Turbine Rear Structure (TRS). For this, tree-based approaches are explored for surrogate modelling, with the purpose of exploring larger search spaces and speeding up the evaluation of design alternatives. First, we investigated the performance of tree models on TRS design concepts. Second, we presented an approach to explore the design space using a tree model, Random Forests. This approach includes hyperparameter tuning and the extraction of parameter importances and if-then rules from surrogate models for a better understanding of the design problem. With this approach, we showed that hyperparameter tuning improves the performance of tree models on TRS design concept data. Third, we performed sensitivity analysis with tree models to study thermal variations on the TRS and thereby support robust design. Furthermore, the performance of tree models was evaluated on linear and non-linear mathematical functions, and the results showed that tree models fit non-linear functions well. Last, we showed how tree models support the integration of value and sustainability parameter data (quantitative and qualitative) together with TRS design concept data in order to assess the impact of these parameters on the product life cycle in the early design phase.
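
    The workflow above (train a Random Forest surrogate on sampled design points, tune hyperparameters, read parameter importances) can be sketched with scikit-learn. The dataset below is synthetic, since the TRS design data is not public, and the hyperparameter grid is likewise an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for expensive design-point evaluations: the response
# depends strongly on x0, weakly on x1, and not at all on x2.
rng = np.random.default_rng(42)
X = rng.uniform(size=(120, 3))
y = 5.0 * X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.05 * rng.normal(size=120)

# Hyperparameter tuning before trusting the surrogate, as described above.
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    {"n_estimators": [50, 100], "max_depth": [None, 5]},
    cv=3,
)
search.fit(X, y)
surrogate = search.best_estimator_

# Parameter importance: which design variables drive the response.
importance = surrogate.feature_importances_
```

Once trained, `surrogate.predict` replaces the expensive simulation when exploring new design candidates.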


  • 37.
    Dasari, Siva Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge Institute of Technology.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge Institute of Technology.
    Andersson, Petter
    GKN Aerospace Engine Systems, SWE.
    Predictive Modelling to Support Sensitivity Analysis for Robust Design in Aerospace Engineering2020In: Structural and multidisciplinary optimization (Print), ISSN 1615-147X, E-ISSN 1615-1488Article in journal (Refereed)
    Abstract [en]

    The design of aircraft engines involves computationally expensive engineering simulations. One way to address this is to use response surface models that approximate the high-fidelity, time-consuming simulations while reducing computational time. For a robust design, sensitivity analysis based on these models allows for the efficient study of uncertain variables' effects on system performance. The aim of this study is to support sensitivity analysis for robust design in aerospace engineering. For this, an approach is presented in which random forests (RF) and multivariate adaptive regression splines (MARS) are explored to handle linear and non-linear response types for response surface modelling. Quantitative experiments are conducted to evaluate the predictive performance of these methods on case study datasets of the Turbine Rear Structure (an aircraft engine component). Furthermore, to test the models' applicability for sensitivity analysis, experiments are conducted using mathematical test problems (linear and non-linear functions) and their results are presented. From the experimental investigations, it appears that RF fits non-linear functions better than MARS, whereas MARS fits linear functions well.

  • 38.
    Dasari, Siva Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Andersson, Petter
    GKN Aerospace Engine Systems, SWE.
    Random Forest Surrogate Models to Support Design Space Exploration in Aerospace Use-case2019In: IFIP Advances in Information and Communication Technology, Springer-Verlag New York, 2019, Vol. 559Conference paper (Refereed)
    Abstract [en]

    In engineering, design analyses of complex products rely on computer simulated experiments. However, high-fidelity simulations can take significant time to compute. It is impractical to explore the design space by conducting simulations alone because of time constraints. Hence, surrogate modelling is used to approximate the original simulations. Since simulations are expensive to conduct, the sample size is generally limited in aerospace engineering applications. This limited sample size, together with the non-linearity and high dimensionality of the data, makes it difficult to generate accurate and robust surrogate models. The aim of this paper is to explore the applicability of Random Forests (RF) to construct surrogate models to support design space exploration. RF generates meta-models, or ensembles of decision trees, and is capable of fitting highly non-linear data given quite small samples. To investigate the applicability of RF, this paper presents an approach to construct surrogate models using RF. This approach includes hyperparameter tuning to improve the RF model's performance, and the extraction of design parameter importances and if-then rules from the RF models for a better understanding of the design space. To demonstrate the approach, quantitative experiments are conducted with datasets of a Turbine Rear Structure use-case from the aerospace industry, and results are presented.

  • 39.
    Elwardy, Majed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Objective Perceptual Video Quality Prediction Using Spatial and Temporal Information Differences2019In: Proceedings - 2019 19th International Symposium on Communications and Information Technologies, ISCIT 2019, Institute of Electrical and Electronics Engineers Inc. , 2019, p. 436-441Conference paper (Refereed)
    Abstract [en]

    In this paper, objective perceptual video quality models are proposed that use spatial and temporal perceptual information differences for predicting video quality as perceived by human observers. Spatial perceptual information characterizes the complexity and temporal perceptual information quantifies the motion contained in a video. As such, differences in the spatial and temporal perceptual information of a reference video (original) and test video (processed) may be used to predict quality of videos that have undergone encoding, transmission, or other processing. In particular, several video quality prediction functions are derived using curve fitting along with training and validation on data from a publicly available annotated database. The obtained functions provide predicted mean opinion scores as a measure of perceptual quality subject to spatial and temporal perceptual information differences. The analysis of the video quality prediction performance of the proposed models shows that differences in spatial and temporal perceptual information can be used for objective video quality prediction. © 2019 IEEE.
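
    The spatial and temporal perceptual information features described above follow the ITU-T P.910-style definitions (SI from the standard deviation of Sobel-filtered frames, TI from the standard deviation of frame differences, each maximized over time). A hedged NumPy sketch, with toy frames invented for illustration:

```python
import numpy as np

def sobel_magnitude(frame):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = frame.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):          # explicit 3x3 correlation, no SciPy needed
        for j in range(3):
            patch = frame[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def si_ti(frames):
    """SI = max over frames of std(Sobel(frame));
    TI = max over frame pairs of std(frame_n - frame_{n-1})."""
    si = max(sobel_magnitude(f).std() for f in frames)
    ti = max((b - a).std() for a, b in zip(frames, frames[1:]))
    return si, ti

# Toy clip: a static vertical edge, then a one-column shift (motion).
base = np.zeros((16, 16))
base[:, 8:] = 100.0
frames = [base, base, np.roll(base, 1, axis=1)]
si, ti = si_ti(frames)
```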

  • 40.
    Fiati-Kumasenu, Albert
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Extracting Customer Sentiments from Email Support Tickets: A case for email support ticket prioritisation2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background

    Daily, companies generate enormous numbers of customer support tickets, which are grouped into specialised queues based on certain characteristics, from which they are resolved by customer support personnel (CSP) on a first-in-first-out basis. Given that these tickets have different levels of urgency, a logical next step in improving the effectiveness of the CSPs is to prioritise the tickets based on business policies. Sentiment polarity is among the several heuristics that can be used in prioritising tickets.

    Objectives

    This study investigates how machine learning methods and natural language techniques can be leveraged to automatically predict the sentiment polarity of customer support tickets.

    Methods

    Using a formal experiment, the study examines how well Support Vector Machine (SVM), Naive Bayes (NB), and Logistic Regression (LR) based sentiment polarity prediction models, built for product and movie reviews, can be used to make sentiment predictions on email support tickets. Due to the limited amount of annotated email support tickets, the Valence Aware Dictionary and sEntiment Reasoner (VADER) and a cluster ensemble (using k-means, affinity propagation, and spectral clustering) are also investigated for making sentiment polarity predictions.

    Results

    Compared to NB and LR, SVM performs best, with an average f1-score of .71, whereas NB scores lowest with an f1-score of .62. SVM combined with the presence vector outperformed the frequency and TF-IDF vectors with an f1-score of .73, while NB records an f1-score of .63. With an average f1-score of .23, the models transferred from the movie and product reviews performed inadequately, even compared with a dummy classifier averaging an f1-score of .55. Finally, the cluster ensemble method outperformed VADER, with f1-scores of .61 and .53 respectively.

    Conclusions

    Given the results, SVM combined with a presence vector of bigrams and trigrams is a candidate solution for extracting sentiments from email support tickets. Transferring sentiment models from the movie and product review domain to email support tickets is, however, not feasible. Finally, given the limited datasets for sentiment analysis studies in the Swedish and customer support context, a cluster ensemble is recommended as a sample selection method for generating annotated data.
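
    The recommended configuration, an SVM over a presence vector of bigrams and trigrams, can be sketched with scikit-learn: `binary=True` gives the presence (rather than frequency) vector and `ngram_range=(2, 3)` the bigrams and trigrams. The example tickets below are hypothetical, not the study's data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy tickets standing in for the proprietary dataset.
tickets = [
    "thank you for the quick and helpful response",
    "great support, the issue was solved immediately",
    "this is still broken and nobody answers my emails",
    "terrible service, I am very disappointed",
]
labels = ["pos", "pos", "neg", "neg"]

model = make_pipeline(
    CountVectorizer(binary=True, ngram_range=(2, 3)),  # presence of 2/3-grams
    LinearSVC(),                                       # linear-kernel SVM
)
model.fit(tickets, labels)
```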

  • 41.
    Fiedler, Markus
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Kelkkanen, Viktor
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Network-induced temporal disturbances in virtual reality applications2019In: 2019 11th International Conference on Quality of Multimedia Experience, QoMEX 2019, Institute of Electrical and Electronics Engineers Inc. , 2019Conference paper (Refereed)
    Abstract [en]

    Virtual Reality (VR) applications put high demands on software and hardware in order to enable an immersive experience for the user and avoid causing simulator sickness. As soon as networks become part of the Motion-To-Photon (MTP) path between rendering and display, there is a risk for extraordinary delays that may impair Quality of Experience (QoE). This short paper provides an overview of latency measurements and models that are applicable to the MTP path, complemented by demands on user and network levels. It specifically reports on freeze duration measurements using a commercial TPCAST wireless VR solution, and identifies a corresponding stochastic model of the freeze length distribution, which may serve as disturbance model for VR QoE studies. © 2019 IEEE.

  • 42.
    Floderus, Sebastian
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Rosenholm, Linus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    An educational experiment in discovering spear phishing attacks2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background: Spear phishing attacks use social engineering targeted at a specific person to steal credential information or infect the user's computer with malware. They are often carried out through email, and it can be very hard to spot the difference between a legitimate email and a scam email. Cybercrime is a growing problem, and there are many ways to inform and educate individuals on the subject.

    Objectives: This study performs an experiment to see whether an educational support tool can be used to better identify phishing emails, and whether there is a difference in susceptibility between students from different university programs.

    Methods: A qualitative research study was used to gain the understanding necessary to develop a phishing educational tool. A pretest-posttest experiment was conducted to see whether there is an improvement in results between an experimental group that received the education and a control group that did not.

    Results: The results show an overall higher score for the technical program compared to the non-technical one. Comparing the pretest with the posttest shows an increase in score for the non-technical program and a decrease for the technical program. Furthermore, 58% of the non-technical students who started the test did not complete it.

    Conclusions: There is a noticeable difference between the programs in students' susceptibility to scam emails. However, further research is needed to explore to what extent the education process had an impact.

  • 43.
    Folino, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Självrättande programmeringstenta2019Report (Other academic)
    Abstract [sv]

    How can we best examine fundamental programming skills in an introductory programming course? We created a self-grading examination format in which students can receive feedback during the exam, and increased throughput by 20%.

  • 44.
    Fransson, Jonatan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Hiiriskoski, Teemu
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Measuring Immersion and Enjoyment in a 2D Top-Down Game by Replacing the Mouse Input with Eye Tracking2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Eye tracking has been evaluated and tried in different 2D settings for research purposes. Most commercial games that use eye tracking treat it as an assistive extra input method and focus on third- or first-person perspectives; few 2D games have been developed with eye tracking as an input method. This thesis aims to test eye tracking as a replacement for the mouse, with a chosen set of mechanics, as the main input method for playing a 2D top-down game.

    Objectives. To test eye tracking in a 2D top-down game and use it as a replacement input method for the mouse in a novel effort to evaluate immersion and enjoyment.

    Method. The Tobii 4C eye tracker is used as the replacement peripheral in a 2D game prototype developed for the study with the Unity game engine. Participants played through the prototype twice with different input modes: once with keyboard and mouse, and once with keyboard and eye tracker, in alternating order so as not to sway the results. Three mechanics were implemented in the prototype: aiming, searching for hidden items, and removing shadows. To measure immersion and enjoyment, a controlled experiment was carried out in which participants played through the prototype and then evaluated their experience through a questionnaire with 12 questions on perceived immersion and a short interview with 5 questions on their experience and perceived enjoyment. The study had a total of 12 participants.

    Results. The data collected in the experiment indicate that the participants enjoyed the game more and felt more involved, with 10 out of 12 participants answering that they felt more involved using eye tracking than using the mouse. In the interviews, participants stated that eye tracking made the game more difficult and less natural to control compared to the mouse. A potential problem that might sway the results toward eye tracking is novelty: most participants stated that eye tracking was a new experience, and none of them had used it to play video games before.

    Conclusions. The questionnaire results support the hypothesis, with a p-value of 0.02 (< 0.05) for both increased involvement and enjoyment using eye tracking, although the result might be biased by the participants' inexperience with eye tracking in video games. Most participants reacted positively to eye tracking, most commonly because it was a new experience for them.

  • 45.
    Gaddam, Yeshwanth Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Sales Forecasting of Truck Components using Neural Networks2020Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background: Sales Forecasting plays a substantial role in identifying the sales trends of products for the future era in any organization. These forecasts are also important for determining the profitable retail operations to meet customer demand, maintain storage levels and to identify probable losses.

    Objectives: This study investigates appropriate machine learning algorithms for forecasting the sales of truck components, conducts experiments to forecast sales with the selected algorithms, and evaluates the models' performance using metrics obtained from the literature review.

    Methods: Initially, a literature review is performed to identify machine learning methods suitable for forecasting the sales of truck components and then based on the results obtained, several experiments were conducted to evaluate the performances of the chosen models.

    Results: Based on the literature review, the Multilayer Perceptron (MLP), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) were selected for forecasting the sales of truck components, and the experiments showed that the LSTM outperformed the MLP and RNN in predicting sales.

    Conclusions: From this research, it can be stated that the LSTM can model complex nonlinear functions better than the MLP and RNN on the chosen dataset. Hence, the LSTM is chosen as the ideal model for predicting sales of truck components.
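
    All three models compared above presuppose framing the sales series as supervised windows, where the past observations predict the next value. A minimal NumPy sketch of that framing step, with an assumed window length and invented sales figures:

```python
import numpy as np

def make_windows(series, window=6):
    """Frame a univariate sales series as supervised (X, y) pairs:
    the previous `window` observations predict the next one. The window
    length here is illustrative, not the thesis's setting."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

# Invented monthly sales figures for one truck component.
sales = np.array([112, 118, 132, 129, 121, 135, 148, 148, 136, 119], float)
X, y = make_windows(sales, window=6)  # X: (4, 6), y: (4,)
```

The resulting `X`/`y` pairs can then be fed to an MLP directly, or reshaped to `(samples, timesteps, features)` for an RNN or LSTM.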

  • 46.
    García Martín, Eva
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Energy Efficiency in Machine Learning: Approaches to Sustainable Data Stream Mining2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Energy efficiency in machine learning explores how to build machine learning algorithms and models with low computational and power requirements. Although energy consumption is starting to gain interest in the field of machine learning, still the majority of solutions focus on obtaining the highest predictive accuracy, without a clear focus on sustainability.

    This thesis explores green machine learning, which builds on green computing and computer architecture to design sustainable and energy efficient machine learning algorithms. In particular, we investigate how to design machine learning algorithms that automatically learn from streaming data in an energy efficient manner.

    We first illustrate how energy can be measured in the context of machine learning, in the form of a literature review and a procedure to create theoretical energy models. We use this knowledge to analyze the energy footprint of Hoeffding trees, presenting an energy model that maps the number of computations and memory accesses to the main functionalities of the algorithm. We also analyze the hardware events correlated to the execution of the algorithm, their functions and their hyper parameters.

    The final contribution of the thesis is showcased by two novel extensions of Hoeffding tree algorithms, the Hoeffding tree with nmin adaptation and the Green Accelerated Hoeffding Tree. These solutions are able to reduce their energy consumption by twenty and thirty percent, with minimal effect on accuracy. This is achieved by setting an individual splitting criterion for each branch of the decision tree, spending more energy on the fast-growing branches and saving energy on the rest.

    This thesis shows the importance of evaluating energy consumption when designing machine learning algorithms, proving that we can design more energy efficient algorithms and still achieve competitive accuracy results.
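
    The kind of theoretical energy model described above, mapping computations and memory accesses to energy, can be sketched generically. The per-operation costs below are illustrative placeholders, not measured values from the thesis:

```python
# Assumed per-operation energy costs (nanojoules) -- placeholders chosen
# for illustration only; real values depend on the target hardware.
E_COMPUTE_NJ = 0.1   # energy per arithmetic operation
E_ACCESS_NJ = 1.0    # energy per memory access

def energy_nj(n_computations: int, n_mem_accesses: int) -> float:
    """Total energy as a weighted sum of computations and memory accesses,
    the basic shape of the energy models discussed above."""
    return n_computations * E_COMPUTE_NJ + n_mem_accesses * E_ACCESS_NJ

# E.g. a hypothetical split check doing 5000 operations over 1000 counters:
cost = energy_nj(5000, 1000)  # -> 1500.0 nJ
```

Such a model makes explicit why reducing redundant split checks (as nmin adaptation does) translates directly into energy savings.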

  • 47.
    García Martín, Eva
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Rodrigues, Crefeda Faviola
    University of Manchester, GBR.
    Riley, Graham
    University of Manchester, GBR.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Estimation of energy consumption in machine learning2019In: Journal of Parallel and Distributed Computing, ISSN 0743-7315, E-ISSN 1096-0848, Vol. 134, p. 75-88Article in journal (Refereed)
    Abstract [en]

    Energy consumption has been widely studied in the computer architecture field for decades. While the adoption of energy as a metric in machine learning is emerging, the majority of research is still primarily focused on obtaining high levels of accuracy without any computational constraint. We believe that one reason for this lack of interest is researchers' unfamiliarity with approaches to evaluate energy consumption. To address this challenge, we present a review of the different approaches to estimate energy consumption in general and in machine learning applications in particular. Our goal is to provide useful guidelines that give the machine learning community the fundamental knowledge to use and build specific energy estimation methods for machine learning algorithms. We also present the latest software tools that provide energy estimation values, together with two use cases that enhance the study of energy consumption in machine learning.

  • 48.
    García-Martín, Eva
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Bifet, Albert
    Télécom ParisTech.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Energy Modeling of Hoeffding Tree EnsemblesIn: Intelligent Data Analysis, ISSN 1088-467X, E-ISSN 1571-4128Article in journal (Refereed)
    Abstract [en]

    Energy consumption reduction has been an increasing trend in machine learning over the past few years due to its socio-ecological importance. In new challenging areas such as edge computing, energy consumption and predictive accuracy are key variables during algorithm design and implementation. State-of-the-art ensemble stream mining algorithms are able to create highly accurate predictions at a substantial energy cost. This paper introduces the nmin adaptation method to ensembles of Hoeffding tree algorithms, to further reduce their energy consumption without sacrificing accuracy. We also present extensive theoretical energy models of such algorithms, detailing their energy patterns and how nmin adaptation affects their energy consumption. We have evaluated the energy efficiency and accuracy of the nmin adaptation method on five different ensembles of Hoeffding trees under 11 publicly available datasets. The results show that we are able to reduce the energy consumption significantly, by 21 % on average, affecting accuracy by less than one percent on average.

  • 49.
    García-Martín, Eva
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Bifet, Albert
    Télécom Paris.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Green Accelerated Hoeffding Tree. Manuscript (preprint) (Other academic)
    Abstract [en]

    For years, the main concern in machine learning has been to create highly accurate models, without regard for the high computational requirements involved. Stream mining algorithms are able to produce highly accurate models in real time without strong computational demands; this is the case for the Hoeffding tree algorithm. Recent extensions to this algorithm, such as the Extremely Fast Decision Tree (EFDT), focus on increasing predictive accuracy, but at the cost of higher energy consumption. This paper presents the Green Accelerated Hoeffding Tree (GAHT) algorithm, which matches the accuracy levels of the latest EFDT while reducing its energy consumption by 27 percent.

  • 50.
    García-Martín, Eva
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Energy-Aware Very Fast Decision Tree. In: Journal of Data Science and Analytics, ISSN 2364-415X. Article in journal (Refereed)
    Abstract [en]

    Recently, machine learning researchers have been designing algorithms that can run on embedded and mobile devices, which introduces additional constraints compared to traditional algorithm design approaches. One of these constraints is energy consumption, which directly translates into battery life on such devices. Streaming algorithms, such as the Very Fast Decision Tree (VFDT), are suited to such devices due to their high processing speed and low memory requirements. However, they were not designed with a focus on energy efficiency. This paper addresses that challenge by presenting the nmin adaptation method, which reduces the energy consumption of the VFDT algorithm with only minor effects on accuracy. nmin adaptation allows the tree to grow faster in branches where there is high confidence to create a split, and delays the split in less confident branches. This removes unnecessary split-check computations while maintaining similar levels of accuracy. We conducted extensive experiments on 29 public datasets, showing that the VFDT with nmin adaptation consumes up to 31% less energy than the original VFDT, and up to 96% less energy than the CVFDT (the VFDT adapted for concept-drift scenarios), while trading off up to 1.7 percent of accuracy.
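    The per-leaf split-scheduling idea described in this abstract can be sketched roughly as follows. The adaptation rule, function names, and parameter values below are illustrative assumptions, not the paper's implementation; only the use of the Hoeffding bound to compare the two best candidate splits follows the standard VFDT formulation.

    ```python
    # Sketch of nmin adaptation for one leaf of a Hoeffding tree: instead of
    # re-evaluating candidate splits every fixed nmin examples, pick the next
    # check interval from how close the leaf already is to a confident split.

    import math

    def hoeffding_bound(r, delta, n):
        # With probability 1 - delta, the observed mean of a range-r variable
        # over n samples is within this epsilon of the true mean.
        return math.sqrt(r * r * math.log(1.0 / delta) / (2.0 * n))

    def adapt_nmin(g_best, g_second, n, r=1.0, delta=1e-7, nmin_floor=200):
        """Return the next split-check interval for a leaf that has seen n
        examples, given the heuristic values (e.g. information gain) of its
        two best candidate splits.

        Confident leaf (gap exceeds the bound): check again soon, so the
        branch grows fast.  Unconfident leaf: estimate how many more examples
        are needed before the bound could drop below the gap, and skip the
        intermediate (wasted) split checks.
        """
        gap = g_best - g_second
        eps = hoeffding_bound(r, delta, n)
        if gap > eps:
            return nmin_floor          # confident: split will happen soon
        if gap <= 0:
            return 4 * nmin_floor      # tie between candidates: back off hard
        # Solve eps(n') <= gap for n' to estimate the examples still needed.
        n_needed = math.ceil(r * r * math.log(1.0 / delta) / (2.0 * gap * gap))
        return max(nmin_floor, n_needed - n)
    ```

    Fewer split checks means fewer passes over the leaf's attribute statistics, which is where the energy savings reported above would come from; the exact back-off constants are hypothetical.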
