1 - 50 of 92
  • 1.
    Ahlgren, Filip
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Local And Network Ransomware Detection Comparison (2019). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background. Ransomware is a malicious application that encrypts important files on a victim's computer and demands that a ransom be paid through cryptocurrency. After the system is encrypted, there is virtually no way to decrypt the files other than using the encryption key bought from the attacker.

    Objectives. In this practical experiment, we will examine how machine learning can be used to detect ransomware on a local and network level. The results will be compared to see which one has a better performance.

    Methods. Data is collected through malware and goodware databases and then analyzed in a virtual environment to extract system information and network logs. Different machine learning classifiers will be built from the extracted features in order to detect the ransomware. The classifiers will go through a performance evaluation and be compared with each other to find which one has the best performance.

    Results. According to the tests, local detection was both more accurate and stable than network detection. The local classifiers had an average accuracy of 96% while the best network classifier had an average accuracy of 89.6%.

    Conclusions. In this case the results show that local detection has better performance than network detection. However, this can be because the network features were not specific enough for a network classifier. The network performance could have been better if the ransomware samples consisted of fewer families so better features could have been selected.

  • 2.
    Ahmadi Mehri, Vida
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards Secure Collaborative AI Service Chains (2019). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    At present, Artificial Intelligence (AI) systems have been adopted in many different domains, such as healthcare, robotics, automotive, telecommunication systems, security, and finance, to integrate intelligence into services and applications. Intelligent personal assistants such as Siri and Alexa are examples of AI systems making an impact on our daily lives. Since many AI systems are data-driven, they require large volumes of data for training and validation, advanced algorithms, and computing power and storage in their development process. Collaboration in the AI development process (the AI engineering process) reduces the cost and time-to-market of AI applications. However, collaboration introduces concerns about privacy and the piracy of intellectual property, which can be caused by the actors who collaborate in the engineering process. This work investigates the non-functional requirements, such as privacy and security, for enabling collaboration in AI service chains. It proposes an architectural design approach for collaborative AI engineering and explores the concept of the pipeline (service chain) for chaining AI functions. In order to enable controlled collaboration between AI artefacts in a pipeline, this work makes use of virtualisation technology to define and implement Virtual Premises (VPs), which act as protection wrappers for AI pipelines. A VP is a virtual policy enforcement point for a pipeline and requires access permission and authenticity for each element in a pipeline before the pipeline can be used. Furthermore, the proposed architecture is evaluated using a use-case approach that enables quick detection of design flaws during the initial stage of implementation. To evaluate the security level and compliance with security requirements, threat modelling was used to identify potential threats and vulnerabilities of the system and to analyse their possible effects. The output of the threat modelling was used to define countermeasures to threats related to unauthorised access and execution of AI artefacts.

  • 3.
    Andres, Bustamante
    et al.
    Tecnológico de Monterrey, MEX.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Rodriguez-Garcia, Alejandro
    Tecnológico de Monterrey, MEX.
    Digital Image Processing and Development of Machine Learning Models for the Discrimination of Corneal Pathology: An Experimental Model (2019). Conference paper (Refereed).
  • 4.
    Angelova, Milena
    et al.
    Technical University of Sofia, BUL.
    Vishnu Manasa, Devagiri
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Linde, Peter
    Blekinge Institute of Technology, The Library.
    Lavesson, Niklas
    An Expertise Recommender System based on Data from an Institutional Repository (DiVA) (2019). In: Connecting the Knowledge Common from Projects to Sustainable Infrastructure: The 22nd International Conference on Electronic Publishing - Revised Selected Papers / [ed] Leslie Chan, Pierre Mounier, OpenEdition Press, 2019, p. 135-149. Chapter in book (Refereed).
    Abstract [en]

    Finding experts in academia is an important practical problem, e.g. recruiting reviewers for reviewing conference, journal or project submissions, partner matching for research proposals, finding relevant M.Sc. or Ph.D. supervisors, etc. In this work, we discuss an expertise recommender system that is built on data extracted from the Blekinge Institute of Technology (BTH) instance of the institutional repository system DiVA (Digital Scientific Archive). DiVA is a publication and archiving platform for research publications and student essays used by 46 publicly funded universities and authorities in Sweden and the rest of the Nordic countries (www.diva-portal.org). The DiVA classification system is based on the three-level classification system of the Swedish Higher Education Authority (UKÄ) and Statistics Sweden (SCB). Using the classification terms associated with student M.Sc. and B.Sc. theses published in the DiVA platform, we have developed a prototype system which can be used to identify and recommend subject thesis supervisors in academia.

  • 5.
    Anwar, Mahwish
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Digitalization in Container Terminal Logistics: A Literature Review (2019). In: 27th Annual Conference of the International Association of Maritime Economists (IAME), 2019, p. 1-25, article id 141. Conference paper (Refereed).
    Abstract [en]

    Many terminals located in large ports, such as the Port of Rotterdam, the Port of Singapore and the Port of Hamburg, employ various emerging digital technologies to handle containers and information. Technologies deemed attractive by large ports include Artificial Intelligence (AI), Cloud Computing, Blockchain and the Internet of Things (IoT). The objective of this paper is to review the state of the art of the scientific literature on digital technologies that facilitate operations management for container terminal logistics. The studies are synthesized in the form of a classification matrix, and an analysis is performed. The primary studies consisted of 57 papers, selected out of an initial pool of over 2100 findings. Over 94% of the identified publications focused on AI, while 29% exploited IoT and Cloud Computing technologies combined. Research on Blockchain within the context of container terminals was nonexistent. The majority of the publications used numerical experiments and simulation for validation. A large amount of the scientific literature was dedicated to resource management and the scheduling of intra-logistic equipment, vessels, berths or container storage in the yard. Results drawn from the literature survey indicate that various research gaps exist. A discussion and an analysis of the review are presented, which could benefit stakeholders of small and medium-sized container terminals.

  • 6.
    Anwar, Mahwish
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    The feasibility of Blockchain solutions in the maritime industry (2019). Conference paper (Other academic).
    Abstract [en]

    Purpose / Value

    The concept of Blockchain technology in supply chain management is well discussed, yet inadequately theorized in terms of its applicability, especially within the maritime industry, which forms a fundamental node of the entire supply chain network. Moreover, the assumptive grounds associated with the technology have not been openly articulated, leading to unclear ideas about its applicability.

    Design/methodology/approach

    The research is divided into two stages. This paper (Stage One) uses an enhanced literature review for data collection in order to gauge the properties of Blockchain technology and to understand and map those characteristics onto the Bill of Lading process within the maritime industry. In Stage Two, an online questionnaire is conducted to assess the feasibility of Blockchain technology for different maritime use cases.

    Findings

    The research, collected and analysed partly from a deliverable of the Connect2SmallPort Project and partly from other literature, suggests that Blockchain can be an enabler for improving the maritime supply chain. The use case presented in this paper highlights the practicality of the technology. It was identified that Blockchain possesses characteristics suitable to mitigate the risks and issues pertaining to the paper-based Bill of Lading process.

    Research limitations

    The study will mature further after the execution of Stage Two. By the end of both stages, a framework for Blockchain adoption with a focus on the maritime industry will be proposed.

    Practical implications

    The proposed outcome indicates the practicality of the technology, which could benefit port stakeholders that wish to use Blockchain in processing Bills of Lading or contracts.

    Social implications

    The study may influence decision makers to consider the benefits of using Blockchain technology, thereby creating opportunities for the maritime industry to leverage the technology with government support.

  • 7.
    Arredal, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Eye Tracking’s Impact on Player Performance and Experience in a 2D Space Shooter Video Game (2018). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background. Although a growing market, most commercially available games today that feature eye-tracking support are rendered in a 3D perspective. Games rendered in 2D have seen little support for eye trackers from developers. By comparing the differences in player performance and experience between an eye tracker and a computer mouse when playing a classic 2D genre, the space shooter, this thesis aims to make an argument for the implementation of eye tracking in 2D video games.

    Objectives. Create a 2D space shooter video game where movement is handled through a keyboard but the input method for aiming alternates between a computer mouse and an eye tracker.

    Methods. Using a Tobii EyeX eye tracker, an experiment was conducted with fifteen participants. To measure their performance, three variables were used: accuracy, completion time and collisions. The participants played two modes of a 2D space shooter video game in a controlled environment. Depending on which mode was played, the input method for aiming was either an eye tracker or a computer mouse. Movement was handled using a keyboard in both modes. When the modes had been completed, a questionnaire was presented in which the participants rated their experience playing the game with each input method.

    Results. The computer mouse performed better on two of the three performance variables. On average, the computer mouse had better accuracy and completion time but more collisions. However, the data gathered from the questionnaire show that the participants had, on average, a better experience when playing with an eye tracker.

    Conclusions. The results from the experiment show better performance for participants using the computer mouse, but participants felt more immersed with the eye tracker, giving it a better score in all experience categories. With these results, this study hopes to encourage developers to implement eye tracking as an interaction method for 2D video games. However, future work is necessary to determine whether experience and performance increase or decrease as playtime gets longer.

  • 8.
    Avdic, Adnan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ekholm, Albin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Anomaly Detection in an e-Transaction System using Data Driven Machine Learning Models: An unsupervised learning approach in time-series data (2019). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background: Detecting anomalies in time-series data is a task that can be done with the help of data-driven machine learning models. This thesis investigates if, and how well, different machine learning models with an unsupervised approach can detect anomalies in the e-transaction system Ericsson Wallet Platform. The anomalies in our domain context are delays in the system.

    Objectives: The objective of this thesis is to compare four different machine learning models in order to find the most relevant one. The best-performing models are decided by the evaluation metric F1-score. An intersection of the best models is also evaluated in order to decrease the number of false positives and thereby make the model more precise.

    Methods: A relevant time-series data sample with 10-minute-interval data points from the Ericsson Wallet Platform was used. A number of steps were taken, such as data handling, pre-processing, normalization, training and evaluation. Two relevant features were trained separately as one-dimensional data sets. The two features relevant for finding delays in the system, which were used in this thesis, are Mean wait (ms) and Mean * N, where N is the number of calls to the system. The evaluation metrics used are true positives, true negatives, false positives, false negatives, accuracy, precision, recall, F1-score and the Jaccard index. The Jaccard index is a metric that reveals how similar the algorithms' detections are. Since the detection is binary, each data point in the time series is classified.

    Results: The results reveal the two best-performing models with regard to the F1-score. The intersection evaluation reveals if and how well a combination of the two best-performing models can reduce the number of false positives.

    Conclusions: The conclusion of this work is that some algorithms perform better than others. It is a proof of concept that such classification algorithms can separate normal from non-normal behavior in the domain of the Ericsson Wallet Platform.
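The Jaccard index mentioned in this abstract compares two algorithms' binary detections point by point: the size of the intersection of flagged points divided by the size of their union. A minimal sketch (the detection vectors below are invented for illustration, not data from the thesis):

```python
def jaccard_index(a, b):
    """Jaccard index of two binary detection vectors:
    |intersection| / |union| of the points flagged as anomalous."""
    flagged_a = {i for i, v in enumerate(a) if v}
    flagged_b = {i for i, v in enumerate(b) if v}
    if not flagged_a and not flagged_b:
        return 1.0  # neither model flags anything: identical detections
    return len(flagged_a & flagged_b) / len(flagged_a | flagged_b)

# Two models' anomaly flags over the same time series
model_1 = [0, 1, 1, 0, 0, 1]
model_2 = [0, 1, 0, 0, 1, 1]
print(jaccard_index(model_1, model_2))  # 2 shared / 4 flagged overall = 0.5
```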

  • 9.
    Bandari Swamy Devender, Vamshi Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Adike, Sneha
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Design and Performance of an Event Handling and Analysis Platform for vSGSN-MME event using the ELK stack (2019). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Data logging is the main activity to consider when maintaining a server or database in working condition without errors or failures. Data collection can be automatic, so no human presence is necessary. Storing log data for many days, and visualizing it, has become a significant problem in recent years. The SGSN-MME node is the main component of the GPRS network and handles all packet-switched data within the mobile operator's network. A lot of log data is generated and stored in file systems on the redundant File Server Boards in the SGSN-MME node. The evolution of the SGSN-MME is taking it from dedicated, purpose-built hardware into virtual machines in the cloud, where virtual file server boards fit very badly. The purpose of this thesis is to provide a better way to store the log data and to add visualization using the ELK stack. Extracting useful information from logs is one of the most important parts of this stack and is done in Logstash using its grok filters and a set of input, filter and output plug-ins, which help scale this functionality across various kinds of inputs (file, TCP, UDP, gemfire, stdin, UNIX, web sockets, and even IRC and Twitter, among others), filter them (using groks, grep, date filters, etc.) and finally write the output to Elasticsearch. The research methodology of this thesis is a qualitative approach: a study comparing the ELK concept with the legacy approach at Ericsson. A suitable approach and the best possible solution are given for storing log data in the vSGSN-MME node. Performance is also measured with multiple input providers, and the resulting graphs are analyzed. To perform the tests accurately, readings are taken in defined failure scenarios. From the test cases, a plot of the CPU load in the vSGSN-MME is provided, which indicates the most promising approach.
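Logstash's grok filters, mentioned above, are essentially named regular expressions that turn unstructured log lines into structured events. A rough Python analogue of that extraction step (the log format shown here is invented for illustration; the actual vSGSN-MME log layout is not given in the abstract):

```python
import re

# A grok pattern is, at heart, a named regular expression. This mimics
# what a Logstash grok filter does for a hypothetical log line format.
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+) "
    r"(?P<component>\S+) "
    r"(?P<message>.*)"
)

def parse_line(line):
    """Return a dict of named fields (as Logstash would emit an event),
    or None if the line does not match the pattern."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

event = parse_line("2019-05-02 10:15:30 ERROR board-3 file server unreachable")
print(event["level"], event["component"])  # ERROR board-3
```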

  • 10.
    Bergman Martinkauppi, Louise
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    He, Qiuping
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance Evaluation and Comparison of Standard Cryptographic Algorithms and Chinese Cryptographic Algorithms (2019). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background. China regulates the import, export, sale, and use of encryption technology within its borders. If a foreign company wants to develop or release a product in China, it needs to report its use of any encryption technology to the Office of State Commercial Cryptography Administration (OSCCA) to gain approval. SM2, SM3, and SM4 are cryptographic standards published by OSCCA and authorized for use in China. To comply with Chinese cryptography laws, organizations and companies may have to replace standard cryptographic algorithms in their systems with Chinese cryptographic algorithms such as SM2, SM3, and SM4. It is important to know beforehand how the replacement of algorithms will impact performance in order to determine future system costs.

    Objectives. Perform a theoretical study and performance comparison of standard cryptographic algorithms and Chinese cryptographic algorithms. The standard cryptographic algorithms studied are RSA, ECDSA, SHA-256, and AES-128; the Chinese cryptographic algorithms studied are SM2, SM3, and SM4.

    Methods. A literature analysis was conducted to gain knowledge of and collect information about the selected cryptographic algorithms in order to make a theoretical comparison. An experiment was conducted to measure how the algorithms perform and to be able to rate them.

    Results. The literature analysis provides a comparison that identifies design similarities and differences between the algorithms. The controlled experiment provides measurements of the metrics of the algorithms mentioned in the objectives.

    Conclusions. The digital signature algorithms SM2 and ECDSA have similar designs and also similar performance. SM2 and RSA have fundamentally different designs, and SM2 performs better than RSA when generating keys and signatures. When verifying signatures, RSA shows comparable performance in some cases and worse performance in others. The hash algorithms SM3 and SHA-256 have many design similarities, but SHA-256 performs slightly better than SM3. AES-128 and SM4 have many similarities but also a few differences; in the controlled experiment, AES-128 outperforms SM4 by a significant margin.
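A performance comparison of this kind ultimately reduces to a timing harness. As a sketch of the methodology only: Python's standard library does not ship SM3 or SM4, so SHA-256 stands in here, and the payload size and repeat count are arbitrary choices, not the thesis's experimental parameters:

```python
import hashlib
import timeit

def throughput_mb_s(hash_name, payload_mb=4, repeats=5):
    """Rough hashing throughput in MB/s for a hashlib algorithm.
    Takes the best of several runs to reduce scheduling noise."""
    data = b"\x00" * (payload_mb * 1024 * 1024)
    seconds = min(timeit.repeat(
        lambda: hashlib.new(hash_name, data).digest(),
        number=1, repeat=repeats))
    return payload_mb / seconds

print(f"SHA-256: {throughput_mb_s('sha256'):.0f} MB/s")
```

The same harness could be pointed at an SM3 implementation from a third-party library to reproduce a comparison of this shape.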

  • 11.
    Bertoni, Alessandro
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mechanical Engineering. Blekinge Institute of Technology.
    Hallstedt, Sophie
    Blekinge Institute of Technology, Faculty of Engineering, Department of Strategic Sustainable Development. Blekinge Institute of Technology.
    Dasari, Siva Krishna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge Institute of Technology.
    Andersson, Petter
    GKN Aerospace Engine Systems, SWE.
    Integration of Value and Sustainability Assessment in Design Space Exploration by Machine Learning: An Aerospace Application (2019). In: Design Science. Article in journal (Refereed).
  • 12.
    Björkman, Adam
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Kardos, Max
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Threat Analysis of Smart Home Assistants Involving Novel Acoustic Based Attack-Vectors (2019). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background. Smart home assistants are becoming more common in our homes. Often taking the form of a speaker, these devices enable communication via voice commands. Through this communication channel, users can, for example, order a pizza, check the weather, or call a taxi. When a voice command is given to the assistant, the command is sent to cloud services over the Internet, enabling a multitude of functions associated with risks regarding security and privacy. Furthermore, with an always-active Internet connection, smart home assistants are part of the Internet of Things, a class of devices that historically has not been secure. Therefore, it is crucial to understand the security situation and the risks that a smart home assistant brings with it.

    Objectives. This thesis aims to investigate and compile threats towards smart home assistants in a home environment. Such a compilation could be used as a foundation during the creation of a formal model for securing smart home assistants and other devices with similar properties.

    Methods. Through literature studies and threat modelling, current vulnerabilities towards smart home assistants and systems with similar properties were found and compiled. A few  vulnerabilities were tested against two smart home assistants through experiments to verify which vulnerabilities are present in a home environment. Finally, methods for the prevention and protection of the vulnerabilities were found and compiled.

    Results. Overall, 27 vulnerabilities towards smart home assistants and 12 towards similar systems were found and identified. The majority of the found vulnerabilities focus on exploiting the voice interface. In total, 27 methods to prevent vulnerabilities in smart home assistants or similar systems were found and compiled. Eleven of the found vulnerabilities did not have any reported protection methods. Finally, we performed one experiment consisting of four attacks against two smart home assistants with mixed results; one attack was not successful, while the others were either completely or partially successful in exploiting the target vulnerabilities.

    Conclusions. We conclude that vulnerabilities exist for smart home assistants and similar systems. The vulnerabilities differ in execution difficulty and impact. However, we consider smart home assistants safe enough to use with the accompanying protection methods activated.

  • 13.
    Boeva, Veselka
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Nordahl, Christian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Modeling Evolving User Behavior via Sequential Clustering (2019). Conference paper (Refereed).
    Abstract [en]

    In this paper we address the problem of modeling the evolution of clusters over time by applying sequential clustering. We propose a sequential partitioning algorithm that can be applied for grouping distinct snapshots of streaming data so that a clustering model is built on each data snapshot. The algorithm is initialized by a clustering solution built on available historical data. Then a new clustering solution is generated on each data snapshot by applying a partitioning algorithm seeded with the centroids of the clustering model obtained at the previous time interval. At each step the algorithm also conducts model-adapting operations in order to reflect the evolution of the clustering structure. In that way, it can deal with both the incremental and the dynamic aspects of modeling evolving behavior. In addition, the proposed approach is able to trace evolution back through the detection of cluster transitions, such as splits and merges. We have illustrated and initially evaluated our ideas on household electricity consumption data. The results show that the proposed sequential clustering algorithm models evolving behavior robustly, being able to mine changes and update the model accordingly.
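The seeding idea in this abstract, building each snapshot's model from the previous snapshot's centroids, can be sketched with a minimal one-dimensional k-means (pure Python; the readings and cluster count are invented, and the paper's model-adapting operations such as split/merge detection are omitted):

```python
def kmeans_1d(points, centroids, iters=10):
    """One-dimensional k-means. `centroids` is the seed, so the model
    built on snapshot t can initialise the model for snapshot t+1."""
    for _ in range(iters):
        # Assign each point to its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Recompute centroids; keep the old one if a cluster went empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Snapshot 1: historical consumption readings -> initial model
model = kmeans_1d([1.0, 1.2, 5.0, 5.2], centroids=[0.0, 10.0])
# Snapshot 2: new readings, clustered with the previous model as seed
model = kmeans_1d([1.1, 1.3, 6.0, 6.4], centroids=model)
print(model)
```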

  • 14.
    Boinapally, Kashyap
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Security Certificate Renewal Management (2019). Independent thesis Advanced level (degree of Master (Two Years)), 80 credits / 120 HE credits. Student thesis.
    Abstract [en]

    Context. An SSL-encrypted client-server communication is necessary to maintain the security and privacy of the communication. For SSL encryption to work, there must be a security certificate, which has a certain expiry period. Periodic renewal of the certificate after its expiry is a waste of time and effort on the part of the company.

    Objectives. In this study, a new system has been developed and implemented, which sends a certificate during prior communication and does not wait for the certificate to expire. Automating the process to a certain extent was done to not compromise the security of the system and to speed up the process and reduce the downtime.

    Methods. Experiments have been conducted to test the new system and compare it to the old system. The experiments were conducted to analyze the packets and the downtime occurring from certificate renewal.

    Results. The results of the experiments show a significant reduction in downtime, achieved through the implementation of the new system and semi-automation.

    Conclusions. The system has been implemented, and it greatly reduces the downtime occurring due to the expiry of the security certificates. Semi-Automation has been done to not hamper the security and make the system robust.
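The core policy change described here, renewing proactively rather than waiting for expiry, reduces to a simple decision rule. A hedged sketch (the 30-day margin is an invented parameter, not a value from the thesis):

```python
from datetime import datetime, timedelta

def should_renew(not_after, now, margin=timedelta(days=30)):
    """Renew proactively while the certificate is still valid, instead
    of waiting for it to expire (the source of renewal downtime)."""
    return now >= not_after - margin

expiry = datetime(2019, 12, 31)
print(should_renew(expiry, datetime(2019, 12, 15)))  # True: inside the margin
print(should_renew(expiry, datetime(2019, 6, 1)))    # False: plenty of validity left
```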

  • 15.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Multi-expert estimations of burglars' risk exposure and level of pre-crime preparation using coded crime scene data: Work in progress (2018). In: Proceedings - 2018 European Intelligence and Security Informatics Conference, EISIC 2018 / [ed] Brynielsson, J, Institute of Electrical and Electronics Engineers Inc., 2018, p. 77-80. Conference paper (Refereed).
    Abstract [en]

    Law enforcement agencies strive to link crimes perpetrated by the same offenders into crime series in order to improve investigation efficiency. Such crime linkage can be done using both physical traces (e.g., DNA or fingerprints) and 'soft evidence' in the form of offenders' modus operandi (MO), i.e. their behaviors during crimes. However, physical traces are only present for a fraction of crimes, unlike behavioral evidence. This work-in-progress paper presents a method for aggregating multiple criminal profilers' ratings of offenders' behavioral characteristics based on feature-rich crime scene descriptions. The method calculates consensus ratings from the individual experts' ratings, which are then used as a basis for classification algorithms. The classification algorithms can automatically generalize offenders' behavioral characteristics from cues in the crime scene data. Models trained on the consensus ratings are evaluated against models trained on individual profilers' ratings, to determine whether the consensus model shows improved performance over the individual models. © 2018 IEEE.
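Consensus ratings of the kind described can be computed by aggregating each expert's per-characteristic scores. A sketch assuming a median aggregation (the paper's actual aggregation rule, characteristic names, and rating scales are not given in the abstract; everything below is illustrative):

```python
from statistics import median

def consensus_ratings(expert_ratings):
    """Aggregate per-offender behavioural ratings from several profilers
    into one consensus rating per characteristic (median is used here;
    other aggregation rules would slot in the same way)."""
    characteristics = expert_ratings[0].keys()
    return {c: median(r[c] for r in expert_ratings) for c in characteristics}

ratings = [  # one dict per profiler, hypothetical 1-5 scales
    {"risk_exposure": 3, "pre_crime_preparation": 4},
    {"risk_exposure": 4, "pre_crime_preparation": 4},
    {"risk_exposure": 2, "pre_crime_preparation": 5},
]
print(consensus_ratings(ratings))  # {'risk_exposure': 3, 'pre_crime_preparation': 4}
```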

  • 16.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ickin, Selim
    Ericsson Research, SWE.
    Gustafsson, Jörgen
    Ericsson Research, SWE.
    Anomaly detection of event sequences using multiple temporal resolutions and Markov chains (2019). In: Knowledge and Information Systems, ISSN 0219-1377, E-ISSN 0219-3116. Article in journal (Refereed).
    Abstract [en]

    Streaming data services, such as video-on-demand, are getting increasingly more popular, and they are expected to account for more than 80% of all Internet traffic in 2020. In this context, it is important for streaming service providers to detect deviations in service requests due to issues or changing end-user behaviors in order to ensure that end-users experience high quality in the provided service. Therefore, in this study we investigate to what extent sequence-based Markov models can be used for anomaly detection by means of the end-users’ control sequences in the video streams, i.e., event sequences such as play, pause, resume and stop. This anomaly detection approach is further investigated over three different temporal resolutions in the data, more specifically: 1 h, 1 day and 3 days. The proposed anomaly detection approach supports anomaly detection in ongoing streaming sessions as it recalculates the probability for a specific session to be anomalous for each new streaming control event that is received. Two experiments are used for measuring the potential of the approach, which gives promising results in terms of precision, recall, F1-score and Jaccard index when compared to k-means clustering of the sessions. © 2019, The Author(s).
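A first-order Markov model over control events, as described above, can be fitted from normal sessions and used to score an ongoing session after each new event. A minimal sketch (the sessions and the floor probability for unseen transitions are invented; the paper's multiple temporal resolutions are omitted):

```python
from collections import defaultdict
from math import log

def fit_transitions(sessions):
    """Estimate first-order Markov transition probabilities from
    normal streaming-control sequences (play/pause/resume/stop)."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in sessions:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def session_score(session, probs, floor=1e-6):
    """Average negative log-likelihood per transition. Recomputable after
    every new event, so an ongoing session can be flagged early."""
    nll = sum(-log(probs.get(a, {}).get(b, floor))
              for a, b in zip(session, session[1:]))
    return nll / max(len(session) - 1, 1)

normal = [["play", "pause", "resume", "stop"], ["play", "stop"],
          ["play", "pause", "resume", "stop"]]
probs = fit_transitions(normal)
print(session_score(["play", "pause", "resume", "stop"], probs))  # low: expected behaviour
print(session_score(["pause", "pause", "pause", "stop"], probs))  # high: anomalous behaviour
```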

  • 17.
    Bond, David
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Nyblom, Madelein
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Evaluation of four different virtual locomotion techniques in an interactive environment2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background: Virtual Reality (VR) devices are becoming more and more common as game systems. Even though modern VR Head Mounted Displays (HMD) allow the user to walk in real life, they still limit the user to the space of the room they are playing in, and the player will need virtual locomotion in games where the environment size exceeds that of the real-life play area. Evaluations of multiple VR locomotion techniques have already been done, usually evaluating motion sickness or usability. A common theme in many of these is that the task is search based, in an environment with low focus on interaction. Therefore, in this thesis, four VR locomotion techniques are evaluated in an environment with focus on interaction, to see if a difference exists and whether one technique is optimal. The VR locomotion techniques are: Arm-Swinging, Point-Tugging, Teleportation, and Trackpad.

    Objectives: A VR environment with a focus on interaction is created in this thesis. In this environment the user has to grab and hold onto objects while using a locomotion technique. This study then evaluates which VR locomotion technique is preferred in the environment. This study also evaluates whether there is a difference in preference and motion sickness between an environment with high focus on interaction and one with low focus.

    Methods: A user study was conducted with 15 participants. Every participant performed a task with every VR locomotion technique, which involved interaction. After each technique, the participant answered a simulator sickness questionnaire, and an overall usability questionnaire.

    Results: The results achieved in this thesis indicated that Arm-Swinging was the most enjoyed locomotion technique in the overall usability questionnaire. However, Teleportation had the best ratings for tiredness and feeling overwhelmed, and it was the only technique that did not cause motion sickness.

    Conclusions: In conclusion, a difference can be seen between VR locomotion techniques in an environment with low focus on interaction and in one with high focus. This difference was seen in both the overall usability questionnaire and the motion sickness questionnaire. It was concluded that Arm-Swinging could be the most fitting VR locomotion technique for an interactive environment; however, Teleportation could be more optimal for longer sessions.

  • 18.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Svensson, Johan
    Telenor Sverige AB, SWE.
    Using conformal prediction for multi-label document classification in e-Mail support systems2019In: Lect. Notes Comput. Sci., Springer Verlag , 2019, Vol. 11536, p. 308-322Conference paper (Refereed)
    Abstract [en]

    For any corporation the interaction with its customers is an important business process. This is especially the case for resolving various business-related issues that customers encounter. Classifying the type of such customer service e-mails to provide improved customer service is thus important. The classification of e-mails makes it possible to direct them to the most suitable handler within customer service. We have investigated the following two aspects of customer e-mail classification within a large Swedish corporation. First, whether a multi-label classifier can be introduced that performs similarly to an already existing multi-class classifier. Second, whether conformal prediction can be used to quantify the certainty of the predictions without loss in classification performance. Experiments were used to investigate these aspects using several evaluation metrics. The results show that for most evaluation metrics, there is no significant difference between multi-class and multi-label classifiers, except for Hamming loss where the multi-label approach performed with a lower loss. Further, the use of conformal prediction did not introduce any significant difference in classification performance for either the multi-class or the multi-label approach. As such, the results indicate that conformal prediction is a useful addition that quantifies the certainty of predictions without negative effects on the classification performance, which in turn allows detection of statistically significant predictions. © Springer Nature Switzerland AG 2019.
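    The certainty quantification described here follows the standard inductive conformal prediction recipe: compare a test example's nonconformity score against a calibration set, and include each label whose p-value exceeds the significance level. A minimal sketch (the label names and nonconformity scores below are invented for illustration, not taken from the paper):

    ```python
    import numpy as np

    def conformal_p_value(cal_scores, test_score):
        """p-value: fraction of calibration nonconformity scores >= the test score
        (the +1 terms account for the test example itself)."""
        cal = np.asarray(cal_scores)
        return (np.sum(cal >= test_score) + 1) / (len(cal) + 1)

    def predict_set(cal_scores_per_label, test_scores, significance=0.2):
        """Multi-label conformal prediction: keep every label whose p-value
        exceeds the significance level."""
        return [label for label, s in test_scores.items()
                if conformal_p_value(cal_scores_per_label[label], s) > significance]

    # toy calibration nonconformity scores, e.g. 1 - predicted probability
    cal = {"billing": [0.1, 0.2, 0.15, 0.3, 0.25],
           "support": [0.4, 0.5, 0.45, 0.6, 0.55]}
    test = {"billing": 0.2, "support": 0.95}
    labels = predict_set(cal, test, significance=0.2)
    ```

    With a valid nonconformity measure, the prediction sets cover the true labels at the chosen significance level, which is what makes the predictions' certainty quantifiable.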

  • 19.
    Brodd, Adam
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Eriksson, Andreas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    User perception on procedurally generated cities affected with a heightmapped terrain parameter2019Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context: Procedural content generation (PCG) is a way of letting the computer algorithmically generate data, with little input from programmers. Procedural content generation is a useful tool for developers to create game worlds, content and much more, which can be tedious and time-consuming to do by hand.

    Objectives: The procedural generation of both a city and a height-mapped terrain parameter using Perlin noise, and the terrain parameter's effect on the city, is explored in this thesis. The objective is to find out whether a procedurally generated city with a heightmap parameter using Perlin noise is viable for use in games.

    Methods: An implementation generating both a height-mapped terrain parameter and a city using Perlin noise has been created, along with a user survey to test the generated city's and terrain parameter's viability in games.

    Results: This work successfully implemented an application that can generate cities affected by a height-mapped terrain parameter that is viable for use in games.

    Conclusions: This work concludes that it is possible to generate cities affected by a height-mapped terrain parameter by utilizing the Perlin noise algorithm. The generated cities and terrains are both viable and believable for use in games.
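    A heightmap parameter of the kind described can be sketched with lattice (value) noise, a simpler relative of true Perlin gradient noise that shares its smoothstep fade curve; the grid size and the flatness threshold used for buildable ground are arbitrary assumptions, not taken from the thesis:

    ```python
    import numpy as np

    def value_noise(size, grid=8, seed=0):
        """Smoothly interpolated lattice noise (a stand-in for Perlin noise)."""
        rng = np.random.default_rng(seed)
        lattice = rng.random((grid + 1, grid + 1))  # random heights at grid points
        xs = np.linspace(0, grid, size, endpoint=False)
        i = xs.astype(int)
        t = xs - i
        t = t * t * (3 - 2 * t)                     # smoothstep fade, as in Perlin's fade
        x0, x1 = i, i + 1
        h = np.empty((size, size))
        for r in range(size):                       # bilinear blend of lattice corners
            a = lattice[x0[r], x0] * (1 - t) + lattice[x0[r], x1] * t
            b = lattice[x1[r], x0] * (1 - t) + lattice[x1[r], x1] * t
            h[r] = a * (1 - t[r]) + b * t[r]
        return h

    heightmap = value_noise(64, grid=8)
    # a city generator can then place buildings only where terrain is flat enough
    gy, gx = np.gradient(heightmap)
    buildable = np.hypot(gx, gy) < 0.02
    ```

    Feeding the boolean `buildable` mask into a city layout step is one simple way to let the terrain parameter affect the generated city.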

  • 20.
    Carlsson, Anders
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Kuzminykh, Ievgeniia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Gustavsson, Rune
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Virtual Security Labs Supporting Distance Education in ReSeLa Framework2019In: Advances in Intelligent Systems and Computing / [ed] Auer M.E.,Tsiatsos T., Springer Verlag , 2019, Vol. 917, p. 577-587Conference paper (Refereed)
    Abstract [en]

    To meet the high demand of educating the next generation of MSc students in Cyber security, we propose a well-composed curriculum and a configurable cloud based learning support environment ReSeLa. The proposed system is a result of the EU TEMPUS project ENGENSEC and has been extensively validated and tested. © 2019, Springer Nature Switzerland AG.

  • 21.
    Cavallin, Fritjof
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Pettersson, Timmie
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Real-time View-dependent Triangulation of Infinite Ray Cast Terrain2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. Ray marching is a technique that can be used to render images of infinite terrains defined by a height field by sampling consecutive points along a ray until the terrain surface is intersected. However, this technique can be expensive, and does not generate a mesh representation, which may be useful in certain use cases.

    Objectives. The aim of the thesis is to implement an algorithm for view-dependent triangulation of infinite terrains in real-time without making use of any preprocessed data, and compare the performance and visual quality of the implementation with that of a ray marched solution.

    Methods. Performance metrics for both implementations are gathered and compared. Rendered images from both methods are compared using an image quality assessment algorithm.

    Results. In all tests performed, the proposed method performs better in terms of frame rate than the ray marched version. The visual similarity between the two methods depends highly on the quality setting of the triangulation.

    Conclusions. The proposed method can perform better than a ray marched version, but is more reliant on CPU processing, and can suffer from visual popping artifacts as the terrain is refined.
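    The ray-marching baseline that the thesis compares against can be sketched as fixed-step sampling of a height field along a ray, with binary refinement once the ray dips below the surface; the height function and step sizes below are illustrative assumptions:

    ```python
    import math

    def terrain_height(x, z):
        """Procedural height field; any 2D function works here."""
        return 0.3 * math.sin(x) * math.cos(z)

    def ray_march(origin, direction, max_dist=50.0, step=0.1):
        """Step along the ray until it goes below the height field, then bisect."""
        t = 0.0
        while t < max_dist:
            px = origin[0] + direction[0] * t
            py = origin[1] + direction[1] * t
            pz = origin[2] + direction[2] * t
            if py < terrain_height(px, pz):
                lo, hi = t - step, t
                for _ in range(16):          # binary refinement of the hit point
                    mid = 0.5 * (lo + hi)
                    x = origin[0] + direction[0] * mid
                    y = origin[1] + direction[1] * mid
                    z = origin[2] + direction[2] * mid
                    if y < terrain_height(x, z):
                        hi = mid
                    else:
                        lo = mid
                return hi
            t += step
        return None  # ray escaped to the sky: no intersection

    # camera 2 units above the terrain, looking down at 45 degrees
    hit = ray_march((0.0, 2.0, 0.0), (0.0, -math.sqrt(0.5), math.sqrt(0.5)))
    ```

    The many height-field samples per ray are what makes this expensive per pixel, and since only a distance is returned, no mesh representation is produced, which motivates the triangulated alternative.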

  • 22.
    Chapala, Usha Kiran
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Peteti, Sridhar
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Continuous Video Quality of Experience Modelling using Machine Learning Model Trees1996Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Adaptive video streaming is perpetually influenced by unpredictable network conditions, which causes playback interruptions like stalling, rebuffering and video bit rate fluctuations. This leads to potential degradation of end-user Quality of Experience (QoE) and may make users churn from the service. Video QoE modelling that precisely predicts the end user's QoE under these unstable conditions is therefore quickly taken into consideration, and the service provider requires root cause analysis for these degradations. These sudden changes in trend are not visible from monitoring the data from the underlying network service. Thus, it is challenging to detect such changes and to model the instantaneous QoE. For this modelling, continuous-time QoE ratings are taken into consideration rather than the overall end QoE rating per video. To reduce the risk of users churning, the network providers should give the best quality to the users.

    In this thesis, we propose QoE modelling to analyze changes in user reactions over time using machine learning models. The machine learning models are used to predict the QoE ratings and the change patterns in the ratings. We test the model on a publicly available video quality dataset which contains users' subjective QoE ratings for network distortions. The M5P model tree algorithm is used for the prediction of user ratings over time. The M5P model yields mathematical equations, and these equations lead to more insight. Results of the algorithm show that the model tree is a good approach for predicting continuous QoE and for detecting change points in the ratings. It is shown to which extent these algorithms can be used to estimate changes. The analysis of the model provides valuable insights into the exponential transitions between different levels of predicted ratings. The outcome of the analysis shows that when quality decreases, user ratings decrease faster than they increase when quality improves over time. Earlier work on the exponential transitions of instantaneous QoE over time is supported by the model tree with respect to user reactions to sudden changes such as video freezes.

  • 23.
    Chen, Xiaoran
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Image enhancement effect on the performance of convolutional neural networks2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Image enhancement algorithms can be used to enhance the visual quality of images for human vision. But can image enhancement algorithms also be used in the field of computer vision? The convolutional neural network (CNN), currently the most powerful image classifier, has excellent performance in the field of image recognition. This paper explores whether image enhancement algorithms can be used to improve the performance of convolutional neural networks.

    Objectives. The purpose of this paper is to explore the effect of image enhancement algorithms on the performance of CNN models in deep learning and transfer learning, respectively. The article selected five different image enhancement algorithms: contrast limited adaptive histogram equalization (CLAHE), the successive mean quantization transform (SMQT), adaptive gamma correction, the wavelet transform, and the Laplace operator.

    Methods. In this paper, experiments are used as the research method. Three groups of experiments are designed; they respectively explore whether the enhancement of grayscale images can improve the performance of CNNs in deep learning, whether the enhancement of color images can improve the performance of CNNs in deep learning, and whether the enhancement of RGB images can improve the performance of CNNs in transfer learning.

    Results. In deep learning, when training a complete CNN model, using the Laplace operator to enhance grayscale images can improve the recall rate of the CNN. However, the remaining image enhancement algorithms cannot improve the performance of the CNN on either the grayscale or the color image datasets. In addition, in transfer learning, when fine-tuning the pre-trained CNN model, using contrast limited adaptive histogram equalization (CLAHE), the successive mean quantization transform (SMQT), the wavelet transform, or the Laplace operator will reduce the performance of the CNN.

    Conclusions. Experiments show that in deep learning, using image enhancement algorithms may improve CNN performance when training complete CNN models, but not all image enhancement algorithms can improve CNN performance; in transfer learning, when fine-tuning the pre-trained CNN model, image enhancement algorithms may reduce the performance of the CNN.
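    For reference, CLAHE builds on plain histogram equalization, applying it per tile with a clip limit on the histogram. A minimal sketch of the global variant (not the thesis code; the low-contrast test image is synthetic) is:

    ```python
    import numpy as np

    def equalize_hist(img):
        """Global histogram equalization of an 8-bit grayscale image.
        CLAHE applies the same remapping per tile, with a clipped histogram."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf_min = cdf[np.nonzero(cdf)][0]           # first non-empty bin
        lut = np.clip(
            np.round((cdf - cdf_min) / (img.size - cdf_min) * 255), 0, 255
        ).astype(np.uint8)                          # intensity remapping table
        return lut[img]

    rng = np.random.default_rng(0)
    low_contrast = rng.integers(100, 140, size=(32, 32), dtype=np.uint8)
    enhanced = equalize_hist(low_contrast)
    ```

    The remapping stretches the occupied intensity range toward the full 0-255 scale, which is the contrast gain the thesis feeds into the CNN pipelines.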

  • 24.
    Sjödahl, Dan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cascaded Voxel Cone-Tracing Shadows: A Computational Performance Study2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Real-time shadows in 3D applications have for decades been implemented with a solution called Shadow Mapping or some variant of it. This solution is easy to implement and has good computational performance; nevertheless, it suffers from some problems and limitations. Newer alternatives exist, and one of them is based on a technique called Voxel Cone-Tracing. This can be combined with a technique called Cascading to create Cascaded Voxel Cone-Tracing Shadows (CVCTS).

    Objectives. To measure the computational performance of CVCTS in order to gain better insight into it, to provide data and findings that help developers make an informed decision about whether this technique is worth exploring, and to identify where the performance problems of the solution lie.

    Methods. A simple implementation of CVCTS was implemented in OpenGL aimed at simulating a solution that could be used for outdoor scenes in 3D applications. It had several different parameters that could be changed. Then computational performance measurements were made with these different parameters set at different settings.

    Results. The data was collected and analyzed before drawing conclusions. The results showed several parts of the implementation that could potentially be very slow and why this was the case.

    Conclusions. The slowest parts of the CVCTS implementation were the Voxelization and Cone-Tracing steps. It might be possible to use the CVCTS solution from the thesis in, for example, a game if the settings are not too high, but that is a stretch. Little time could be spent during the thesis on optimizing the solution, so it is possible that its performance could be increased.

  • 25.
    Dasari, Siva Krishna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge Institute of Technology.
    Tree Models for Design Space Exploration in Aerospace Engineering2019Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    A crucial issue in the design of aircraft components is the evaluation of a large number of potential design alternatives. This evaluation involves expensive procedures, which consequently slow down the search for optimal design samples. As a result, the scarce number of design samples, combined with a high-dimensional parameter space and high non-linearity, poses issues for learning surrogate models. Furthermore, surrogate models have more issues in handling qualitative (discrete) data than quantitative (continuous) data. These issues call for investigations of surrogate modelling methods for the most effective use of available data.

    The thesis goal is to support engineers in the early design phase of the development of new aircraft engines, specifically of an engine component known as the Turbine Rear Structure (TRS). For this, tree-based approaches are explored for surrogate modelling, for the purpose of exploring larger search spaces and speeding up the evaluation of design alternatives. First, we have investigated the performance of tree models on the design concepts of the TRS. Second, we have presented an approach to explore the design space using tree models, namely Random Forests. This approach includes hyperparameter tuning and the extraction of parameter importance and if-then rules from surrogate models for a better understanding of the design problem. With this approach, we have shown that the performance of tree models improved with hyperparameter tuning when using design concept data of the TRS. Third, we performed sensitivity analysis to study thermal variations on the TRS and hence support robust design using tree models. Furthermore, the performance of tree models has been evaluated on linear and non-linear mathematical functions. The results of this study have shown that tree models fit well on non-linear functions. Last, we have shown how tree models support the integration of value and sustainability parameter data (quantitative and qualitative) together with TRS design concept data in order to assess these parameters' impact on the product life cycle in the early design phase.

     

  • 26.
    Dasari, Siva Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge Institute of Technology.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge Institute of Technology.
    Andersson, Petter
    GKN Aerospace Engine Systems, SWE.
    Predictive Modelling to Support Sensitivity Analysis for Robust Design in Aerospace Engineering. Article in journal (Refereed)
  • 27.
    Dasari, Siva Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Andersson, Petter
    GKN Aerospace Engine Systems, SWE.
    Random Forest Surrogate Models to Support Design Space Exploration in Aerospace Use-case2019In: IFIP Advances in Information and Communication Technology, Springer-Verlag New York, 2019, Vol. 559Conference paper (Refereed)
    Abstract [en]

    In engineering, design analyses of complex products rely on computer-simulated experiments. However, high-fidelity simulations can take significant time to compute, and it is impractical to explore the design space by conducting simulations alone because of time constraints. Hence, surrogate modelling is used to approximate the original simulations. Since simulations are expensive to conduct, the sample size is generally limited in aerospace engineering applications. This limited sample size, together with the non-linearity and high dimensionality of the data, makes it difficult to generate accurate and robust surrogate models. The aim of this paper is to explore the applicability of Random Forests (RF) for constructing surrogate models to support design space exploration. RF generates meta-models, or ensembles of decision trees, and is capable of fitting highly non-linear data given quite small samples. To investigate this applicability, the paper presents an approach to construct surrogate models using RF, including hyperparameter tuning to improve the RF model's performance and the extraction of design parameter importance and if-then rules from the RF models for a better understanding of the design space. To demonstrate the approach, quantitative experiments are conducted with Turbine Rear Structure use-case datasets from the aerospace industry, and the results are presented.
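    The described approach (hyperparameter tuning plus parameter-importance extraction) can be sketched with a standard Random Forest library; the synthetic response function, sample size, and grid values below are assumptions for illustration, not the paper's TRS data:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import GridSearchCV

    # synthetic stand-in for expensive simulations: the response depends
    # only on the first two of four design parameters
    rng = np.random.default_rng(0)
    X = rng.random((120, 4))                 # deliberately small sample
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

    # hyperparameter tuning via cross-validated grid search
    search = GridSearchCV(
        RandomForestRegressor(random_state=0),
        {"n_estimators": [50, 100], "max_depth": [4, None]},
        cv=3,
    )
    search.fit(X, y)
    surrogate = search.best_estimator_

    # design-parameter importance extracted from the tuned surrogate
    importance = surrogate.feature_importances_
    ```

    The cheap `surrogate.predict` then stands in for the simulator during design space exploration, and the importances indicate which design parameters drive the response.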

  • 28.
    Fiedler, Markus
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Kelkkanen, Viktor
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Network-induced temporal disturbances in virtual reality applications2019In: 2019 11th International Conference on Quality of Multimedia Experience, QoMEX 2019, Institute of Electrical and Electronics Engineers Inc. , 2019Conference paper (Refereed)
    Abstract [en]

    Virtual Reality (VR) applications put high demands on software and hardware in order to enable an immersive experience for the user and avoid causing simulator sickness. As soon as networks become part of the Motion-To-Photon (MTP) path between rendering and display, there is a risk for extraordinary delays that may impair Quality of Experience (QoE). This short paper provides an overview of latency measurements and models that are applicable to the MTP path, complemented by demands on user and network levels. It specifically reports on freeze duration measurements using a commercial TPCAST wireless VR solution, and identifies a corresponding stochastic model of the freeze length distribution, which may serve as disturbance model for VR QoE studies. © 2019 IEEE.

  • 29.
    Floderus, Sebastian
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Rosenholm, Linus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    An educational experiment in discovering spear phishing attacks2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background: Spear phishing attacks use social engineering and target a specific person in order to steal credential information or infect the user's computer with malware. They are often carried out through email, and it can be very hard to spot the difference between a legitimate email and a scam email. Cybercrime is a growing problem, and there are many ways to inform and educate individuals on the subject.

    Objectives: This study intends to perform an experiment to see if an educational support tool can be used to better identify phishing emails, and furthermore to see if there is a difference in susceptibility between students from different university programs.

    Methods: A qualitative research study was used to gain the understanding necessary to properly develop a phishing educational tool. A pretest-posttest experiment was done to see if there is an improvement in results between an experimental group that received education and a control group that did not.

    Results: The results show an overall higher score for the technical program compared to the non-technical one. Comparing the pretest with the posttest shows an increase in score for the non-technical program and a decrease in score for the technical program. Furthermore, 58% of the non-technical students who started the test did not complete it.

    Conclusions: There is a noticeable difference between the programs in students' susceptibility to scam emails. However, further research is needed to explore to what extent the education process had an impact.

  • 30.
    Folino, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Självrättande programmeringstenta2019Report (Other academic)
    Abstract [sv]

    How can we best examine fundamental programming skills in an introductory programming course? We created a self-correcting examination format in which students can receive feedback during the exam, and increased throughput by 20%.

  • 31.
    Fransson, Jonatan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Hiiriskoski, Teemu
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Measuring Immersion and Enjoyment in a 2D Top-Down Game by Replacing the Mouse Input with Eye Tracking2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Eye tracking has been evaluated and tried in different 2D settings for research purposes. Most commercial games that use eye tracking treat it as an assistive extra input method, and are focused on third- or first-person perspectives. Few 2D games have been developed with eye tracking as an input method. This thesis aims to test eye tracking as a replacement input method, with a chosen set of mechanics, for the purpose of playing a 2D top-down game with eye tracking as the main input method.

    Objectives. To test eye tracking in a 2D top-down game and use it as a replacement input method for the mouse in a novel effort to evaluate immersion and enjoyment.

    Method. To conduct this study the Tobii 4C eye tracker is used as the replacement peripheral in a 2D game prototype developed for the study. The game prototype is developed with the Unity game engine which the participants played through twice with a different input mode each time. Once with a keyboard and mouse and a second time with a keyboard and an eye tracker. The participants played different modes in alternating order to not sway the results. For the game prototype three different mechanics were implemented, to aim, search for hidden items and remove shadows. To measure immersion and enjoyment an experiment was carried out in a controlled manner, letting participants play through the game prototype and evaluating their experience. To evaluate the experience the participants answered a questionnaire with 12 questions relating to their perceived immersion and a small interview with 5 questions about their experience and perceived enjoyment. The study had a total of 12 participants.

    Results. The results from the data collected through the experiment indicate that the participants enjoyed the game more and felt more involved, with 10 out of 12 participants answering that they felt more involved in the game using eye tracking compared to the mouse. Analyzing the interviews, the participants stated that eye tracking made the game more difficult and less natural to control compared to the mouse. There is a potential problem that might sway the results toward eye tracking: most participants stated that eye tracking was a new experience, and none of the participants had used it to play video games before.

    Conclusions. The results from the questionnaire support the hypothesis statistically, with a p-value of 0.02 (< 0.05) for both increased involvement and enjoyment using eye tracking, although the result might be biased by the participants' inexperience with eye tracking in video games. Most of the participants reacted positively towards eye tracking, with the most common reason being that it was a new experience to them.

  • 32.
    García Martín, Eva
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Rodrigues, Crefeda Faviola
    University of Manchester, GBR.
    Riley, Graham
    University of Manchester, GBR.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Estimation of energy consumption in machine learning2019In: Journal of Parallel and Distributed Computing, ISSN 0743-7315, E-ISSN 1096-0848, p. 75-88Article in journal (Refereed)
    Abstract [en]

    Energy consumption has been widely studied in the computer architecture field for decades. While the adoption of energy as a metric in machine learning is emerging, the majority of research is still primarily focused on obtaining high levels of accuracy without any computational constraint. We believe that one of the reasons for this lack of interest is the machine learning community's lack of familiarity with approaches to evaluate energy consumption. To address this challenge, we present a review of the different approaches to estimate energy consumption in general and in machine learning applications in particular. Our goal is to provide useful guidelines to the machine learning community, giving them the fundamental knowledge to use and build specific energy estimation methods for machine learning algorithms. We also present the latest software tools that give energy estimation values, together with two use cases that enhance the study of energy consumption in machine learning.

  • 33.
    Ginka, Anusha
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Salapu, Venkata Satya Sameer
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Optimization of Packet Throughput in Docker Containers2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Container technology has gained popularity in recent years, mainly because it enables a fast and easy way to package, distribute and deploy applications and services. Latency and throughput have a high impact on user satisfaction in many real-time, critical and large-scale online services. Although the use of microservices architecture in cloud-native applications has enabled advantages in terms of application resilience, scalability, fast software delivery and the use of minimal resources, the packet processing rates are not correspondingly higher. This is mainly due to the overhead imposed by the design and architecture of the network stack. Packet processing rates can be improved by making changes to the network stack and without necessarily adding more powerful hardware.

    In this research, a study of various high-speed packet processing frameworks is presented, and a software high-speed packet I/O solution, i.e., one that is as hardware-agnostic as possible, is identified to improve packet throughput in container technology. The proposed solution is identified based on whether or not it involves making changes to the underlying hardware. The proposed solution is then evaluated in terms of packet throughput for different container networking modes. A comparison of the proposed solution with a simple UDP client-server application is also presented for the different container networking modes. From the results obtained, it is concluded that the packet mmap client-server application has higher performance than the simple UDP client-server application.
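    For context, a simple UDP client-server throughput measurement of the kind used as the baseline here can be sketched with the standard library alone; the payload size and packet count are arbitrary choices, not the thesis's test setup:

    ```python
    import socket
    import threading
    import time

    def udp_sink(sock, counter):
        """Count incoming datagrams until the stream goes quiet."""
        sock.settimeout(0.5)
        while True:
            try:
                sock.recv(2048)
            except socket.timeout:
                return
            counter[0] += 1

    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))          # kernel picks a free port
    port = server.getsockname()[1]
    received = [0]
    sink = threading.Thread(target=udp_sink, args=(server, received))
    sink.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"x" * 512
    sent = 2000
    start = time.perf_counter()
    for _ in range(sent):
        client.sendto(payload, ("127.0.0.1", port))
    elapsed = time.perf_counter() - start
    sink.join()                            # sink exits after 0.5 s of silence
    client.close()
    server.close()
    pps = sent / elapsed                   # offered load in packets per second
    ```

    Comparing the send rate with the count actually received over each container networking mode exposes where the kernel network stack drops or delays packets, which is the overhead the higher-speed I/O frameworks aim to bypass.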

  • 34.
    Goswami, Prashant
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. BTH.
    Markowicz, Christian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Hassan, Ali
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Real-time particle-based snow simulation on the GPU2019In: Eurographics Symposium on Parallel Graphics and Visualization / [ed] Hank Childs and Stefan Frey, Porto: Eurographics - European Association for Computer Graphics, 2019Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel real-time particle-based method for simulating snow on the GPU. Our method captures compression and bonding between snow particles, and incorporates thermodynamics to model the realistic behavior of snow. The presented technique is computationally inexpensive, and is capable of supporting rendering in addition to physics simulation at high frame rates. The method is completely parallel and is implemented using CUDA. Its high efficiency and simplicity make our method an ideal candidate for integration in existing game SDK frameworks.

  • 35.
    Guo, Yang
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Heterogeneous Knowledge Sharing in eHealth: Modeling, Validation and Application2019Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Knowledge sharing has become an important issue in the eHealth field for improving the quality of healthcare service. However, since eHealth is a multidisciplinary and cross-organizational field, knowledge sharing is a serious challenge when developing eHealth systems. Thus, this thesis studies heterogeneous knowledge sharing in eHealth and proposes a knowledge sharing ontology. The study consists of three main parts: modeling, validation and application.

    In the modeling part, knowledge sharing in eHealth is studied from two main aspects: the first is the heterogeneous knowledge of different healthcare actors, and the second is the interactions among various healthcare actors. In this part, the contribution is to propose an Activity Theory based Ontology (ATO) model to highlight and represent these two aspects of eHealth knowledge sharing, which is helpful for designing efficient eHealth systems.

    In the validation part, a questionnaire based survey is conducted to practically validate the feasibility of the proposed ATO model. The survey results are analyzed to explore the effectiveness of the proposed model for designing efficient knowledge sharing in eHealth. Further, a web based software prototype is constructed to validate the applicability of the ATO model for practical eHealth systems. In this part, the contribution is to explore and show how the proposed ATO model can be validated.

    In the application part, the importance and usefulness of applying the proposed ATO model to solve two real problems are addressed. These two problems are healthcare decision making and appointment scheduling. There is a similar basic challenge in both these problems: a healthcare provider (e.g., a doctor) needs to provide optimal healthcare service (e.g., suitable medicine or fast treatment) to a healthcare receiver (e.g., a patient). Here, the optimization of the healthcare service needs to be achieved in accordance with eHealth knowledge which is distributed in the system and needs to be shared, such as the doctor’s competence, the patient’s health status, and priority control on patients’ diseases. In this part, the contribution is to propose a smart system called eHealth Appointment Scheduling System (eHASS) based on ATO model.

    This research work has been presented in eight conference and journal papers, which, along with an introductory chapter, are included in this compilation thesis.

  • 36.
    Guo, Yang
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge institute of Technology.
    Yao, Yong
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    On Performance of Prioritized Appointment Scheduling for Healthcare2019In: Journal of Service Science and Management, ISSN 1940-9893, E-ISSN 1940-9907, Vol. 12, p. 589-604Article in journal (Refereed)
    Abstract [en]

    Designing appointment scheduling is a challenging task in the development of healthcare systems. An efficient solution approach can provide high-quality healthcare service between care providers (CPs) and care receivers (CRs). In this paper, we consider a healthcare system with heterogeneous CRs, in terms of urgent and routine CRs. Our suggested model assumes that the system gives service priority to urgent CRs by allowing them to interrupt ongoing routine appointments. An appointment handoff scheme is suggested for the interrupted routine appointments, so that routine CRs can attempt to re-establish their appointments with other available CPs. With these considerations, we study the scheduling performance of the system using a Markov chain based modeling approach. The numerical analysis is reported, and a simulation experiment is conducted to validate the numerical results.
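The priority-with-handoff policy described above can be sketched as a toy event-driven simulation. The event format, counters, and handoff rule here are illustrative assumptions, not the paper's Markov chain model:

```python
def simulate(n_cps, events):
    """Toy priority scheduling. `events` is a list of ('arrive', kind) with
    kind 'urgent' or 'routine', or ('depart', cp_index). An urgent CR may
    interrupt a routine appointment; the interrupted routine CR waits and
    re-establishes with the next CP that becomes available (the handoff)."""
    cps = [None] * n_cps                 # None = free, else the kind served
    waiting = []                         # interrupted routine CRs awaiting handoff
    stats = {"served": 0, "preempted": 0, "handoffs": 0, "blocked": 0}
    for ev, arg in events:
        if ev == "depart":
            cps[arg] = None              # CP finishes its current appointment
        else:
            kind = arg
            if None in cps:              # a free CP takes any arrival
                cps[cps.index(None)] = kind
                stats["served"] += 1
            elif kind == "urgent" and "routine" in cps:
                i = cps.index("routine")  # urgent CR interrupts a routine one
                cps[i] = "urgent"
                waiting.append("routine")
                stats["preempted"] += 1
                stats["served"] += 1
            else:
                stats["blocked"] += 1
        # handoff: any freed CP picks up a waiting interrupted routine CR
        while waiting and None in cps:
            cps[cps.index(None)] = waiting.pop(0)
            stats["handoffs"] += 1
    return stats
```

With one CP, a routine arrival followed by an urgent one yields a preemption, and the routine CR is handed off as soon as the CP frees.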

  • 37.
    Gurram, Karthik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Chappidi, Maheshwar Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A Search-Based Approach for Robustness Testing of Web Applications2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: This thesis deals with robustness testing of web applications in different web browsers, using a Selenium WebDriver to automate the browser. To increase the efficiency of this automated testing, we use a robustness method. Robustness testing is the process of exercising a system implementation under exceptional execution conditions to check whether it still fulfils its robustness requirements. Such robustness tests often apply random algorithms to select the actions to be executed on web applications. A search-based technique was used to automatically generate effective test cases, consisting of initial conditions and fault sequences. The success criterion in most cases is: "if it does not crash or hang the application, then it is robust".

    Problem: Software testing consumes a lot of time in the software development life cycle; writing test cases is labour-intensive and expensive. There has always been a need to decrease testing time. Manual testing requires substantial effort when measured in person-months [1]. To overcome this problem, we use a search-based approach for robustness testing of web applications, which can dramatically reduce the human effort, time and costs related to testing.

    Objective: The purpose of this thesis is to develop an automated approach to robustness testing of web applications, focusing on revealing defects related to sequences of events triggered by a web system. To do so, we employ search-based techniques (e.g., the NSGA-II algorithm [1]). The main focus is on Ericsson Digital BSS systems, with special emphasis on robustness testing. The main purpose of this master thesis is to investigate how automated robustness testing can be done so that the effort of keeping tests up to date is minimized when the functionality of the application changes. This kind of automated testing depends heavily on the structure of the product being tested. In this thesis, the test object was structured in a way that made the testing method simple for fault revelation and less time-consuming.

    Method: In this approach, a meta-heuristic search-based genetic algorithm is used to make robustness testing of web applications more efficient. To evaluate the effectiveness of the proposed approach, an experimental procedure is adopted, for which an experimental testbed is set up. The effectiveness of the proposed approach is measured by two objectives: fault revelation and test sequence length. It is also measured by evaluating the feasibility and cost-effectiveness of the output test cases.

    Results: The results collected from our approach show that by reducing the test sequence length we can reduce testing time, and that by using the NSGA-II algorithm we revealed as many faults as possible when testing web applications at Ericsson.

    Conclusion: The attempt at robustness testing of web applications partly succeeded. This kind of robustness testing strongly depends on the algorithm used. We conclude that by using these two objectives, we can reduce both the cost and the time of testing.
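As a much-simplified, hypothetical sketch of the search-based idea, the following single-objective genetic algorithm collapses the two thesis objectives (fault revelation, sequence length) into one weighted fitness over a toy system under test; it is not the NSGA-II multi-objective setup used in the thesis:

```python
import random

ACTIONS = ["open", "edit", "delete", "close"]

def triggers_fault(seq):
    # Toy system under test: a (hypothetical) fault fires whenever a
    # 'delete' occurs after an 'open' somewhere in the event sequence.
    opened = False
    for a in seq:
        if a == "open":
            opened = True
        elif a == "delete" and opened:
            return True
    return False

def fitness(seq):
    # Reward fault revelation, prefer shorter sequences (the two thesis
    # objectives collapsed into one weighted score for this sketch).
    return (100 if triggers_fault(seq) else 0) - len(seq)

def evolve(pop_size=30, seq_len=8, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(ACTIONS) for _ in range(seq_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, max(2, min(len(a), len(b))))
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.3 and len(child) > 1:
                child.pop(rng.randrange(len(child)))  # mutation: shorten
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Elitism keeps the best fault-revealing sequence alive while the shortening mutation drives the length objective down.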

  • 38.
    Gustafsson, Jacob
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Törnkvist, Adam
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Secure handling of encryption keys for small businesses: A comparative study of key management systems2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background: A recent study shows that key management in the corporate world is very painful due to, among other reasons, a lack of knowledge and resources. Instead, some companies embed encryption keys and other software secrets directly in the source code of the application that uses them, introducing the risk of exposing the secrets. Today, there are multiple systems for managing keys. However, it can be hard to pick a suitable one.

    Objectives: The objectives of the thesis are to identify available key management systems for securing secrets in software, evaluate their eligibility to be used by small businesses based on various attributes and recommend a best practice to configure the most suited system for managing software secrets.

    Methods: Key management systems are identified through an extensive search, using both scientific and non-scientific search engines. Identified key management systems were compared against a set of requirements created from a small business perspective. The systems that fulfilled the requirements were implemented and comprehensively evaluated through SWOT analyses based on various attributes. Each system was then scored and compared against each other based on these attributes. Lastly, a best practice guide for the most suitable key management system was established.

    Results: During the thesis, a total of 54 key management systems were identified, with various features and purposes. Out of these 54 systems, five key management systems were comprehensively compared: Pinterest Knox, HashiCorp Vault, Square Keywhiz, OpenStack Barbican, and CyberArk Conjur. Out of these five, HashiCorp Vault was deemed the most suitable system for small businesses.

    Conclusions: There is currently a broad selection of key management systems available. The quality, price, and intended use of these vary, which makes it time-consuming to identify the most suitable system for a given set of needs. The thesis concludes that HashiCorp Vault is the most suitable system based on the needs presented. However, the thesis can also be used by businesses with other needs as a guideline to aid in choosing a key management system.
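A minimal sketch of the practice the thesis recommends: loading secrets from the process environment instead of embedding them in source code. The variable name and error handling are illustrative; in production the value would be injected by a key management system such as HashiCorp Vault (for example via an agent or entrypoint script):

```python
import os

def load_secret(name):
    """Fetch a secret from the environment rather than hardcoding it.
    Failing loudly when the secret is missing avoids silently running
    with an empty or default key."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} not provisioned")
    return value
```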

  • 39.
    Heiding, John
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Increasing Phenotype Diversity In Terrain Generation Using Fourier Transform: Implementation of Fourier transform as an intermediate phenotype for genetic algorithms2019Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
  • 40.
    Isenstierna, Tobias
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Popovic, Stefan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Computer systems in airborne radar: Virtualization and load balancing of nodes2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Introduction. The hardware technology used in today's radar systems is evolving at an increasing rate. For existing radar software that relies on specific drivers or hardware, this quickly becomes a problem. When the required hardware is no longer produced, or is outdated, compatibility problems emerge between the new hardware and the existing software. This research explores whether virtualization technology can help solve this problem. Would it be possible to address the compatibility problem with hypervisor solutions, while also maintaining high performance?

    Objectives. The aim of this research is to explore virtualization technology, with a focus on hypervisors, to improve the way hardware and software cooperate within a radar system. The research investigates whether it is possible to solve compatibility problems between new hardware and existing software, while also analysing the performance of virtualized solutions compared to non-virtualized ones.

    Methods. The proposed method is an experiment in which the two hypervisors Xen and KVM are analysed. The hypervisors run on two different systems. A native environment with similarities to a radar system is built and then compared with the same system with hypervisor solutions applied. Research in the area of virtualization is conducted, with a focus on security, hypervisor features and compatibility.

    Results. The results present a proposed virtual environment setup with the hypervisors installed. To address the compatibility issue, an old operating system was used to show that the implemented virtualization works. Finally, performance results are presented for the native environment compared against a virtual environment.

    Conclusions. From the benchmark results, we can see that individual performance may vary, which is to be expected on different hardware. A virtual setup was built, including the Xen and KVM hypervisors, together with NAS communication. By running an old operating system as a virtual guest, compatibility between software and hardware was shown to exist with KVM as the virtual solution. From the results gathered, KVM seems like a good candidate for further investigation.

  • 41.
    Jerčić, Petar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Hagelbäck, Johan
    Linnéuniversitetet, SWE.
    Lindley, Craig
    Computational Modelling Group, Data61, CSIRO, AUS.
    An affective serious game for collaboration between humans and robots2019In: Entertainment Computing, ISSN 1875-9521, E-ISSN 1875-953X, Vol. 32, article id 100319Article in journal (Refereed)
    Abstract [en]

    Elicited physiological affect in humans collaborating with their robot partners was investigated to determine its influence on decision-making performance in serious games. A turn-taking version of the Tower of Hanoi game was used, where physiological arousal and valence underlying such human-robot proximate collaboration were investigated. A comparable decision performance in the serious game was found between human and non-humanoid robot arm collaborator conditions, while higher physiological affect was found in humans collaborating with such robot collaborators. It is suggested that serious games which are carefully designed to take into consideration the elicited physiological arousal might witness a better decision-making performance and more positive valence using non-humanoid robot partners instead of human ones. © 2019 The Authors
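For reference, the optimal move sequence of the Tower of Hanoi used in the turn-taking game can be generated with the standard textbook recursion (this is a generic construction, not code from the study); with n disks it always takes 2^n - 1 moves, which human and robot would alternate executing:

```python
def hanoi_moves(n, source="A", target="C", spare="B"):
    """Optimal move list for n disks: move n-1 disks onto the spare peg,
    move the largest disk to the target, then move the n-1 disks on top."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi_moves(n - 1, spare, target, source))
```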

  • 42.
    Josyula, Sai Prashanth
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Parallel algorithms for real-time railway rescheduling2019Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In railway traffic systems, it is essential to achieve a high punctuality to satisfy the goals of the involved stakeholders. Thus, whenever disturbances occur, it is important to effectively reschedule trains while considering the perspectives of various stakeholders. The train rescheduling problem is a complex task to solve, both from a practical and a computational perspective. From the latter perspective, a reason for the complexity is that the rescheduling solution(s) of interest may be dispersed across a large solution space. This space needs to be navigated fast while avoiding portions leading to undesirable solutions and exploring portions leading to potentially desirable solutions. The use of parallel computing enables such a fast navigation of the search tree. Though competitive algorithmic approaches for train rescheduling are a widespread topic of research, limited research has been conducted to explore the opportunities and challenges in parallelizing them.

    This thesis presents research studies on how trains can be effectively rescheduled while considering the perspectives of passengers along with that of other stakeholders. Parallel computing is employed, with the aim of advancing knowledge about parallel algorithms for solving the problem under consideration.

    The presented research contributes with parallel algorithms that reschedule a train timetable during disturbances and studies the incorporation of passenger perspectives during rescheduling. Results show that the use of parallel algorithms for train rescheduling improves the speed of solution space navigation and the quality of the obtained solution(s) within the computational time limit.

    This thesis consists of an introduction and overview of the work, followed by four research papers which present: (1) A literature review of studies that propose and apply computational support for train rescheduling with a passenger-oriented objective; (2) A parallel heuristic algorithm to solve the train rescheduling problem on a multi-core parallel architecture; (3) A conflict detection module for train rescheduling, which performs its computations on a graphics processing unit; and (4) A redesigned parallel algorithm that considers multiple objectives while rescheduling.
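The idea in paper (2) of evaluating many rescheduling candidates concurrently can be sketched as follows. The cost function, candidate encoding, and use of a thread pool are illustrative assumptions standing in for the thesis's multi-core search-tree navigation, not its actual algorithm:

```python
from concurrent.futures import ThreadPoolExecutor

def total_delay(timetable):
    # Toy cost: sum of per-train delays (minutes) in a candidate solution.
    return sum(timetable.values())

def best_candidate(candidates, n_workers=4):
    """Evaluate rescheduling candidates in parallel and keep the one with
    the lowest total delay. A real solver would instead explore a search
    tree, pruning branches that lead to undesirable solutions."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        costs = list(pool.map(total_delay, candidates))
    i = min(range(len(candidates)), key=costs.__getitem__)
    return candidates[i], costs[i]
```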

  • 43.
    Josyula, Sai Prashanth
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Törnquist Krasemann, Johanna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A parallel algorithm for multi-objective train reschedulingManuscript (preprint) (Other academic)
  • 44.
    Josyula, Sai Prashanth
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Törnquist Krasemann, Johanna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Exploring the Potential of GPU Computing in Train Rescheduling2019In: Proceedings of the 8th International Conference on Railway Operations Modelling and Analysis, Norrköping, 2019., 2019Conference paper (Refereed)
  • 45.
    Kabra, Amit
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Clustering of Driver Data based on Driving Patterns2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Data analysis methods are important to analyze the ever-growing enormous quantity of high-dimensional data. Cluster analysis separates or partitions the data into disjoint groups such that data in the same group are similar while data between groups are dissimilar. The focus of this thesis is to identify natural groups or clusters of drivers using data based on driving style. In finding such groups of drivers, combinations of dimensionality reduction and clustering algorithms are evaluated. The dimensionality reduction algorithms used in this thesis are Principal Component Analysis (PCA) and t-distributed stochastic neighbour embedding (t-SNE). The clustering algorithms, K-means clustering and hierarchical clustering, were selected after performing a literature review. In this thesis, the combinations of PCA with K-means, PCA with hierarchical clustering, t-SNE with K-means and t-SNE with hierarchical clustering are evaluated. The evaluation was done on Volvo Cars' drivers dataset, based on their driving styles. The dataset is first normalized and a Markov chain of driving styles is calculated. This Markov chain dataset has very high dimensionality, and hence dimensionality reduction algorithms are applied to reduce the dimensions. The reduced-dimension dataset is used as input to the selected clustering algorithms. The combinations of algorithms are evaluated using performance metrics such as the Silhouette Coefficient, the Calinski-Harabasz Index and the Davies-Bouldin Index. Based on the experiment and analysis, the combination of t-SNE and K-means is found to be the best of the compared combinations in terms of all performance metrics, and is chosen to cluster the drivers based on their driving styles.
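A numpy-only sketch of the PCA + K-means combination evaluated in the thesis; the farthest-point initialisation and the toy data layout are illustrative choices, not the thesis's exact implementation:

```python
import numpy as np

def pca(X, k):
    # Project onto the top-k principal components: the leading right
    # singular vectors of the mean-centred data matrix.
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Farthest-point initialisation: spread the initial centers apart.
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        dist = np.min([np.linalg.norm(X - c, axis=1) for c in centers],
                      axis=0)
        centers.append(X[dist.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

On well-separated driver-style vectors, the projection preserves the group structure and K-means recovers the groups.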

  • 46.
    Karlsson, Emelia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lidmark, Joel
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Frequency and encryption usage, investigation of the wireless landscape.: A study of access points in Karlskrona2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Wireless connectivity is simple and convenient for the user, which is the reason why it is predominantly used today for local networks at home. However, the potential drawbacks of this technology are unknown to many of its users. This study examines some of these issues in the context of what is used today.

    Objectives. This study intends to research what types of security features and frequency settings are being used today. It also aims to evaluate what this means for security and usability from the user's perspective.

    Methods. The approach of this study is to gather networks in different geographical areas. To do this, a Raspberry Pi with an external antenna is used. When the data collection is completed, the networks are broken down into categories.

    Results. The results show significant frequency overlap on the most commonly used channels. There is vastly more overlap in areas with apartment buildings compared to other residential areas. The results also show that most networks use secure encryption settings.

    Conclusions. Careful selection of channels is required to minimise interference, but the method for doing so is specific to each environment. Security-wise there are no big concerns, except when it comes to password selection.
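The channel-overlap effect discussed above can be checked arithmetically: 2.4 GHz channel centres sit 5 MHz apart, while a transmission mask is roughly 22 MHz wide, which is why only channels 1, 6 and 11 are mutually non-overlapping (the 22 MHz width is a common simplification):

```python
def channels_overlap(ch1, ch2, width_mhz=22):
    """True if two 2.4 GHz Wi-Fi channels' frequency masks overlap.
    Channel centres are spaced 5 MHz apart (channel 1 = 2412 MHz)."""
    centre = lambda ch: 2407 + 5 * ch
    return abs(centre(ch1) - centre(ch2)) < width_mhz
```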

  • 47.
    Klotins, Eriks
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A collaborative method for identification and prioritization of data sources in MDREManuscript (preprint) (Other academic)
    Abstract [en]

    Requirements engineering (RE) literature acknowledges the importance of stakeholder identification early in software engineering activities. However, the literature overlooks the challenge of identifying and selecting the right stakeholders, and the potential of using other, inanimate requirements sources for RE activities for market-driven products.

    Market-driven products are influenced by a large number of stakeholders. Consulting all stakeholders directly is impractical, and companies utilize indirect data sources, e.g. documents and representatives of larger groups of stakeholders. However, without a systematic approach, companies often use easy to access or hard to ignore data sources for RE activities. As a consequence, companies waste resources on collecting irrelevant data or develop the product based on the input from a few sources, thus missing market opportunities.

    We propose a collaborative and structured method to support analysts in the identification and selection of the most relevant data sources for market-driven product engineering. The method consists of four steps and aims to build consensus between different perspectives in an organization and facilitates the identification of most relevant data sources. We demonstrate the use of the method with two industrial case studies.

    Our results show that the method can support market-driven requirements engineering in two ways: (1) by providing systematic steps to identify and prioritize data sources for RE, and (2) by highlighting and resolving discrepancies between different perspectives in an organization.

  • 48.
    Kola, Ramya Sree
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Generation of synthetic plant images using deep learning architecture2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background:

    Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are the current state-of-the-art machine learning systems for data generation. The initial architecture proposal consists of two neural networks, a generator and a discriminator, which compete in a zero-sum game to generate data with realistic properties inseparable from those of the original datasets. GANs have interesting applications in various domains, such as image synthesis, 3D object generation in the gaming industry, fake music generation (Dong et al.), text-to-image synthesis and many more. Despite this wide range of application domains, GANs are most popular for image data synthesis. Various architectures have been developed for image synthesis, evolving from fuzzy images of digits to photorealistic images.

    Objectives:

    In this research work, we study the literature on different GAN architectures to understand the significant work done to improve them. The primary objective of this research is the synthesis of plant images using StyleGAN (Karras, Laine and Aila, 2018), a GAN variant based on style transfer. The research also focuses on identifying machine learning performance evaluation metrics that can be used to measure the StyleGAN model on the generated image datasets.

    Methods:

    A mixed-method approach is used in this research. We review the literature on GANs and elaborate in detail how each GAN network is designed and how it evolved from the base architecture. We then study the StyleGAN (Karras, Laine and Aila, 2018a) design details, as well as related work on evaluating GAN model performance and measuring the quality of generated image datasets. We conduct an experiment implementing the style-based GAN on a leaf dataset (Kumar et al., 2012) to generate leaf images similar to the ground truth, and describe in detail the various steps of the experiment, such as data collection, preprocessing, training and configuration. We also evaluate the performance of the StyleGAN training model on the leaf dataset.

    Results:

    We present the results of the literature review and the conducted experiment to address the research questions. We review various GAN architectures and their key contributions, as well as numerous qualitative and quantitative evaluation metrics for measuring the performance of a GAN architecture. We then present the synthetic data samples generated by the style-based GAN learning model at various training GPU hours, and the latest synthetic data sample after training for around 8 GPU days on the Leafsnap dataset (Kumar et al., 2012). For most of the tested samples, the results have decent enough quality to expand the dataset. We visualize the model performance with TensorBoard graphs and an overall computational graph for the learning model. We calculate the Fréchet Inception Distance (FID) score for our leaf StyleGAN, which is observed to be 26.4268 (the lower, the better).

    Conclusion:

    We conclude the research work with an overall review of the sections in the paper. The generated fake samples are very similar to the input ground truth and appear convincingly realistic to human visual judgement. However, the FID score measuring the performance of the leaf StyleGAN is large compared to that of StyleGAN's original celebrity HD face image dataset. We attempt to analyze the reasons for this large score.
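For intuition, the FID reported above is the Fréchet distance between two Gaussians fitted to Inception activations of real and generated images. The sketch below computes it for the simplified diagonal-covariance case; the full metric uses full covariance matrices and a matrix square root:

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """Fréchet distance between two Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2)).
    Identical distributions give 0; larger values mean the generated
    activation statistics drift further from the real ones."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return float(mean_term + cov_term)
```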

  • 49.
    Kondepati, Divya Naga Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Mallidi, Satish Kumar Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance Testing and Assessment of Various Network-Based Applications2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Performance testing is one of the crucial parts of any software development cycle. In today's world, there are countless network-based applications. Manual testing and automated testing are the two important ways to test any type of application. For manual testing, a mobile application known as BlekingeTrafiken is used; for automated testing, a web application known as Edmodo, with Selenium as the automation tool. Each application has many users, and performance may therefore decrease as the number of users increases. The performance of an application depends on response times, mean values, stability, speed, capacity and accuracy. It also depends on the device (memory consumption, battery, software variation), the server/API (fewer calls), and the network (jitter, packet loss, network speed). There are several tools for performance testing; by using them, we can get accurate performance results for each request.

    In this thesis, we performed manual testing of a mobile application by increasing the number of users under similar network conditions, automated testing of a web application under various test cases, and performance testing of an iPad application (PLANETJAKTEN), a real-time gaming application used to teach mathematics to children. Apache JMeter is the tool used for performance testing; the interaction between JMeter and the iPad is done through an HTTP proxy. When a user starts using the application, we can measure the performance of each request the user sends. Nagios is the tool used to monitor the various environments. The results show that, for manual testing, the time taken to connect to Wi-Fi is low compared to opening and using the application. For automated testing, the time taken to run each test case the first time is high compared to the remaining trials. For performance testing, the experimental results show that the error percentage (the percentage of failed requests) is high for logging into the application compared to using it.
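A sketch of the kind of aggregation a JMeter summary report performs over (response time, success) samples; the field names and the simple percentile rule are illustrative assumptions:

```python
def summarize(results):
    """Aggregate (elapsed_ms, ok) samples into the figures a load-test
    summary typically reports: mean response time, 90th percentile, and
    error percentage (the share of failed requests)."""
    times = sorted(t for t, _ in results)
    errors = sum(1 for _, ok in results if not ok)
    mean = sum(times) / len(times)
    # Nearest-rank style 90th percentile, clamped to the last sample.
    p90 = times[min(len(times) - 1, int(0.9 * len(times)))]
    return {"mean_ms": mean, "p90_ms": p90,
            "error_pct": 100.0 * errors / len(results)}
```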

  • 50.
    Korsbakke, Andreas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ringsell, Robin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Promestra Security compared with other random number generators (2019). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background. Being able to trust cryptographic algorithms is a crucial part of society today, because of all the information that is gathered by companies all over the world. With this thesis, we want to help both Promestra AB and potential future customers evaluate whether its random number generator can be trusted.

    Objectives. The main objective of the study is to evaluate the random number generator in Promestra Security with the help of the test suite made by the National Institute of Standards and Technology, and to compare it with other random number generators such as Mersenne Twister, Blum Blum Shub, and more.

    Methods. The selected method in this study was to gather a total of 100 million bits from each random number generator and run them through the National Institute of Standards and Technology test suite for 100 tests, to get a fair evaluation of the algorithms. The test suite provides a statistical summary, which was then analyzed.
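
    The first and simplest test in the NIST SP 800-22 suite is the frequency (monobit) test: bits are mapped to ±1, summed, and the normalized absolute sum is turned into a p-value via the complementary error function; the suite's conventional significance level is p ≥ 0.01 for a pass. A minimal sketch:

    ```python
    import math

    # Minimal sketch of the NIST SP 800-22 frequency (monobit) test:
    # map bits to +/-1, sum them, and compare the normalized sum against
    # what an unbiased source would produce. p >= 0.01 is the suite's
    # usual significance level for a pass.
    def monobit_p_value(bits):
        n = len(bits)
        s = sum(1 if b else -1 for b in bits)   # +1 for 1-bits, -1 for 0-bits
        s_obs = abs(s) / math.sqrt(n)
        return math.erfc(s_obs / math.sqrt(2))

    balanced = [0, 1] * 500                  # 1000 perfectly balanced bits
    print(monobit_p_value(balanced))         # a balanced sequence gives p = 1.0
    ```

    A strongly biased sequence (e.g. all ones) yields a p-value far below 0.01 and fails the test.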

    Results. The results show how many iterations out of 100 passed, as well as the distribution of the results. Some of the tested random number generators clearly struggle in many of the tests, while half of the tested generators passed all of them.

    Conclusions. Promestra Security and Blum Blum Shub come close to passing all the tests but, in the end, cannot be considered the preferable random number generators. The five that passed and appear to have no clear limitations are: Random.org, Micali-Schnorr, Linear Congruential, CryptGenRandom, and Mersenne Twister.
