  • 51.
    Korsbakke, Andreas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ringsell, Robin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Promestra Security compared with other random number generators. 2019. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background. Being able to trust cryptographic algorithms is a crucial part of society today, because of all the information that is gathered by companies all over the world. With this thesis, we want to help both Promestra AB and potential future customers evaluate whether its random number generator can be trusted.

    Objectives. The main objective of the study is to evaluate the random number generator in Promestra Security with the help of the test suite made by the National Institute of Standards and Technology. The comparison is made with other random number generators such as Mersenne Twister, Blum-Blum-Shub and more.

    Methods. The selected method in this study was to gather a total of 100 million bits of each random number generator and use these in the National Institute of Standards and Technology test suite for 100 tests to get a fair evaluation of the algorithms. The test suite provides a statistical summary which was then analyzed.

    Results. The results show how many iterations out of 100 passed, as well as the distribution of the results. The obtained results show that some of the tested random number generators clearly struggle in many of the tests. They also show that half of the tested generators passed all of the tests.

    Conclusions. Promestra Security and Blum-Blum-Shub come close to passing all the tests, but in the end they cannot be considered the preferable random number generators. The five that passed and seem to have no clear limitations are: Random.org, Micali-Schnorr, Linear-Congruential, CryptGenRandom, and Mersenne Twister.
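    The NIST suite applied in this thesis is a battery of statistical tests; its simplest member, the frequency (monobit) test, can be sketched in a few lines. This is an illustrative stand-in, not the thesis's actual test harness:

```python
import math
import random

def monobit_test(bits, alpha=0.01):
    """NIST SP 800-22 frequency (monobit) test: checks that the
    proportion of ones and zeros in the sequence is close to 1/2."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)        # map 0 -> -1, 1 -> +1
    s_obs = abs(s) / math.sqrt(n)
    p_value = math.erfc(s_obs / math.sqrt(2))
    return p_value, p_value >= alpha             # pass iff p >= alpha

# Python's random module is a Mersenne Twister, one of the
# generators compared in the thesis.
random.seed(1)
p, ok = monobit_test([random.getrandbits(1) for _ in range(100_000)])
print(p, ok)
```

    A heavily biased sequence such as `[1] * 1000` yields a p-value near zero and fails the test.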

  • 52.
    Krantz, Amandus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cluster-based Sample Selection for Document Image Binarization. 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The current state-of-the-art, in terms of performance, for solving document image binarization is training artificial neural networks on pre-labelled ground truth data. As such, it faces the same issues as other, more conventional, classification problems; requiring a large amount of training data. However, unlike those conventional classification problems, document image binarization involves having to either manually craft or estimate the binarized ground truth data, which can be error-prone and time-consuming. This is where sample selection, the act of selecting training samples based on some method or metric, might help. By reducing the size of the training dataset in such a way that the binarization performance is not impacted, the required time spent creating the ground truth is also reduced. This thesis proposes a cluster-based sample selection method, based on previous work, that uses image similarity metrics and the relative neighbourhood graph to reduce the underlying redundancy of the dataset. The method is implemented with different clustering methods and similarity metrics for comparison, with the best implementation being based on affinity propagation and the structural similarity index. This implementation manages to reduce the training dataset by 46% while maintaining a performance that is equal to that of the complete dataset. The performance of this method is shown to not be significantly different from randomly selecting the same number of samples. However, due to limitations in the random method, such as unpredictable performance and uncertainty in how many samples to select, the use of sample selection in document image binarization still shows great promise.
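    The relative neighbourhood graph used for redundancy reduction connects two samples only if no third sample is closer to both of them than they are to each other. A minimal sketch on 2-D points follows; the thesis applies the same idea to image-similarity distances, not coordinates:

```python
import math
from itertools import combinations

def relative_neighbourhood_graph(points):
    """Edge (i, j) exists iff no third point r satisfies
    max(d(i, r), d(j, r)) < d(i, j)."""
    edges = []
    for i, j in combinations(range(len(points)), 2):
        d_ij = math.dist(points[i], points[j])
        blocked = any(
            max(math.dist(points[i], points[r]),
                math.dist(points[j], points[r])) < d_ij
            for r in range(len(points)) if r not in (i, j))
        if not blocked:
            edges.append((i, j))
    return edges

# The middle point blocks the long edge (0, 2).
print(relative_neighbourhood_graph([(0, 0), (1, 0), (2, 0)]))  # [(0, 1), (1, 2)]
```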

  • 53.
    Lang, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Secure Automotive Ethernet: Balancing Security and Safety in Time Sensitive Systems. 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background. As a result of the digital era, vehicles are being digitalised at a rapid pace. Autonomous vehicles and powerful infotainment systems are just parts of what is evolving within the vehicles. These systems require more information to be transferred within the vehicle networks. As a solution for this, Ethernet was suggested. However, Ethernet is a 'best effort' protocol which cannot be considered reliable. To solve that issue, specific implementations were done to create Automotive Ethernet. However, the out-of-the-box vulnerabilities from Ethernet persist and need to be mitigated in a way that is suitable to the automotive domain.

    Objectives. This thesis investigates the vulnerabilities of Ethernet out-of-the-box and identifies which vulnerabilities pose the largest threat in regard to the safety of human lives and property. When such vulnerabilities are identified, possible mitigation methods using security measures are investigated. Once two security measures are selected, an experiment is conducted to see if those can manage the latency requirements.

    Methods. To achieve the goals of this thesis, literature studies were conducted to learn of any vulnerabilities and possible mitigations. Then, those results are used in an OMNeT++ experiment, making it possible to record latency in a simple automotive topology and then add the selected security measures to get a total latency. This latency must be less than 10 ms to be considered safe in cars.

    Results. In the simulation, the baseline communication is found to take 1.14957 ± 0.02053 ms. When adding a security measure latency, the total duration is found. For Hash-based Message Authentication Code (HMAC)-Secure Hash Algorithm (SHA)-512, the total duration is 1.192274 ms using the upper confidence interval. Elliptic Curve Digital Signature Algorithm (ECDSA)-Ed25519 has a total latency of 3.108424 ms using the upper confidence interval.

    Conclusions. According to the results, both HMAC-SHA-512 and ECDSA-Ed25519 are valid choices to implement as an integrity and authenticity security measure. However, these results are based on a simulation and should be verified using physical hardware to ensure that these measures are valid.
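    Of the two measures, HMAC-SHA-512 is the simpler to illustrate, since Python's standard library provides it directly. The key and payload below are made-up placeholders; the thesis measures latency and does not prescribe a frame format:

```python
import hmac
import hashlib

key = b"pre-shared-ecu-key"              # hypothetical pre-shared key
frame = b"speed=87;brake=0;steer=-3"     # hypothetical payload

# Sender: compute a 64-byte authentication tag for the frame.
tag = hmac.new(key, frame, hashlib.sha512).digest()
print(len(tag))  # 64

# Receiver: recompute the tag and compare in constant time.
valid = hmac.compare_digest(tag, hmac.new(key, frame, hashlib.sha512).digest())
print(valid)  # True
```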

  • 54.
    Li, Bing
    et al.
    Kunming Univ, CHN.
    Cheng, Wei
    Kunming Univ, CHN.
    Bie, Yiming
    Harbin Inst Technol, CHN.
    Sun, Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Capacity of Advance Right-Turn Motorized Vehicles at Signalized Intersections for Mixed Traffic Conditions. 2019. In: Mathematical Problems in Engineering (Print), ISSN 1024-123X, E-ISSN 1563-5147, article id 3854604. Article in journal (Refereed).
    Abstract [en]

    Right-turn motorized vehicles turn right using channelized islands, which are used to improve the capacity of intersections. For ease of description, these kinds of right-turn motorized vehicles are called advance right-turn motorized vehicles (ARTMVs) in this paper. The authors analyzed four aspects of traffic conflict involving ARTMVs with other forms of traffic flow. A capacity model of ARTMVs is presented here using shockwave theory and gap acceptance theory. The proposed capacity model was validated by comparison to the results of observations based on data collected at a single intersection with channelized islands in Kunming, the Highway Capacity Manual (HCM) model and the VISSIM simulation model. To facilitate engineering applications, the relationship describing the capacity of the ARTMVs with reference to the distance between the conflict zone and the stop line, and the relationship describing the capacity of the ARTMVs with reference to the effective red time of the nonmotorized vehicles moving in the same direction, were analyzed. The authors compared these results to the capacity of no advance right-turn motorized vehicles (NARTMVs). The results show that the capacity of the ARTMVs is more sensitive to changes in the arrival rate of nonmotorized vehicles when the arrival rate of the nonmotorized vehicles is between 500 veh/h and 2000 veh/h than when the arrival rate is some other value. In addition, the capacity of NARTMVs is greater than the capacity of ARTMVs when the nonmotorized vehicles have a higher arrival rate.
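    Gap acceptance theory, one of the two theories the capacity model builds on, has a classic closed form for the capacity of a minor stream crossing a conflicting flow with exponentially distributed headways. The paper's full model additionally uses shockwave theory and is not reproduced here; the parameter values below are purely illustrative:

```python
import math

def gap_acceptance_capacity(q_conflict_vph, t_c, t_f):
    """Capacity (veh/h) of a minor movement facing a conflicting flow
    of q_conflict_vph veh/h, with critical gap t_c and follow-up time
    t_f in seconds, assuming exponentially distributed headways."""
    q = q_conflict_vph / 3600.0          # conflicting flow in veh/s
    cap = q * math.exp(-q * t_c) / (1.0 - math.exp(-q * t_f))
    return cap * 3600.0                  # back to veh/h

# Illustrative values: 1000 veh/h conflicting flow,
# 4.5 s critical gap, 2.5 s follow-up time.
print(round(gap_acceptance_capacity(1000, 4.5, 2.5)))
```

    As expected, a heavier conflicting flow leaves fewer usable gaps and therefore a lower right-turn capacity.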

  • 55.
    Lundberg, Lars
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lennerstad, Håkan
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    García Martín, Eva
    Handling non-linear relations in support vector machines through hyperplane folding. 2019. In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2019, p. 137-141. Conference paper (Refereed).
    Abstract [en]

    We present a new method, called hyperplane folding, that increases the margin in Support Vector Machines (SVMs). Based on the location of the support vectors, the method splits the dataset into two parts, rotates one part of the dataset and then merges the two parts again. This procedure increases the margin as long as the margin is smaller than half of the shortest distance between any pair of data points from the two different classes. We provide an algorithm for the general case with n-dimensional data points. A small experiment with three folding iterations on 3-dimensional data points with non-linear relations shows that the margin does indeed increase and that the accuracy improves with a larger margin. The method can use any standard SVM implementation plus some basic manipulation of the data points, i.e., splitting, rotating and merging. Hyperplane folding also increases the interpretability of the data. © 2019 Association for Computing Machinery.
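    The basic data manipulation the method relies on, rotating one part of a split dataset about a pivot, is straightforward; a 2-D sketch is shown below. How the split, pivot and angle are chosen from the support vectors is the substance of the paper and is not reproduced here:

```python
import math

def rotate_about(points, pivot, angle):
    """Rigidly rotate 2-D points about a pivot by `angle` radians --
    the 'rotate' step of split-rotate-merge hyperplane folding."""
    c, s = math.cos(angle), math.sin(angle)
    px, py = pivot
    return [(px + c * (x - px) - s * (y - py),
             py + s * (x - px) + c * (y - py)) for x, y in points]

# Quarter turn about the origin maps (1, 0) to (0, 1), up to float error.
print(rotate_about([(1.0, 0.0)], (0.0, 0.0), math.pi / 2))
```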

  • 56.
    Luro, Francisco Lopez
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A comparative study of eye tracking and hand controller for aiming tasks in virtual reality. 2019. In: Eye Tracking Research and Applications Symposium (ETRA), Association for Computing Machinery, 2019. Conference paper (Refereed).
    Abstract [en]

    Aiming is key for virtual reality (VR) interaction, and it is often done using VR controllers. Recent eye-tracking integrations in commercial VR head-mounted displays (HMDs) call for further research on usability and performance aspects to better determine possibilities and limitations. This paper presents a user study exploring gaze aiming in VR compared to a traditional controller in an “aim and shoot” task. Different speeds of targets and trajectories were studied. Qualitative data was gathered using the system usability scale (SUS) and cognitive load (NASA TLX) questionnaires. Results show a lower perceived cognitive load using gaze aiming and on par usability scale. Gaze aiming produced on par task duration but lower accuracy on most conditions. Lastly, the trajectory of the target significantly affected the orientation of the HMD in relation to the target’s location. The results show potential using gaze aiming in VR and motivate further research. © 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.

  • 57.
    Maksimov, Yulian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    AI Competition: Textual Tree. 2019. Other (Refereed).
  • 58.
    Maksimov, Yulian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Consortium Project: Textual Tree. 2019. Other (Refereed).
  • 59.
    Maksimov, Yuliyan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. FHNW University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Artifact Compatibility for Enabling Collaboration in the Artificial Intelligence Ecosystem. 2018. In: Lecture Notes in Business Information Processing, Springer, 2018, Vol. 336, p. 56-71. Conference paper (Refereed).
    Abstract [en]

    Different types of software components and data have to be combined to solve an artificial intelligence challenge. An emerging marketplace for these components will allow for their exchange and distribution. To facilitate and boost the collaboration on the marketplace a solution for finding compatible artifacts is needed. We propose a concept to define compatibility on such a marketplace and suggest appropriate scenarios on how users can interact with it to support the different types of required compatibility. We also propose an initial architecture that derives from and implements the compatibility principles and makes the scenarios feasible. We matured our concept in focus group workshops and interviews with potential marketplace users from industry and academia. The results demonstrate the applicability of the concept in a real-world scenario.

  • 60.
    Manda, Kundan Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Sentiment Analysis of Twitter Data Using Machine Learning and Deep Learning Methods. 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background: Twitter, Facebook, WordPress, etc. act as the major sources of information exchange in today's world. The tweets on Twitter are mainly based on the public opinion on a product, event or topic and thus contains large volumes of unprocessed data. Synthesis and Analysis of this data is very important and difficult due to the size of the dataset. Sentiment analysis is chosen as the apt method to analyse this data as this method does not go through all the tweets but rather relates to the sentiments of these tweets in terms of positive, negative and neutral opinions. Sentiment Analysis is normally performed in 3 ways namely Machine learning-based approach, Sentiment lexicon-based approach, and Hybrid approach. The Machine learning based approach uses machine learning algorithms and deep learning algorithms for analysing the data, whereas the sentiment lexicon-based approach uses lexicons in analysing the data and they contain vocabulary of positive and negative words. The Hybrid approach uses a combination of both Machine learning and sentiment lexicon approach for classification.

    Objectives: The primary objectives of this research are: to identify the algorithms and metrics for evaluating the performance of machine learning classifiers, and to compare the metrics of the identified algorithms for different dataset sizes in order to determine the best-suited algorithm for sentiment analysis.

    Method: The method chosen to address the research questions is an experiment, through which the identified algorithms are evaluated with the selected metrics.

    Results: The identified machine learning algorithms are Naïve Bayes, Random Forest and XGBoost, and the deep learning algorithm is CNN-LSTM. The algorithms are evaluated and compared with respect to the metrics precision, accuracy, F1 score and recall. The CNN-LSTM model is best suited for sentiment analysis on Twitter data with respect to the selected size of the dataset.

    Conclusion: Through the analysis of the results, the aim of this research is achieved: identifying the best-suited algorithm for sentiment analysis on Twitter data with respect to the selected dataset. The CNN-LSTM model achieves the highest accuracy, 88%, among the selected algorithms.
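    The evaluation metrics named above have simple per-class definitions, sketched here on toy labels rather than the thesis's Twitter data:

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Per-class precision, recall and F1 from true/predicted labels."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = ["pos", "neg", "pos", "neg", "pos"]
y_pred = ["pos", "pos", "neg", "neg", "pos"]
# tp=2, fp=1, fn=1 for the "pos" class: all three metrics are 2/3.
print(precision_recall_f1(y_true, y_pred, "pos"))
```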

  • 61.
    Marakani, Sumeesha
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Employee Matching Using Machine Learning Methods. 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background: Expertise retrieval is an information retrieval technique that focuses on techniques to identify the most suitable ’expert’ for a task from a list of individuals.

    Objectives: This master thesis is a collaboration with Volvo Cars to attempt applying this concept and match employees based on information that was extracted from an internal tool of the company. In this tool, the employees describe themselves in free-flowing text. This text is extracted from the tool and analyzed using Natural Language Processing (NLP) techniques.

    Methods: Through the course of this project, various techniques are employed and experimented with to study, analyze and understand the unlabelled textual data using NLP techniques. We then try to match individuals based on information extracted with these techniques using unsupervised machine learning methods (K-means clustering).

    Results: The results obtained from applying the various NLP techniques are explained along with the algorithms that are implemented. Inferences deduced about the properties of the data and the methodologies are discussed.

    Conclusions: The results obtained from this project have shown that it is possible to extract patterns among people based on free-text data written about them. The future aim is to incorporate the semantic relationship between the words to be able to identify people who are similar and dissimilar based on the data they share about themselves.
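    The K-means step can be illustrated with a stdlib-only Lloyd's algorithm on toy 2-D points; the thesis applies it to high-dimensional vectors derived from the employees' free text, which works identically apart from the dimensionality:

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: assign each point to its nearest
    centroid, recompute centroids, repeat until stable."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[nearest].append(p)
        new = [tuple(sum(coord) / len(coord) for coord in zip(*cl))
               if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(sorted(kmeans(pts, 2)))  # one centroid per well-separated group
```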

  • 62.
    Midigudla, Dhananjay
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance Analysis of the Impact of Vertical Scaling on Application Containerized with Docker: Kubernetes on Amazon Web Services - EC2. 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 80 credits / 120 HE credits. Student thesis.
    Abstract [en]

    Containers are being used widely as a base technology to pack applications, and microservice architecture is gaining popularity for deploying large-scale applications, with containers running different aspects of the application. Due to the presence of dynamic load on the service, a need arises to scale the compute resources of the containerized applications up or down in order to maintain the performance of the application.

    Objectives. To evaluate the impact of vertical scaling on the performance of a containerized application deployed with Docker containers and Kubernetes. This includes identifying the performance metrics that are most affected and hence characterizing the eventual negative effects of vertical scaling.

    Method. A literature study on Kubernetes and Docker containers, followed by proposing a vertical scaling solution that can add or remove compute resources such as CPU and memory to the containerized application.

    Results and Conclusions. Latency and connect times were the analyzed performance metrics of the containerized application. From the obtained results, it was concluded that vertical scaling has no significant impact on the performance of a containerized application in terms of latency and connect times.

  • 63.
    Navarro, Diego
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Evaluating player performance and experience in virtual reality game interactions using the HTC Vive controller and Leap Motion sensor. 2019. In: VISIGRAPP 2019 - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, SciTePress, 2019, p. 103-110. Conference paper (Refereed).
    Abstract [en]

    An important aspect of virtual reality (VR) interfaces are novel natural user interactions (NUIs). The increased use of VR games requires the evaluation of novel interaction techniques that allow efficient manipulations of 3D elements using the hands of the player. Examples of VR devices that support these interactions include the HTC Vive controller and the Leap Motion sensor. This paper presents a quantitative and qualitative evaluation of player performance and experience in a controlled experiment with 20 volunteering participants. The experiment evaluated the HTC Vive controller and the Leap Motion sensor when manipulating 3D objects in two VR games. The first game was a Pentomino puzzle and the second game consisted of a ball-throwing task. Four interaction techniques (picking up, dropping, rotating, and throwing objects) were evaluated as part of the experiment. The number of user interactions with the Pentomino pieces, the number of ball throws, and game completion time were metrics used to analyze the player performance. A questionnaire was also used to evaluate the player experience regarding enjoyment, ease of use, sense of control and user preference. The overall results show that there was a significant decrease in player performance when using the Leap Motion sensor for the VR game tasks. Participants also reported that hand gestures with the Leap Motion sensor were not as reliable as the HTC Vive controller. However, the survey showed positive responses when using both technologies. The paper also offers ideas to keep exploring the capabilities of NUI techniques in the future. Copyright © 2019 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved

  • 64.
    Nilsson, Henrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Gaze-based JPEG compression with varying quality factors. 2019. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background: With the rise of streaming services such as cloud gaming, a fast internet connection is required for a good overall experience. The average internet connection is not suited to the requirements of cloud gaming, where a high quality and frame rate are important for the experience. A solution to this problem would be to display the part of the image the user is looking at in higher quality than the rest of the image.

    Objectives: The objective of this thesis is to create a gaze-based lossy image compression algorithm that reduces quality where the user is not looking. By using different radial functions to determine the quality decrease, the perceptual quality is compared to traditional JPEG compression. The storage difference when using a gaze-based lossy image compression is also compared to the JPEG algorithm.

    Methods: A gaze-based image compression algorithm, based on the JPEG algorithm, is developed with DirectX 12. The algorithm uses a Tobii eye tracker to determine where on the screen the user is gazing. When the gaze position changes, the algorithm is run again to compress the image. A user study is conducted to test the perceived quality of this algorithm compared to traditional lossy JPEG image compression. Two different radial functions are tested with various parameters to determine which one offers the best perceived quality. The algorithm and the radial functions are also tested on how much of a storage difference there is when using this algorithm compared to traditional JPEG compression.

    Results: With 11 participants, the results show that the gaze-based algorithm is perceptually the same on images that have few objects close together. Images with many objects spread throughout the image performed worse with the gaze-based algorithm and were picked less often compared to traditional JPEG compression. The radial functions that cover much of the screen were picked more often than the radial functions that cover less of it. The storage difference between the gaze-based algorithm and traditional JPEG compression was between 60% and 80% less, depending on the image.

    Conclusions: The thesis concludes that there are substantial storage savings to be made when using gaze-based image compression compared to traditional JPEG compression. Images with few objects close together are perceptually not distinguishable when using the gaze-based algorithm.
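    A radial function of the kind the thesis varies can be as simple as a linear falloff from the gaze point; the numbers below (quality 95 at the gaze point down to a floor of 20) are illustrative, not the thesis's parameters:

```python
import math

def jpeg_quality_at(px, py, gaze, radius, q_max=95, q_min=20):
    """Linear radial falloff: full JPEG quality factor at the gaze
    point, dropping to q_min at and beyond `radius` pixels away."""
    d = math.hypot(px - gaze[0], py - gaze[1])
    t = min(d / radius, 1.0)     # 0 at the gaze point, 1 beyond the radius
    return round(q_max - t * (q_max - q_min))

print(jpeg_quality_at(100, 100, (100, 100), 300))  # at the gaze point -> 95
print(jpeg_quality_at(700, 100, (100, 100), 300))  # beyond the radius -> 20
```

    Each JPEG block would then be encoded with the quality factor returned for its centre pixel.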

  • 65.
    Niyizamwiyitira, Christine
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A utilization-based schedulability test of real-time systems running on a multiprocessor virtual machine. 2019. In: Computer Journal, ISSN 0010-4620, E-ISSN 1460-2067, Vol. 62, no 6, p. 884-904, article id bxy005. Article in journal (Refereed).
    Abstract [en]

    We consider a real-time application that executes in a VM with multiple virtual cores. Tasks are scheduled globally using fixed-priority scheduling. In order to avoid Dhall's effect, we classify tasks into two priority classes: heavy and light. Heavy tasks have higher priority than light tasks. For light tasks we use rate monotonic priority assignment. We propose a utilization-based schedulability test. If the task set is schedulable, we provide an assignment of priorities to tasks. The input to the test is the task set, the number of cores in the VM, and the period, deadline and blocking time for the VM. We evaluate how jitter, when scheduling VMs on the hypervisor level, affects the schedulability of the real-time tasks running in the VM. The schedulability of the real-time tasks in the VM decreases when the hypervisor jitter increases, but on the other hand the schedulability on the hypervisor level increases if we allow more jitter, i.e. there is a trade-off. Our results make it possible to evaluate this trade-off and take informed decisions when selecting scheduling parameters on the hypervisor level. Simulations show that the priority assignment used by our algorithm schedules more task sets than using rate monotonic priority assignment. © 2019 The British Computer Society. All rights reserved.
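    The paper's own test is not reproduced here, but the classic single-core Liu and Layland bound that utilization-based tests of this kind generalize, U <= n(2^(1/n) - 1) for rate-monotonic priorities, is easy to sketch with hypothetical task parameters:

```python
def rm_utilization_test(tasks):
    """Sufficient (not necessary) schedulability test for rate-monotonic
    scheduling on one core. `tasks` is a list of (wcet, period) pairs."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

# Three hypothetical tasks: U = 0.25 + 0.20 + 0.20 = 0.65, bound ~ 0.78.
u, bound, schedulable = rm_utilization_test([(1, 4), (1, 5), (2, 10)])
print(u, bound, schedulable)
```

    A task set whose utilization exceeds the bound is not necessarily unschedulable; the test is only sufficient, which is also why sharper tests such as the one in this paper are worth developing.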

  • 66.
    Nordahl, Christian
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Netz Persson, Marie
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Monitoring Household Electricity Consumption Behaviour for Mining Changes. 2019. Conference paper (Refereed).
    Abstract [en]

    In this paper, we present ongoing work on using a household electricity consumption behavior model for recognizing changes in sleep patterns. The work is inspired by recent studies in neuroscience revealing an association between dementia and sleep disorders and, more particularly, supporting the hypothesis that insomnia may be a predictor for dementia in older adults. Our approach initially creates a clustering model of normal electricity consumption behavior of the household by using historical data. Then we build a new clustering model on a new set of electricity consumption data collected over a predefined time period and compare the existing model with the newly built electricity consumption behavior model. If a discrepancy between the two clustering models is discovered, a further analysis of the current electricity consumption behavior is conducted in order to investigate whether this discrepancy is associated with alterations in the resident's sleep patterns. The approach is studied and initially evaluated on electricity consumption data collected from a single randomly selected anonymous household. The obtained results show that our approach is robust for mining changes in the residents' daily routines by monitoring and analyzing their electricity consumption behavior model.

  • 67.
    Nordahl, Christian
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Netz Persson, Marie
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Profiling of household residents' electricity consumption behavior using clustering analysis. 2019. In: Lect. Notes Comput. Sci., Springer Verlag, 2019, p. 779-786. Conference paper (Refereed).
    Abstract [en]

    In this study we apply clustering techniques for analyzing and understanding households’ electricity consumption data. The knowledge extracted by this analysis is used to create a model of normal electricity consumption behavior for each particular household. Initially, the household’s electricity consumption data are partitioned into a number of clusters with similar daily electricity consumption profiles. The centroids of the generated clusters can be considered as representative signatures of a household’s electricity consumption behavior. The proposed approach is evaluated by conducting a number of experiments on electricity consumption data of ten selected households. The obtained results show that the proposed approach is suitable for data organizing and understanding, and can be applied for modeling electricity consumption behavior on a household level. © Springer Nature Switzerland AG 2019.

  • 68.
    Nässén, Olle
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Leiborn, Edvard
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Real-time Terrain Deformation with Isosurface Algorithms, 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. Being able to modify virtual environments can create immersive experiences for video-game players. Storing data as volumetric scalar fields allows for highly modifiable 3D environments that can be converted into GPU-friendly triangles with isosurface algorithms. Using scalar fields and isosurface algorithms can be more computationally expensive and require more data than the more commonly used polygonal models.

    Objectives. The aim of this thesis is to explore solutions to modifying real-time 3D environments with isosurface algorithms. This will be done in two parts. First in terms of observing how modern games deal with storing scalar fields, researching which isosurface algorithms are being used and how they are being used in games. The second part is to create an application and limit the data storage required while still running at a real-time speed.

    Methods. There are two methods to achieve the aim. The first is to research which data structures and isosurface algorithms are used in modern games and how they are utilized. The second is implementation. The implementation will use the GPU through compute shaders and marching cubes as the isosurface algorithm. It will utilize Christopher Dyken’s Histogram Pyramids for stream compaction. Two different versions will be implemented that differ in which data types are used for storage: the first uses the data type char and the second int. Between these two versions, the runtime speed will be measured and compared on two different hardware configurations.

    Results. Finding good data on which algorithms games use is difficult. Modern games use scalar fields in many different ways: some allow almost complete modification of terrain, others only use them for a 3D environment. For data storage, octrees and chunks are two common ways to store the fields. Dual Contouring appears to be the primary isosurface algorithm in use, based on the researched games. The implementation was very fast and usable in real-time environments for destruction of terrain on a large scale. The less storage-intensive variant of the implementation (char) gave faster results on modern hardware, but the opposite (int) was true on older hardware.

    Conclusions. Modifying scalar field terrain is done at a very large scale in modern games. The choice of using Dual Contouring or Marching Cubes depends on the use-case. For areas where sharp features can be important Dual Contouring is the preferred choice. Likely for these reasons Dual Contouring was found to be a popular choice in the studied games. For other areas, like many types of terrain, Marching Cubes is very fast, as can be seen in the implementation. By using the char version of the implementation, interacting with the environment in real-time is possible at high frame-rates.

  • 69.
    Olsson, Fredrik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Nyqvist, Magnus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Automated generation of waypoints: for pathfinding in a static environment, 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Video game characters must almost always be able to travel from point A to point B, and this task can be solved in various ways. There exist grid maps, waypoints, navigation meshes and hierarchical techniques to solve this problem. On randomly generated terrain we make use of automatically generated waypoints to solve pathfinding queries. The waypoints are connected by edges to create a waypoint graph, and the graph can be used in real-time pathfinding for a single agent in a static environment. This is done by finding the vertices of the blocked triangles from the terrain and placing a waypoint on each. We make use of the GPU to create the waypoint graph. The waypoints are connected by utilizing a serialized GPU quadtree to find the relevant blocked geometry for a line-triangle intersection test. If a line between two waypoints does not intersect any blocked geometry, the connection is valid and stored. We found that it is possible to generate a waypoint graph during the startup of the game with acceptable time results, to make use of such a graph for real-time pathfinding, and that players perceive the paths generated by the algorithm as realistic and responsive. Our conclusion is that our solution is well suited for a deterministic environment with agents of the same size.
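    The connection step described above can be sketched in a reduced form. The thesis does line-triangle tests on the GPU; the 2D CPU sketch below (all names and the toy scene are illustrative assumptions) keeps an edge between two waypoints only when the segment between them crosses no blocked segment:

    ```python
    # Illustrative 2D visibility-graph sketch of the waypoint-connection idea.
    from itertools import combinations

    def ccw(a, b, c):
        # Signed area orientation predicate.
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def segments_cross(p1, p2, q1, q2):
        # Proper (strict) intersection via orientation tests.
        d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
        d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
        return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

    def build_graph(waypoints, blocked_segments):
        edges = []
        for a, b in combinations(range(len(waypoints)), 2):
            if not any(segments_cross(waypoints[a], waypoints[b], s, t)
                       for s, t in blocked_segments):
                edges.append((a, b))   # unobstructed: store the connection
        return edges

    # A single wall between waypoints 0 and 1; waypoint 2 can see both.
    wp = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
    wall = [((2.0, -1.0), (2.0, 1.0))]
    print(build_graph(wp, wall))   # edge 0-1 is blocked by the wall
    ```

    In the thesis this all-pairs test is the parallel workload: each candidate edge can be checked against the (quadtree-filtered) blocked geometry independently on the GPU.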

  • 70.
    Olsson, Victor
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Eklund, Viktor
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    CPU Performance Evaluation for 2D Voronoi Tessellation, 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Voronoi tessellation can be used within a couple of different fields, including healthcare, construction and urban planning. Since Voronoi tessellations are used in multiple fields, it is important to know the strengths and weaknesses of the algorithms used to generate them, in terms of their efficiency.

    The objectives of this thesis are to compare two CPU algorithm implementations for Voronoi tessellation with regard to execution time and to see which of the two is the most efficient. The algorithms compared are the Bowyer-Watson algorithm and Fortune's algorithm.

    The Fortune's algorithm implementation used in the research is based upon a pre-existing one, while the Bowyer-Watson implementation was made specifically for this research. Their differences in efficiency were determined by measuring and comparing their execution times. This was done in an iterative manner where, for each iteration, the amount of data to be computed was continuously increased.

    The results show that Fortune's algorithm is more efficient on the CPU when no acceleration techniques are used for either algorithm. It took 70 milliseconds for the Bowyer-Watson method to compute 3000 input points, while Fortune's method took 12 milliseconds under the same conditions.

    As a conclusion, Fortune's algorithm was more efficient because the Bowyer-Watson algorithm performs unnecessary calculations; these include checking all existing triangles for every new point added. A suggestion for improving the speed of this algorithm would be to use a nearest-neighbour search technique when searching through triangles.
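    The per-point cost the conclusion refers to comes from Bowyer-Watson's "bad triangle" search: every insertion tests each existing triangle's circumcircle. A sketch of that core predicate (illustrative, not taken from either implementation in the thesis) is the standard in-circle determinant:

    ```python
    # In-circumcircle predicate used by Bowyer-Watson to find "bad" triangles.
    def in_circumcircle(tri, p):
        """True if p lies inside the circumcircle of triangle tri (CCW order)."""
        (ax, ay), (bx, by), (cx, cy) = tri
        px, py = p
        # 3x3 determinant of the standard in-circle matrix, translated so p
        # sits at the origin.
        m = [[ax - px, ay - py, (ax - px) ** 2 + (ay - py) ** 2],
             [bx - px, by - py, (bx - px) ** 2 + (by - py) ** 2],
             [cx - px, cy - py, (cx - px) ** 2 + (cy - py) ** 2]]
        det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
        return det > 0   # positive for a counter-clockwise triangle

    tri = ((0.0, 0.0), (2.0, 0.0), (1.0, 2.0))   # counter-clockwise
    print(in_circumcircle(tri, (1.0, 0.5)))      # interior point
    print(in_circumcircle(tri, (5.0, 5.0)))      # far-away point
    ```

    Running this test against every triangle for every inserted point is what makes naive Bowyer-Watson quadratic; a nearest-neighbour (or triangle-walk) search limits the candidates, which is exactly the improvement the conclusion suggests.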

  • 71.
    Paladi, Nicolae
    et al.
    Lund University and RISE.
    Svenningsson, Jakob
    RISE.
    Medina, Jorge
    New Jersey Institute of Technology.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Protecting OpenFlow Flow Tables with Intel SGX, 2019. In: Proceedings of the ACM SIGCOMM 2019 Conference Posters and Demos, Beijing: ACM Publications, 2019, p. 146-147. Conference paper (Refereed)
  • 72.
    Patta, Siva Venkata Prasad
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Intelligent Decision Support Systems for Compliance Options: A Systematic Literature Review and Simulation, 2019. Independent thesis Advanced level (degree of Master (Two Years)), 80 credits / 120 HE credits. Student thesis
    Abstract [en]

    The project revolves around logistics and its adaptation to new rules. The objective of this project is to minimize data tampering to the lowest level possible. To achieve the set goals, a decision support system and simulation have been used. To gain clear insight into how they can be implemented, a systematic literature review (including a case study) was conducted, followed by interviews with personnel at Kakinada port to understand the real-time complications in the field. Then, a simulated experiment using real-time data from Kakinada port was conducted to achieve the set goals and improve the level of transparency on all sides, i.e., shipper, port and terminal.

  • 73.
    Peddireddy, Vidyadhar reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Enhancement of Networking Capabilities in P2P OpenStack, 2019. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In recent times, there has been a trend towards setting up smaller clouds at the edge of the network and interconnecting them across multiple sites. In these scenarios, the software used for managing the resources should be flexible enough to scale. Considering OpenStack, the most widely used cloud software, it has been observed that the compute service shows performance degradation when a deployment reaches a few hundred nodes. To address the scalability issue in OpenStack, Ericsson has developed a new architecture that supports massive scalability of OpenStack clouds. However, the challenges with multi-cloud networking in P2P OpenStack remained unsolved. This thesis, as an extension of Ericsson's P2P OpenStack project, investigates various multi-cloud networking techniques and proposes two decentralized designs for cross-Neutron networking in P2P OpenStack. Design 1 is based on the OpenStack Tricircle project and design 2 is based on VPNaaS. This thesis implements the VPNaaS design to support the automatic interconnection of virtual machines that belong to the same user but are deployed in different OpenStack clouds. We evaluate the control plane operation under two scenarios, namely a single-user case and a multiple-user case. In both scenarios, request-response time is chosen as the evaluation parameter. Results show that request-response time increases as the number of users in the system increases.

  • 74.
    Peng, Cong
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Good Record Keeping for Conducting Research Ethically Correct. Manuscript (preprint) (Other academic)
  • 75.
    Peng, Cong
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    What Can Teachers Do to Make the Group Work Learning Effective: a Literature Review. Manuscript (preprint) (Other academic)
    Abstract [en]

    Group work-based learning is encouraged in higher education on account of both pedagogical benefits and industrial employers' requirements. However, although plenty of studies have been performed, there are still various factors that affect students' group work-based learning in practice. It is important for teachers to understand which factors can be influenced and what can be done to influence them. This paper performs a literature review to identify the factors that have been investigated and reported in journal articles. Fifteen journal articles were found relevant and fifteen factors were identified, which could be influenced by instructors directly or indirectly. However, more evidence is needed to support the conclusions of some studies, since they were performed in only a single course. Therefore, more studies are required on this topic to investigate the factors in different subject areas.

  • 76.
    Peng, Cong
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Meaningful Integration of Data from Heterogeneous Health Services and Home Environment Based on Ontology, 2019. In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no 8, article id 1747. Article in journal (Refereed)
    Abstract [en]

    The development of electronic health records, wearable devices, health applications and Internet of Things (IoT)-empowered smart homes is promoting various applications. It also makes health self-management much more feasible, which can partially mitigate one of the challenges that the current healthcare system is facing. Effective and convenient self-management of health requires the collaborative use of health data and home environment data from different services, devices, and even open data on the Web. Although health data interoperability standards including HL7 Fast Healthcare Interoperability Resources (FHIR) and IoT ontology including Semantic Sensor Network (SSN) have been developed and promoted, it is impossible for all the different categories of services to adopt the same standard in the near future. This study presents a method that applies Semantic Web technologies to integrate the health data and home environment data from heterogeneously built services and devices. We propose a Web Ontology Language (OWL)-based integration ontology that models health data from HL7 FHIR standard implemented services, normal Web services and Web of Things (WoT) services and Linked Data together with home environment data from formal ontology-described WoT services. It works on the resource integration layer of the layered integration architecture. An example use case with a prototype implementation shows that the proposed method successfully integrates the health data and home environment data into a resource graph. The integrated data are annotated with semantics and ontological links, which make them machine-understandable and cross-system reusable.

  • 77.
    Pentikäinen, Filip
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Sahlbom, Albin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Combining Influence Maps and Potential Fields for AI Pathfinding, 2019. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis explores the combination of influence maps and potential fields in two novel pathfinding algorithms, IM+PF and IM/PF, that allows AI agents to intelligently navigate an environment. The novel algorithms are compared to two established pathfinding algorithms, A* and A*+PF, in the real-time strategy (RTS) game StarCraft 2.

    The main focus of the thesis is to evaluate the pathfinding capabilities and real-time performance of the novel algorithms in comparison to the established pathfinding algorithms. Based on the results of the evaluation, general use cases of the novel algorithms are presented, as well as an assessment if the novel algorithms can be used in modern games.

    The novel algorithms’ pathfinding capabilities, as well as performance scalability, are compared to established pathfinding algorithms to evaluate the viability of the novel solutions. Several experiments are created, using StarCraft 2’s base game as a benchmarking tool, where various aspects of the algorithms are tested. The creation of influence maps and potential fields in real time is highly parallelizable, and is therefore done in a GPGPU solution, to accurately assess all algorithms’ real-time performance in a game environment.

    The experiments yield mixed results, showing better pathfinding and scalability performance by the novel algorithms in certain situations. Since the algorithms utilizing potential fields enable agents to inherently avoid and engage units in the environment, they have an advantage in experiments where such qualities are assessed. Similarly, influence maps enable agents to traverse the map more efficiently than simple A*, giving agents inherent advantages.

    In certain use cases, where multiple agents require pathfinding to the same destination, creating a single influence map is more beneficial than generating separate A* paths for each agent. The main benefits of generating the influence map, compared to A*-based solutions, are the lower total compute time, more precise pathfinding and the possibility of pre-calculating the map.
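    The potential-field half of the approach can be sketched on a toy grid: the goal attracts, cells next to obstacles repel, and an agent simply steps to the best-scoring neighbour. The thesis computes such fields on the GPU and combines them with influence maps; the grid, charges and penalty below are illustrative assumptions only.

    ```python
    # Toy CPU sketch of greedy navigation over a potential field.
    W, H = 8, 8
    goal = (7, 7)
    obstacles = {(3, y) for y in range(1, 7)}   # vertical wall with open ends

    def potential(cell):
        if cell in obstacles:
            return float("-inf")                # blocked cells are never chosen
        x, y = cell
        attract = -(abs(x - goal[0]) + abs(y - goal[1]))
        near_wall = min(abs(x - ox) + abs(y - oy) for ox, oy in obstacles) == 1
        return attract - (1 if near_wall else 0)  # mild repulsion near the wall

    def best_neighbour(cell):
        x, y = cell
        neigh = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0) and 0 <= x + dx < W and 0 <= y + dy < H]
        return max(neigh, key=potential)

    path = [(0, 0)]
    while path[-1] != goal and len(path) < 60:
        path.append(best_neighbour(path[-1]))
    print(path)
    ```

    Because every cell's potential is independent of every other cell's, the field computation maps naturally onto a GPGPU kernel, which is the property the thesis exploits for real-time updates.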

  • 78.
    Pogén, Tobias
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Asynchronous Particle Calculations on Secondary GPU for Real Time Applications, 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
  • 79.
    Qian, Wu
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Segmentation-based Deep Learning Fundus Image Analysis, 2019. Conference paper (Refereed)
  • 80.
    Roth, Robin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundblad, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    An Evaluation of Machine Learning Approaches for Hierarchical Malware Classification, 2019. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    With an ever-growing threat of new malware that keeps increasing in both number and complexity, the need for improvement in automatic detection and classification of malware is increasing. The signature-based approaches used by several anti-virus companies struggle with the increasing amount of polymorphic malware. Polymorphic malware changes minor aspects of its code to remain undetected. Malware classification using machine learning has been used to try to solve this issue in previous research. In the proposed work, different hierarchical machine learning approaches are implemented to conduct three experiments. The methods utilise a hierarchical structure in various ways to obtain better classification performance. A selection of hierarchical levels and machine learning models is used in the experiments to evaluate how the results are affected.

    A data set is created, containing over 90,000 different labelled malware samples. The proposed work also includes the creation of a labelling method that can be helpful for researchers in malware classification who need labels for a created data set. The feature vector used contains 500 n-gram features and 3521 Import Address Table features. In the experiments, the thesis includes the testing of four machine learning models and three different numbers of hierarchical levels. Stratified 5-fold cross-validation is used to reduce bias and variance in the results.

    The results show that the classification approach achieves the highest hF-score, 0.858228, using Random Forest (RF) as the machine learning model with four hierarchical levels. To be able to compare the proposed work with other related work, pure-flat classification accuracy was also generated. The highest generated accuracy score was 0.8512816, which was not the highest compared to other related work.
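    The n-gram half of the feature vector mentioned above can be sketched simply. The function name, n-gram size and vocabulary size here are illustrative assumptions; the thesis uses 500 n-gram features over real binaries.

    ```python
    # Sketch: fixed-size byte n-gram count features from a binary blob.
    from collections import Counter

    def ngram_features(data: bytes, n: int = 2, top: int = 3):
        # Count every overlapping n-byte window.
        grams = Counter(data[i:i + n] for i in range(len(data) - n + 1))
        # Keep the most frequent n-grams as a (tiny) feature vocabulary.
        vocab = [g for g, _ in grams.most_common(top)]
        return vocab, [grams[g] for g in vocab]

    # Toy input: a repeated PE-style "MZ" header fragment.
    sample = b"\x4d\x5a\x90\x00\x4d\x5a\x90\x00\x4d\x5a"
    vocab, vec = ngram_features(sample, n=2, top=3)
    print(vocab, vec)
    ```

    In practice the vocabulary is fixed across the whole data set (so every sample maps to the same 500-dimensional vector) and concatenated with the Import Address Table features before training.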

  • 81.
    Sandnes, Carl
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Gehlin Björnberg, Axel
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cross-platform performance of integrated, internal and external GPUs, 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    As mobile computers such as laptops and cellphones are becoming more and more powerful, the options for those who traditionally required a more powerful desktop PC, such as video editors or gamers, seem to have grown slightly. One of these new options is the external Graphics Processing Unit (eGPU), where a laptop is used along with an external GPU connected via Intel’s Thunderbolt 3. This is, however, a rather untested method. This paper discusses the performance of eGPUs in a variety of operating systems (OSs). For this research, performance benchmarking was used to investigate the performance of GPU-intensive tasks in various operating systems. It was possible to determine that performance across operating systems does indeed differ greatly in some use cases, such as games, while other use cases, such as computational and synthetic tests, perform very similarly independently of which OS is used. It seems that the main limiting factor is the GPU itself. It also appears that the interface with which the GPU is connected to a computer does impact performance, in a very similar way across different OSs. Generally, games seem to lose more performance than synthetic and computational tasks when using an external GPU rather than an internal one. It was also discovered that there are too many variables for any real conclusions to be drawn from the gathered results, as the results were sometimes very inconclusive and conflicting. So while the outcomes can be generalized, more research is needed before any definitive conclusions can be made.

  • 82.
    Sharma, Suraj
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance comparison of Java and C++ when sorting integers and writing/reading files, 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    This study is conducted to show the strengths and weaknesses of C++ and Java in three areas that are used often in programming: loading, sorting and saving data. Performance and scalability are large factors in software development, and choosing the right programming language is often a long process. It is important to conduct these types of direct comparison studies to properly identify the strengths and weaknesses of programming languages. Two applications were created, one using C++ and one using Java. Apart from a few syntax and necessary differences, both are as close to identical as possible. Each application loads three files containing 1000, 10000 and 100000 randomly ordered integers. These files are pre-created and always contain the same values. They are randomly generated by another small application before testing. The test runs three times, once for each file. When the data is loaded, it is sorted using quicksort. The data is then reset using the dataset file and sorted again using insertion sort. The sorted data is then saved to a file. Each test runs 50 times in a large loop and the times for loading, sorting and saving the data are recorded. In total, 300 tests are run between the C++ and the Java application. The results show that Java has a faster total time and is also faster when loading two out of three datasets. C++ was generally faster when sorting the datasets using both algorithms and when saving the data to files.
    In general, Java was faster in this study, but when processing the data and under heavy load, C++ performed better. The main difference was in loading the files. The way Java loads data from a file is very different from C++: even though both applications read the files character by character, Java’s “Scanner” library converts data before it parses it. With some optimization, for example by reading the file line by line and then parsing the data, C++ could be comparable or faster, but for the sake of this study, the chosen input methods were seemingly the fairest.
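    The measurement loop described above (load, quicksort, reset, insertion-sort, time) can be sketched language-neutrally. Python stands in for the Java and C++ programs here, so absolute times are not comparable with the thesis results; the structure is the point.

    ```python
    # Sketch of the study's sort-and-time methodology.
    import random
    import time

    def quicksort(a):
        # Simple (not in-place) quicksort, as in the study's first sorting pass.
        if len(a) <= 1:
            return list(a)
        pivot = a[len(a) // 2]
        return (quicksort([x for x in a if x < pivot])
                + [x for x in a if x == pivot]
                + quicksort([x for x in a if x > pivot]))

    def insertion_sort(a):
        a = list(a)   # work on a fresh copy, mirroring the study's data reset
        for i in range(1, len(a)):
            key, j = a[i], i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    # Deterministic "pre-created file" of randomly ordered integers.
    rng = random.Random(0)
    data = [rng.randint(0, 1_000_000) for _ in range(1000)]

    for name, fn in (("quicksort", quicksort), ("insertion sort", insertion_sort)):
        t0 = time.perf_counter()
        result = fn(data)
        print(f"{name}: {(time.perf_counter() - t0) * 1000:.2f} ms")
    ```

    As in the thesis, each timed pass starts from the same unsorted data, so the two algorithms are compared on identical input rather than one sorting the other's output.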

  • 83.
    Sobeh, Abedallah
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Exploration of using Blockchain technology for forensically acceptable audit trails with acceptable performance impacts, 2019. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this work, we will test the possibility of using Blockchain to preserve data such as logs. Data inside the Blockchain is preserved to be used as digital evidence. The study will examine if Blockchain technology will satisfy the requirements for digital evidence in a Swedish court. The study will simulate different test scenarios. Each scenario will be tested on three different hardware configurations. The test has two main categories, stream test and batch test. In the stream test, we test the performance impact on different systems in case each log is sent in a separate block, while the batch test has two sub-categories, batch with data and batch without data. In this test, we simulate sending 80 GB of data each day. In total we send 80 GB of data, but the difference here is that we change the time between each block and adjust the size of the block. In our tests, we focused on three metrics: CPU load, network bandwidth usage and storage consumption for each scenario. After the tests, we collected the data and compared the results of each hardware configuration within the same scenario. It was concluded that Blockchain does not scale up in stream mode and is limited to ten blocks/s regardless of hardware configuration. On the other hand, Blockchain can manage 80 GB of data each day without stressing system resources.
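    The forensic property the thesis relies on, that batched log entries become tamper-evident once chained, can be shown with a minimal hash chain. This generic sketch is not the specific Blockchain software benchmarked in the thesis; every name in it is illustrative.

    ```python
    # Minimal hash-chained log blocks: each block commits to the previous
    # block's hash, so altering an old entry breaks every later link.
    import hashlib
    import json

    def make_block(prev_hash, entries):
        body = json.dumps({"prev": prev_hash, "entries": entries}, sort_keys=True)
        return {"prev": prev_hash, "entries": entries,
                "hash": hashlib.sha256(body.encode()).hexdigest()}

    def verify(chain):
        prev = "0" * 64
        for block in chain:
            body = json.dumps({"prev": block["prev"], "entries": block["entries"]},
                              sort_keys=True)
            if (block["prev"] != prev
                    or hashlib.sha256(body.encode()).hexdigest() != block["hash"]):
                return False
            prev = block["hash"]
        return True

    chain, prev = [], "0" * 64
    for batch in (["login ok"], ["file deleted", "logout"]):
        chain.append(make_block(prev, batch))   # one block per log batch
        prev = chain[-1]["hash"]

    print(verify(chain))              # intact chain
    chain[0]["entries"][0] = "x"      # tamper with an old log entry
    print(verify(chain))              # the chain no longer verifies
    ```

    Batching, as in the thesis's batch tests, only changes how many entries share one block; the tamper-evidence argument is the same either way.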

  • 84.
    Spångberg, Felicia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Schramm, Eva
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Measuring player preference using muscle simulation, 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background: Simulation of physics is something several modern video games use. These simulations are often used to create more believable and realistic environments. Physics-based animations in the form of muscle simulations is one such technique.

    Objectives: The aim of this thesis is to investigate three different stages of full-body muscle deformation and observe which of these are preferred: one using no deformation (the control condition) and two using different degrees of deformation (the treatments). The study is conducted by creating animations with three different degrees of muscle simulation. These animations are then rendered in Maya as well as put into a small fighting scenario implemented in Unreal Engine 4. A user experiment will be conducted where a number of participants will be asked to choose between different scenarios using two-alternative forced choice. After the user study is completed, the data will be analyzed and used to form a conclusion.

    Methods: Implementations needed to create the stimulus were first done in Maya, where the meshes, muscles and animations were created. Renders were done in Maya of all animations, and a scene was also implemented in Unreal Engine 4 simulating a small fighting game using the assets created in Maya. To evaluate player preference, a user experiment was conducted with 13 participants, where each participant was asked to watch 27 scenarios containing two side-by-side comparisons with different degrees of muscle deformation. The user experiment stimulus was created using PsychoPy, which also collected the data on user preference. The scenarios were presented in an arbitrary order. The study was held in a room where the participant was undisturbed.

    Results: The results showed that no muscle deformation was preferred in all cases where a statistical difference could be found.

    Conclusions: While the results show that the control condition is mostly preferred, most cases did not yield a conclusive result. Thus further research in the area is necessary.

  • 85.
    Strand, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Gunnarsson, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Code Reviewer Recommendation: A Context-Aware Hybrid Approach, 2019. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background. Code reviewing is a commonly used practice in software development. It refers to the process of reviewing new code changes, commonly before they are merged with the code base. However, in order to perform the review, developers need to be assigned to that task. The problems with manual assignment include a time-consuming selection process, a limited pool of known candidates, and a risk of high reuse of the same reviewers (high workload).

    Objectives. This thesis aims to address the above issues with a recommendation system. The idea is to receive feedback from experienced developers in order to expand upon identified reviewer factors, which can be used to determine the suitability of developers as reviewers for a given change, and to develop and implement a solution that uses some of the most promising reviewer factors. The solution can later be deployed and validated through user and reviewer feedback in a real large-scale project. The developed recommendation system is named Carrot.

    Methods. An improvement case study was conducted at Ericsson. The identification of reviewer factors was done through a literature review and semi-structured interviews. Validation of Carrot’s usability was conducted through static analysis, user feedback, and static validation.

    Results. The results show that Carrot can help identify adequate non-obvious reviewers and be of great assistance to new developers. There are mixed opinions on Carrot’s ability to assist with workload balancing and decrease of review lead time. The recommendations can be performed in a production environment in less than a quarter of a second.

    Conclusions. The implemented and validated approach indicates possible usefulness in performing recommendations, but could benefit significantly from further improvements. Many of the problems seen with the recommendations seem to be a result of corner-cases that are not handled by the calculations. The problems would benefit considerably from further analysis and testing.

  • 86.
    Tewolde, Vincent
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Comparison of authentication options for MQTT communication in an IoT based smart grid solution, 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. Smart grid is a new technology that focuses on utilising renewable energy alongside the current infrastructure. It aims to contribute to a sustainable future by implementing IoT devices in the electrical grid to adjust electricity flow and increase energy efficiency. By combining the current infrastructure with information technology, many security questions arise. This paper focuses on the authentication of the IoT devices connected with the MQTT protocol.

    Objectives. The study aims to discover a preferable MQTT authentication method adapted for Techinova’s infrastructure, with their requirements in consideration.

    Methods. A literature review was performed to obtain fundamental authentication methods and to distinguish different security approaches. Experiments were executed in a test environment to gather detailed information, gain a deeper understanding and discover security vulnerabilities.

    Results. The results derive from three experiments comparing the security flaws of the selected authentication options.

    Conclusions. The results suggest that implementing TLS contributes to secure authentication and communication between the IoT devices and the broker without delaying the transmission. However, further research could obtain other relevant data eventuating in different results.

  • 87.
    Tlatlik, Max Lukas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Volume rendering with Marching cubes and async compute2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    With the addition of the compute shader stage for GPGPU hardware it has become possible to run CPU-like programs on modern GPU hardware. The greatest benefit can be seen for algorithms that are of a highly parallel nature, and in the case of volume rendering the Marching cubes algorithm makes for a great candidate due to its simplicity and parallel nature. For this thesis the Marching cubes algorithm was implemented on a compute shader and used in a DirectX 12 framework to determine if GPU frametime performance can be improved by executing the compute command queue parallel to the graphics command queue. Results from performance benchmarks show that a gain is present for each benchmarked configuration, and the largest gains, up to 52%, are seen for smaller workloads. This information could therefore prove useful for game developers who want to improve framerates or decrease development time, but also in other fields such as volume rendering for medical images.

  • 88.
    Tulek, Zerina
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arnell, Louise
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Facebook Eavesdropping Through the Microphone for Marketing Purpose2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. As long as Facebook has existed, advertisements have been present in the application in one way or another. The ads have evolved and become more sophisticated over the years. Today, Facebook creates groups of members with specific attributes, and advertisers request the groups to whom Facebook shows their advertisements. Besides this, Facebook receives information from other sources such as browser cookies and ad pixels. All information that Facebook receives or collects is used in its algorithms to target relevant advertisements at each user.

    Objectives. To examine the possibility of Facebook eavesdropping through the microphone for marketing purposes, and to identify any keywords mapped between a spoken conversation and advertisements.

    Methods. Five controlled experiments were performed with two test phones and two control phones. These were treated equally, except that the test phones were exposed to spoken conversations containing randomly chosen products, companies and brands. The content of the phones was compared to see whether advertisements were adapted to the spoken conversations in the test phones but not in the control phones.

    Results. No sponsored advertisements were present in the Facebook and Instagram applications. Messenger contained ads indicating that Facebook might analyse the content of private messages to adapt advertisements. After adding the Wish application to the research, the results remained the same. Other content in the Facebook news feed was analysed; however, it did not contain any evidence that Facebook eavesdrops on spoken conversations for marketing purposes.

    Conclusions. The experiments conducted were not sufficient to trigger sponsored advertisements. Therefore, it could not be determined whether or not Facebook is eavesdropping through the microphone.

  • 89.
    Uppströmer, Viktor
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Råberg, Henning
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Detecting Lateral Movement in Microsoft Active Directory Log Files: A supervised machine learning approach2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Cyber attacks pose a high threat to companies and organisations worldwide. With the cost of a data breach reaching $3.86 million on average, the demand is high for a rapid solution to detect cyber attacks as early as possible. Advanced persistent threats (APT) are sophisticated cyber attacks with long persistence inside the network. During an APT, the attacker will spread its foothold over the network. This stage, which is one of the most critical steps in an APT, is called lateral movement. The purpose of the thesis is to investigate lateral movement detection with a machine learning approach. Five machine learning algorithms are compared using repeated cross-validation followed by statistical testing to determine the best performing algorithm and feature importance. Features used for learning the classifiers are extracted from Active Directory log entries that relate to each other through a shared workstation, IP, or account name. These features are the basis of a semi-synthetic dataset, which constitutes a multiclass classification problem.

    The experiment concludes that all five algorithms perform with an accuracy of 0.998. RF displays the highest F1-score (0.88) and recall (0.858), SVM performs best on the precision metric (0.972), and DT has the lowest computational cost (1237 ms). Based on these results, the thesis concludes that the algorithms RF, SVM, and DT perform best in different scenarios. For instance, SVM should be used if a low number of false positives is favoured. If generally balanced performance across multiple metrics is preferred, RF performs best. The results also show that a significant number of the examined features can be disregarded in future experiments, as they do not impact the performance of any classifier.
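    The precision, recall, F1-score and accuracy figures compared in this abstract all derive from the counts in a confusion matrix. A minimal illustration of how the four metrics relate (not the thesis code; the helper name is hypothetical, and the two-class case is shown for simplicity):

    ```python
    def binary_metrics(tp, fp, fn, tn):
        """Derive the four reported metrics from confusion-matrix counts.

        tp/fp/fn/tn: true positives, false positives,
        false negatives, true negatives.
        """
        precision = tp / (tp + fp)   # high when false positives are rare (SVM's strength above)
        recall = tp / (tp + fn)      # high when few attacks are missed
        f1 = 2 * precision * recall / (precision + recall)  # balance of the two (RF's strength)
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        return precision, recall, f1, accuracy
    ```

    This also shows why a near-identical accuracy of 0.998 across all five algorithms is compatible with visibly different precision, recall and F1: with rare positives, the large `tn` count dominates accuracy while the other three metrics ignore it.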

  • 90.
    Vishnubhotla, Sai Datta
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Supplementary Material of: "Understanding the Perceived Relevance of Capability Measures: A Survey of Agile Software Development Practitioners"2019Other (Other (popular science, discussion, etc.))
    Abstract [en]

    This document contains the supplementary material of the paper titled: "Understanding the Perceived Relevance of Capability Measures: A Survey of Agile Software Development Practitioners" 

  • 91.
    Vishnubhotla, Sai Datta
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Investigating the relationship between personalities and team climate of software professionals in a telecom companyIn: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025Article in journal (Other academic)
    Abstract [en]

    Context: Previous research found that the performance of a team depends not only on the team personality composition, but also on the interactive effects of team climate. Although investigation of personalities associated with software development has been an active research area over the past decades, there has been very limited research in relation to team climate.

    Objective: Our study investigates the association between the five factor model personality traits (openness to experience, conscientiousness, extraversion, agreeableness and neuroticism) and the factors related to team climate (team vision, participative safety, support for innovation and task orientation) within the context of agile teams working in a telecom company.

    Method: A survey was used to gather data on the personality characteristics and team climate perceptions of 43 members from eight agile teams. The data was initially used for correlation analysis; then, regression models were developed for predicting the personality traits related to team climate perception.

    Results: We observed a statistically significant positive correlation between agreeableness and participative safety (r = 0.37), and also between openness to experience and support for innovation (r = 0.31). Additionally, agreeableness was observed to be positively correlated with overall team climate (r = 0.35). Further, from the regression models, we observed that personality traits accounted for less than 15% of the variance in team climate.

    Conclusion: A person’s ability to easily get along with team members (agreeableness) has a significant positive influence on the perceived level of team climate. Results from our regression analysis suggest that further data may be needed, and/or there are other human factors, in addition to personality traits, that should also be investigated with regard to their relationship with team climate. Overall, the relationships identified in our study are likely to be applicable to organizations within the telecommunications domain that use scrum methodology for software development.
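    The r values reported above are Pearson correlation coefficients; the computation can be sketched as follows (purely illustrative, not the study's analysis code; the function name is hypothetical):

    ```python
    import math

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length samples."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))   # unnormalised covariance
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))           # spread of xs
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))           # spread of ys
        return cov / (sx * sy)
    ```

    In a simple linear regression, r squared gives the fraction of variance explained, so even the strongest reported correlation (r = 0.37) corresponds to roughly 14% of the variance — consistent with the "less than 15%" regression result above.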

  • 92.
    Vishnubhotla, Sai Datta
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Understanding the Perceived Relevance of Capability Measures: A Survey of Agile Software Development PractitionersIn: Article in journal (Other academic)
    Abstract [en]

    Context: A significant number of studies have discussed various human aspects of software engineers over the past years. However, in light of the swift, incremental and iterative nature of Agile Software Development (ASD) practices, establishing deeper insights into capability measurement is crucial, as both individual and team capability can affect software development performance and project success.

    Objective: Our study investigates how agile practitioners perceive the relevance of individual and team level measures, pertaining to professional, social and innovative aspects, for characterizing the capability of an agile team and its members.

    Method: We undertook a Web-based survey using a questionnaire built based on the capability measures identified from our previous Systematic Literature Review (SLR). This questionnaire sought information about agile practitioners’ perceptions of individual and team capability measures.

    Results: We received 60 usable responses, corresponding to a response rate of 17% from the original sampling frame. Our results indicate that 127 individual and 28 team capability measures were considered relevant by the majority of the practitioners. Our survey also identified seven individual and one team capability measure that had not previously been characterized by our SLR.

    Conclusion: In practitioners’ opinion, an agile team member’s state of being answerable or accountable for things within one's control (responsibility) and the ability to feel or express doubt and raise objections (questioning skills), are the two measures that significantly represent the member’s capability. Overall, the findings from our study shed light on the sparsely explored field of capability measurement in ASD. Our results can be helpful to practitioners in reforming their team composition decisions.

  • 93.
    Warnhag, Oskar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Wedzinga, Nick
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Evaluating the Effects of Pre-Attentive Processing on Player Performance and Perception in Platform Video Games2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 94.
    Westphal, Florian
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lavesson, Niklas
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A Case for Guided Machine Learning2019In: Machine Learning and Knowledge Extraction / [ed] Andreas Holzinger, Peter Kieseberg, A Min Tjoa and Edgar Weippl, Springer, 2019, p. 353-361Conference paper (Refereed)
    Abstract [en]

    Involving humans in the learning process of a machine learning algorithm can have many advantages ranging from establishing trust into a particular model to added personalization capabilities to reducing labeling efforts. While these approaches are commonly summarized under the term interactive machine learning (iML), no unambiguous definition of iML exists to clearly define this area of research. In this position paper, we discuss the shortcomings of current definitions of iML and propose and define the term guided machine learning (gML) as an alternative.

  • 95.
    Willemsen, Mattias
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Evaluating player performance and usability of graphical FPS interfaces in VR2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. When designing video games for Virtual Reality, graphical user interfaces (GUIs) cannot always be designed as they have been for traditional video games. An often recommended approach is to merge the interface with the game world, but it is unclear if this is the best idea in all cases. As the market for Virtual Reality is growing quickly, more research is needed to create an understanding of how GUIs should be designed for Virtual Reality.

    Objectives. The thesis researches existing GUI type classifications and adapts them for VR. A method to compare the GUI types to each other is selected and conclusions are drawn about how they affect player performance, usability, and preference. 

    Methods. A VR FPS game is developed and an experiment is designed using it. The experiment tests the player's performance with each of three distinct GUI types and also presents questionnaires to get their personal preference.

    Results. Both player performance and subjective opinion seem to favour the geometric GUI.

    Conclusions. The often recommended approach for designing GUI elements as part of the game world may not always be the best option, as it may sacrifice usability and performance for immersion.

  • 96.
    Wu, Qian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Segmentation-based Retinal Image Analysis2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Diabetic retinopathy is the most common cause of new cases of legal blindness in people of working age. Early diagnosis is the key to slowing the progression of the disease, thus preventing blindness. Retinal fundus image is an important basis for judging these retinal diseases. With the development of technology, computer-aided diagnosis is widely used.

    Objectives. The thesis investigates whether there exist specific regions that could assist in better prediction of retinopathy; that is, to find the region in the fundus image that works best for retinopathy classification with the use of computer vision and machine learning techniques.

    Methods. An experiment was used as the research method. With image segmentation techniques, the fundus image is divided into regions to obtain an optic disc dataset, a blood vessel dataset, and an other-regions (regions other than blood vessel and optic disc) dataset. These datasets and the original fundus image dataset were tested on Random Forest (RF), Support Vector Machine (SVM) and Convolutional Neural Network (CNN) models, respectively.

    Results. It is found that the results on different models are inconsistent. As compared to the original fundus image, the blood vessel region exhibits the best performance on the SVM model and the other regions perform best on the RF model, while the original fundus image has higher prediction accuracy on the CNN model.

    Conclusions. The other-regions dataset has more predictive power than the original fundus image dataset on the RF and SVM models. On the CNN model, extracting features from the fundus image does not significantly improve predictive performance as compared to using the entire fundus image.

  • 97.
    Åleskog, Christoffer
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ljungberg Fayyazuddin, Salomon
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Comparing node-sorting algorithms for multi-goal pathfinding with obstacles2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Pathfinding plays a big role in both digital games and robotics, and is used in many different ways. One of them is multi-goal pathfinding (MGPF), which is used to calculate paths from a start position to a destination with the condition that the resulting path goes through a series of goals on the way to the destination. Research on this topic is for the most part sparse, and when the complexity is increased through obstacles introduced to the scenario, there are only a few articles in the field that relate to the problem.

    Objectives. The objective of this thesis is to conduct an experiment comparing four algorithms for solving the MGPF problem on six different maps with obstacles, and then to analyse and draw conclusions on which of the algorithms is best suited for the MGPF problem. The first is the traditional Nearest Neighbor algorithm, the second is a variation on the Greedy Search algorithm, and the third and fourth are variations on the Nearest Neighbor algorithm.

    Methods. To reach the objectives, all four algorithms are tested fifty times on six different maps of varying sizes and obstacle layouts.

    Results. The data from the experiment is compiled in graphs for all the different maps, with the time to calculate a path and the path length as the metrics. The averages of all the metrics are put in tables to visualise the differences between the results for the four algorithms.

    Conclusions. The dynamic version of the Nearest Neighbor algorithm gives the best result if both metrics are taken into account. Otherwise, the common Nearest Neighbor algorithm gives the best results with respect to the time taken to calculate the paths, and the Greedy Search algorithm creates the shortest paths of all the tested algorithms.
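    The traditional Nearest Neighbor heuristic this abstract compares against can be sketched in a few lines: repeatedly visit the closest unvisited goal, then head to the destination. A minimal illustration with straight-line distances and no obstacles (not the thesis implementation, which would instead measure distances along obstacle-aware paths; the function name is hypothetical):

    ```python
    import math

    def nearest_neighbor_order(start, goals, destination):
        """Greedily order goals by always visiting the nearest unvisited one,
        then finish at the destination. Points are (x, y) tuples."""
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        order, current, remaining = [start], start, list(goals)
        while remaining:
            nxt = min(remaining, key=lambda g: dist(current, g))  # closest unvisited goal
            remaining.remove(nxt)
            order.append(nxt)
            current = nxt
        order.append(destination)
        return order
    ```

    The greedy choice at each step is what makes the heuristic fast, and also what can make its paths longer than those of the Greedy Search variant the thesis finds produces the shortest paths.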
