  • 1.
    Ahmad, Al Ghaith
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Abd ULRAHMAN, Ibrahim
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Matching ESCF Prescribed Cyber Security Skills with the Swedish Job Market: Evaluating the Effectiveness of a Language Model (2023). Independent thesis Basic level (degree of Bachelor), 12 credits / 18 HE credits. Student thesis.
    Abstract [en]

    Background: As the demand for cybersecurity professionals continues to rise, it is crucial to identify the key skills necessary to thrive in this field. This research project sheds light on the cybersecurity skills landscape by analyzing the recommendations provided by the European Cybersecurity Skills Framework (ECSF), examining the most required skills in the Swedish job market, and investigating the common skills identified through the findings. The project utilizes the large language model, ChatGPT, to classify common cybersecurity skills and evaluate its accuracy compared to human classification.

    Objective: The primary objective of this research is to examine the alignment between the European Cybersecurity Skills Framework (ECSF) and the specific skill demands of the Swedish cybersecurity job market. This study aims to identify common skills and evaluate the effectiveness of a Language Model (ChatGPT) in categorizing jobs based on ECSF profiles. Additionally, it seeks to provide valuable insights for educational institutions and policymakers aiming to enhance workforce development in the cybersecurity sector.

    Methods: The research begins with a review of the European Cybersecurity Skills Framework (ECSF) to understand its recommendations and its methodology for defining cybersecurity skills, and to delineate the cybersecurity profiles and their corresponding key skills as outlined by the ECSF. Subsequently, a Python-based web crawler was implemented to gather data on cybersecurity job announcements from the Swedish Employment Agency's website. This data is analyzed to identify the cybersecurity skills most frequently required by employers in Sweden. The Language Model (ChatGPT) is utilized to classify these positions according to ECSF profiles. Concurrently, two human agents manually categorize the jobs to serve as a benchmark for evaluating the accuracy of the Language Model, allowing for a comprehensive assessment of its performance.

    Results: The study thoroughly reviews and cites the recommended skills outlined by the ECSF, offering a comprehensive European perspective on key cybersecurity skills (Tables 4 and 5). Additionally, it identifies the most in-demand skills in the Swedish job market, as illustrated in Figure 6. The research reveals the matching between ECSF-prescribed skills in different profiles and those sought after in the Swedish cybersecurity market. The skills of the profiles 'Cybersecurity Implementer' and 'Cybersecurity Architect' emerge as particularly critical, representing over 58% of the market demand. This research further highlights shared skills across various profiles (Table 7).

    Conclusion: This study highlights the matching between the European Cybersecurity Skills Framework (ECSF) recommendations and the evolving demands of the Swedish cybersecurity job market. Through a review of ECSF-prescribed skills and a thorough examination of the Swedish job landscape, this research identifies crucial areas of alignment. Significantly, the skills associated with 'Cybersecurity Implementer' and 'Cybersecurity Architect' profiles emerge as central, collectively constituting over 58% of market demand. This emphasizes the urgent need for educational programs to adapt and harmonize with industry requisites. Moreover, the study advances our understanding of the Language Model's effectiveness in job categorization. The findings hold significant implications for workforce development strategies and educational policies within the cybersecurity domain, underscoring the pivotal role of informed skills development in meeting the evolving needs of the cybersecurity workforce.

    Download full text (pdf)
    Matching ESCF Prescribed Cyber Security Skills with the Swedish Job Market: Evaluating the Effectiveness of a Language Model
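
    A short illustrative sketch, not taken from the thesis: one way to score the Language Model's classifications against the human benchmark is plain accuracy together with Cohen's kappa, which corrects for agreement expected by chance. The ECSF profile labels below are invented stand-ins.

        # Compare hypothetical LLM-assigned profile labels against human annotations.
        from sklearn.metrics import accuracy_score, cohen_kappa_score

        # Invented ECSF profile labels for the same five job ads.
        human_labels = ["Implementer", "Architect", "Implementer", "Analyst", "Architect"]
        model_labels = ["Implementer", "Architect", "Analyst", "Analyst", "Architect"]

        print("Accuracy:", accuracy_score(human_labels, model_labels))
        # Kappa discounts the agreement two raters would reach by chance alone.
        print("Cohen's kappa:", cohen_kappa_score(human_labels, model_labels))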
  • 2.
    Aravapalli, Naga Sai Gayathri
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Palegar, Manoj Kumar
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Comparison of Machine Learning Algorithms on Identifying Autism Spectrum Disorder (2023). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background: Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disorder that affects social communication, behavior, and cognitive development. Patients with autism have a variety of difficulties, such as sensory impairments, attention issues, learning disabilities, mental health issues like anxiety and depression, as well as motor and learning issues. The World Health Organization (WHO) estimates that one in 100 children has ASD. Although ASD cannot be completely treated, early identification of its symptoms might lessen its impact, and early identification of ASD can significantly improve the outcome of interventions and therapies. So, it is important to identify the disorder early. Machine learning algorithms can help in predicting ASD. In this thesis, Support Vector Machine (SVM) and Random Forest (RF) are the algorithms used to predict ASD.

    Objectives: The main objective of this thesis is to build and train models using machine learning (ML) algorithms, both with default parameters and with hyper-parameter tuning, and to determine, based on a comparison of the two experiments, the most accurate model for predicting whether a person is suffering from ASD.

    Methods: Experimentation is the method chosen to answer the research questions, as it helped in finding the most accurate model for predicting ASD. The experimentation is preceded by data preparation, including splitting the data and applying feature selection to the dataset. In the two experiments that follow, the models were first trained with default parameters and then with hyper-parameter tuning, and performance metrics were collected for both. Based on the comparison, the most accurate model was applied to predict ASD.

    Results: In this thesis, we chose the two algorithms SVM and RF to train the models. After experimentation and training of the models with hyperparameter tuning, SVM obtained the highest scores, with an accuracy of 96% and an F1 score of 97% on the test data, outperforming the RF model in predicting ASD.

    Conclusions: The models were trained using the two ML algorithms, SVM and RF, in two experiments: in experiment 1 the models were trained using default parameters, and in experiment 2 using hyper-parameter tuning; accuracy and F1 scores on the test data were obtained in both. By comparing the performance metrics, we came to the conclusion that SVM is the most accurate algorithm for predicting ASD.

    Download full text (pdf)
    Comparison of Machine Learning Algorithms on Identifying Autism Spectrum Disorder
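
    A minimal sketch of the two-experiment setup described above, on synthetic data: SVM and RF with default parameters versus cross-validated hyper-parameter tuning. The parameter grids are assumptions for illustration, not the thesis settings.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score, f1_score
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=500, n_features=20, random_state=42)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

        models = {
            "SVM": (SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}),
            "RF": (RandomForestClassifier(), {"n_estimators": [100, 300], "max_depth": [None, 10]}),
        }
        for name, (est, grid) in models.items():
            # Experiment 1: default parameters.
            default = est.__class__().fit(X_tr, y_tr)
            # Experiment 2: hyper-parameter tuning via cross-validated grid search.
            tuned = GridSearchCV(est, grid, cv=5).fit(X_tr, y_tr)
            for tag, m in [("default", default), ("tuned", tuned)]:
                pred = m.predict(X_te)
                print(name, tag, "acc=%.2f f1=%.2f" % (accuracy_score(y_te, pred), f1_score(y_te, pred)))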
  • 3.
    Axelsson, Jonas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparison of user accuracy and speed when performing 3D game target practice using a computer monitor and virtual reality headset (2017). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Consumer-grade Virtual Reality (VR) headsets are on the rise, and with them comes an increasing number of digital games which support VR. How players perceive the gameplay and how well they perform at the game's tasks can be key factors in designing new games.

    This master’s thesis aims to evaluate whether a user can perform a game task, specifically target practice, in less time and/or more accurately when using a VR headset as opposed to a computer screen and mouse. To gather statistics and measure the differences, an experiment was conducted using a test application developed alongside this report. The experiment recorded accuracy scores and time taken in tests performed by 35 test participants using both a VR headset and a computer screen.

    The resulting data sets are presented in the results chapter of this report. A Kolmogorov-Smirnov normality test and Student's paired-samples t-test were performed on the data to establish its statistical significance. After analysis, the results are reviewed, discussed and conclusions are made.

    This study concludes that when performing the experiment, the use of a VR headset decreased the user's accuracy and, to a lesser extent, also increased the time the user took to hit all targets. An argument was made that most users' longer previous experience with computer screen and mouse gave this method an unfair advantage; with equally long training, VR use might score similar results.

    Download full text (pdf)
    fulltext
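
    A hedged sketch of the statistical analysis named above: a normality check on the paired differences followed by Student's paired-samples t-test. The per-participant scores are invented.

        from scipy import stats

        monitor_scores = [12.1, 10.4, 11.8, 13.0, 9.7, 12.5]   # hypothetical accuracy scores
        vr_scores      = [10.2,  9.1, 10.9, 11.5, 8.8, 11.0]

        diff = [m - v for m, v in zip(monitor_scores, vr_scores)]
        # Kolmogorov-Smirnov test of the standardized paired differences against a normal.
        print(stats.kstest(stats.zscore(diff), "norm"))
        # Student's paired-samples t-test on the two conditions.
        print(stats.ttest_rel(monitor_scores, vr_scores))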
  • 4.
    Axelsson, Sam
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Eriksson, Filip
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance Analysis of a Godot Game-Agnostic Streaming Tool (2023). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background. Streaming games is traditionally done with video and audio, both for watching on websites like Twitch and YouTube and for playing via cloud gaming services. Streaming with video and audio requires good internet speeds to be of satisfactory quality; therefore compression algorithms are used. Compression algorithms decrease bandwidth usage, but they also lower the quality of the stream. An alternative would be to stream game states and user inputs to recreate the game state for the viewer, which would lower the bandwidth usage while not compromising the quality.

    Objectives. This thesis aims to explore and evaluate a generalized streaming tool for the Godot engine, where game states and user inputs are sent between two game instances to synchronize the host game with the client game. The tool is then compared to a video-and-audio streaming setup in terms of image quality, bandwidth, and processing power.

    Methods. A combination of state replication and client simulation has been implemented in a streaming tool for games. Bandwidth, image quality, and processing power metrics are gathered for seven games streamed with state replication and client simulation. The same performance metrics have also been gathered when streaming video and audio data. To validate the streaming tool, images from the host and client of the streaming tool were visually compared for the seven games.

    Results. Compared to streaming video and audio data, there was shown to be an overhead for streaming game states and user inputs. This overhead causes multiple games to have significant performance issues in terms of CPU processing power. In terms of image quality and bandwidth, the generalized streaming tool performed better.

    Conclusions. The results showed that a generalized streaming tool for the Godot engine can be successfully implemented. The implementation of the Godot streaming tool didn't work perfectly for every tested game, but most games use less bandwidth and there is no loss in image quality. However, the streaming tool requires better hardware than traditional video and audio streaming.

    Download full text (pdf)
    fulltext
  • 5.
    Bengtsson, Hampus
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Knutsmark, Ludvig
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lagring av sekretessreglerade uppgifter i molntjänster: En analys kring förutsättningar för användning av molnleverantörer bland myndigheter [Storage of records subject to confidentiality in cloud services: an analysis of the conditions for public authorities' use of cloud providers] (2020). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background: Swedish authorities' use of popular cloud providers is today the subject of an intense debate. Legislation like the U.S. CLOUD Act applies across borders, which makes data stored on servers located in Sweden subject to U.S. law. Several Swedish organizations argue that using the affected cloud providers to store sensitive records breaks the Swedish law Offentlighets- och sekretesslagen. eSam, the program for collaboration between Swedish authorities, says that it may be possible to comply with the law if suitable encryption is used, but states that more research is needed.

    Objectives: The main objective of this thesis is to determine which requirements on encryption mechanisms must be met for Swedish authorities to use cloud providers affected by legislation like the CLOUD Act without breaking Swedish law. The three most popular cloud providers, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud, are compared and examined to see whether the requirements on encryption are met. Historically, providers' access to encryption keys is a major threat to data confidentiality. Therefore, an alternative encryption method that withholds both encryption keys and clear text from the provider, while preserving functionality, is also researched.

    Method: To create fair and sound requirements on encryption mechanisms, several threat models are created from the perspective of current and future laws. A SWOT analysis is used to compare the cloud providers. To research the possibility and usability of alternative encryption in the cloud, a system that withholds both encryption keys and clear-text data from the provider is proposed.

    Result: The result shows that the most popular services like Office 365 and G Suite are not suitable for use by Swedish authorities for the storage of sensitive records. Instead, Swedish authorities can use IaaS-services from both AWS and Microsoft Azure for storage of sensitive records - if the requirements for encryption mechanisms are met. The result also shows that alternative encryption methods can be used as part of a document management system.

    Conclusion: Swedish authorities should strive to expand their digitalization but should be careful about the usage of cloud providers. If laws change, or political tensions rise, the requirements for the encryption mechanisms proposed in this thesis would not be applicable. In a different situation, Swedish authorities should use alternative solutions which are not affected by an altered situation. One such alternative solution is the document management system proposed in this thesis.

    Download full text (pdf)
    fulltext
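
    A minimal sketch of the alternative-encryption idea discussed above: the authority encrypts locally, so the provider only ever stores ciphertext and never holds the key. It uses the 'cryptography' package; upload_to_cloud() is a hypothetical stand-in, not a real API.

        from cryptography.fernet import Fernet

        key = Fernet.generate_key()          # kept on-premises, never uploaded
        cipher = Fernet(key)

        document = "sekretessreglerade uppgifter".encode("utf-8")
        ciphertext = cipher.encrypt(document)
        # upload_to_cloud(ciphertext)        # only ciphertext would leave the authority

        # Round-trip check: only the key holder can recover the clear text.
        assert cipher.decrypt(ciphertext) == document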
  • 6.
    Bredfell, Adam
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Industrial Economics.
    Roll, Gustav
    Blekinge Institute of Technology, Faculty of Engineering, Department of Industrial Economics.
    Predicting Cross-Platform Performance: A Case Study on Evaluating Predictive Models and Exploring the Economic Consequences in Software Testing (2023). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background: In today's digital world, there is increasing importance on cross-platform performance testing and the challenges faced by businesses in achieving efficient performance for applications across multiple platforms. Predictive models, such as machine learning and regression, have emerged as potential solutions to predict performance to be quickly analyzed, thus eliminating the need to execute an entire environment. Predicting performance can help firms save time and resources to keep pace with market demand, but potential risks and limitations need to be considered. With the increasing availability of data, predictive models have become effective problem-solving methods in various industries, including the testing industry.

    Objectives: This research aims to investigate the economic consequences and opportunities of implementing predictive models to predict cross-platform performance for firms operating in the software market, and to evaluate the performance of three models when predicting cross-platform performance. The study aims to add arguments to help businesses make informed decisions on the adoption of predictive models.

    Methods: The methodology employed in this research involved evaluating Multiple Linear Regression, Multiple Neural Network, and Random Forest, to gain insight into how such models perform when predicting performance. In addition to this analysis, interviews were conducted with industry experts to get an understanding of current processes and the potential benefits of adopting predictive models, and to identify the economic consequences of implementing such models.

    Results: The result shows that Multiple Linear Regression was the most promising one, with an R2 value of 0.79. Additionally, the research revealed that the current testing process faces difficulties when testing on multiple platforms. While predicting performance can provide cost and time savings, challenges and risks, such as data privacy and model trust, must also be considered.

    Conclusions: Multiple Linear Regression exhibited the most favorable performance among the evaluated models, with consistent results across all test runs, indicating a linear relationship. The economic consequences identified were the continuously required maintenance and updates of predictive models to remain accurate throughout the lifecycle. This implies ongoing costs, such as the complexity and cost of generating and storing the necessary data to train the models. Thus, the adoption of predictive models is still in its early stages, and while there are significant benefits, there are also challenges to address.

    Download full text (pdf)
    fulltext
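
    A minimal sketch of the best-performing model in the study: a multiple linear regression evaluated with R2 on held-out data. The synthetic features are invented stand-ins for the platform and configuration attributes.

        from sklearn.datasets import make_regression
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split

        # Hypothetical platform/configuration features -> performance metric.
        X, y = make_regression(n_samples=300, n_features=8, noise=15.0, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        model = LinearRegression().fit(X_tr, y_tr)
        print("R2 on held-out data:", r2_score(y_te, model.predict(X_te)))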
  • 7.
    Cherukuri, Prudhvi Nath Naidu
    et al.
    Blekinge Institute of Technology.
    Ganja, Sree Kavya
    Blekinge Institute of Technology.
    Comparison of GCP and AWS using usability heuristic and cognitive walkthrough while creating and launching Virtual Machine instances in Virtual Private Cloud (2021). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Cloud computing has become increasingly important over the years, as the need for computational resources, data storage, and networking capabilities in the field of information technology has increased. Several large corporations offer these services to small companies or to end users, such as GCP, AWS, Microsoft Azure, IBM Cloud, and many more. The main aim of this thesis is to compare the GCP and AWS consoles in terms of the user interface while performing tasks related to the compute engine. A cognitive walkthrough has been performed on tasks such as the creation of a VPC, the creation of VM instances, and launching them; from the results, the two interfaces are compared using usability heuristics.

    Background: As the usage of cloud computing has increased over the years, the companies offering these services have grown with it. Though there are many cloud services available in the market, users will always choose the services that are more flexible and efficient to use. For this reason, our research compares cloud services in terms of user interaction and user experience. Digging deeper into user interaction and experience, evaluation techniques and principles such as the cognitive walkthrough and usability heuristics are suitable for our research. Here the comparison is made between the GCP and AWS user interfaces while performing some tasks related to the compute engine.

    Objectives: The main objectives of this thesis are to create a VPC and VM instances, and to launch VM instances, in two different cloud services, GCP and AWS, and to find out which of the two offers the better user interface from the perspective of the user.

    Method: The process of finding the best user interface among the GCP and AWS cloud services is based on a cognitive walkthrough and a comparison against usability heuristics. The cognitive walkthrough is performed on chosen tasks in both services, which are then compared using usability heuristics to obtain the results of our research.

    Results: The results obtained from the cognitive walkthrough and the comparison with usability heuristics are shown in graphical formats such as bar graphs and pie charts, and the comparison results are shown in tabular form. The results cannot be generalized, as they are observational results from a cognitive walkthrough and a usability heuristic evaluation.

    Conclusion: After performing the above-mentioned methods, it is observed that the user interface of GCP is more flexible and efficient in terms of user interaction and experience. Though the user experience may vary based on users' experience level with cloud services, in our research the novice and moderate users chose GCP as the better interactive system over AWS.

    Keywords: Cloud computing, VM instance, Cognitive walkthrough, Usability heuristics, User-interface.

    Download full text (pdf)
    Comparison of GCP and AWS using usability heuristic and cognitive walkthrough while creating and launching Virtual Machine instances in Virtual Private Cloud
  • 8.
    Ekelund, Stefan
    et al.
    Blekinge Institute of Technology.
    Bengter, Johan
    Blekinge Institute of Technology.
    Inlevelse genom realism: En undersökning om relation mellan inlevelse och realism [Immersion through realism: a study of the relationship between immersion and realism] (2016). Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    For this bachelor’s thesis we have investigated how we can make use of realism in order to create immersion in a virtual environment. Our goal is to investigate the relationship between realism and immersion in order to create an understanding of how to create immersion and build better gaming experiences. To investigate the problem area, we created a 3D environment where we used realism as a foundation to discuss our choices and the choices made in other games. We concluded that there is no perfect method to fully measure and define immersion, but we found strong connections between realism and immersion. In the end we feel that one should always work with realism as a foundation when attempting to create immersion in a virtual environment.

    Download full text (pdf)
    BTH2016Bengter
  • 9.
    El-Fouly, Fatma H.
    et al.
    Higher Institute of Engineering, El-Shorouk Academy, EGY.
    Khedr, Ahmed Y.
    University of Ha’il, SAU.
    Sharif, Md. Haidar
    University of Ha’il, SAU.
    Alreshidi, Eissa Jaber
    University of Ha’il, SAU.
    Yadav, Kusum
    University of Ha’il, SAU.
    Kusetogullari, Hüseyin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ramadan, Rabie A.
    University of Ha’il, SAU.
    ERCP: Energy-Efficient and Reliable-Aware Clustering Protocol for Wireless Sensor Networks (2022). In: Sensors, E-ISSN 1424-8220, Vol. 22, no 22, article id 8950. Article in journal (Refereed).
    Abstract [en]

    Wireless Sensor Networks (WSNs) have been around for over a decade and have been used in many important applications. Energy and reliability are two of the major problems with these kinds of applications. Reliable data delivery is an important issue in WSNs because it is a key part of how well data are sent. At the same time, energy consumption in battery-based sensors is another challenge. Therefore, efficient clustering and routing are techniques that can be used to save sensors' energy and guarantee reliable message delivery. With this in mind, this paper develops an energy-efficient and reliable clustering protocol (ERCP) for WSNs. First, an efficient clustering technique is proposed for sensor nodes' energy savings considering different clustering parameters, including the link quality metric, the energy, the distance to neighbors, the distance to the sink node, and the cluster load metric. The proposed routing protocol works based on the concept of a reliable inter-cluster routing technique that saves energy. The routing decisions are made based on different parameters, such as the energy balance metric, the distance to the sink node, and the wireless link quality. Many experiments and analyses are examined to determine how well the ERCP performs. The experiment results showed that the ERCP protocol performs much better than some of the recent algorithms in both homogeneous and heterogeneous networks.

    Download full text (pdf)
    fulltext
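
    A hedged sketch, not from the paper, of a weighted cluster-head eligibility score over the kinds of metrics ERCP considers (residual energy, link quality, distance to the sink, cluster load). The weights and node values are invented for illustration.

        # Score candidate cluster heads: higher energy and link quality raise the
        # score; distance to the sink and cluster load lower it.
        def cluster_head_score(residual_energy, link_quality, dist_to_sink, cluster_load,
                               w=(0.4, 0.3, 0.2, 0.1)):
            return (w[0] * residual_energy + w[1] * link_quality
                    - w[2] * dist_to_sink - w[3] * cluster_load)

        # Normalized example values for three candidate nodes.
        nodes = {"n1": (0.9, 0.7, 0.4, 0.2),
                 "n2": (0.5, 0.9, 0.1, 0.6),
                 "n3": (0.8, 0.6, 0.8, 0.1)}
        head = max(nodes, key=lambda n: cluster_head_score(*nodes[n]))
        print("elected cluster head:", head)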
  • 10.
    Gustafsson, Fanny
    Blekinge Institute of Technology.
    Investigating whether Elements of Fun in a Gamification Tool Increases Students' Test Results in School: Comparing a Gamification Tool and an Educational Test (2017). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    For achieving an environment where students think it is more fun to learn in school, a Gamification tool could be applied. Today these tools are more commonly used, as the digital world evolves. There must be an understanding of how these tools affect the students in order to use them in school environments. This study investigates whether elements of fun in a Gamification tool increase students' test results, by comparing a Gamification tool and an educational test in school. Data were gathered from two different tests, one with the Gamification tool Kahoot and one without it, performed by 16 middle-school students. As a part of the experiment, the students answered a survey including questions about whether they learned something from using a digital tool in school. Five of the students participated in an interview, whose purpose was to open up further discussions that could provide valuable information for answering the research question. The results of this study show that the test results did not increase with the use of the gamified tool. Most of the students had fewer right answers in Kahoot compared to the traditional test. No statistically significant change occurred between the learning tools, so the null hypothesis could not be rejected. According to the survey, most of the students thought it was more fun using a digital tool in school. The result shows that the use of a Gamification tool affects students' learning negatively, due to the decrease of performance in the test in Kahoot compared to the traditional test. According to the interview, the notable difference between the learning tools is that the gamified tool could potentially create a stressful environment in the classroom compared to the traditional written test. This could be the reason for the decrease in performance of the test in Kahoot. This experiment creates room for further research in this field, but a suggestion would be to do it on a larger scale and to learn more about the elements of fun in learning environments, to get more valid results.

  • 11.
    Holmqvist Berlin, Theo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Asynchronous Divergence-Free Smoothed Particle Hydrodynamics (2021). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background. Fluid simulation is an area of ongoing research. In recent years, simulators have become more realistic and stable, partly by employing the condition of having divergence-free velocity fields. A divergence-free velocity field is a strict constraint that requires a high level of correctness in a simulation. Another recent development is in the subject of performance optimization, where asynchronous time integration is used. Asynchronous time integration means integrating different parts of a fluid with varying time step sizes. Doing so leads to overall larger time step sizes, which improves performance. This thesis combines the divergence-free velocity field condition with asynchronous time stepping in a particle-based simulator.

    Objectives. This thesis aims to achieve a performance speedup by implementing asynchronous time integration into an existing particle-based simulator that assures the velocity field is divergence-free.

    Methods. With an open source simulator employing a divergence-free velocity field as a starting point, asynchronous time integration is implemented. This is achieved by dividing the fluid into three regions, each with their own time step sizes. Introducing asynchronous time integration means significantly lowering the stability of a simulation. This is countered by implementing additional steps to increase stability.

    Results. Roughly a 40% speedup is achieved in two out of three scenes, with visual results similar to those of the original synchronous simulation. In the third scene, there is no performance speedup, as the performance is similar to that of the original simulation. The first two scenes could be sped up further with more aggressive settings for asynchronous time integration. This is however not possible due to stability issues, which are also the reason the third scene does not result in any speedup.

    Conclusions. Asynchronous simulation is shown to be a valid option even alongside a divergence solver. However, occasional unrealistic behavior resembling explosions among the particles does occur. Besides being undesirable behavior, these explosions also decrease performance and prevent more aggressive performance settings from being used. Analysis of their cause, attempted solutions and potential future solutions are provided in the discussion chapter.

    Download full text (pdf)
    fulltext
  • 12.
    Isenstierna, Tobias
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Popovic, Stefan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Computer systems in airborne radar: Virtualization and load balancing of nodes (2019). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Introduction. For hardware used in radar systems of today, technology is evolving at an increasing rate. For existing software in radar systems that relies on specific drivers or hardware, this quickly becomes a problem. When the required hardware is no longer produced or becomes outdated, compatibility problems emerge between the new hardware and existing software. This research will focus on exploring whether virtualization technology can be helpful in solving this problem. Would it be possible to address the compatibility problem with the help of hypervisor solutions, while also maintaining high performance?

    Objectives. The aim with this research is to explore the virtualization technology with focus on hypervisors, to improve the way that hardware and software cooperate within a radar system. The research will investigate if it is possible to solve compatibility problems between new hardware and already existing software, while also analysing the performance of virtual solutions compared to non-virtualized.

    Methods. The proposed method is an experiment where the two hypervisors Xen and KVM will be analysed. The hypervisors will be running on two different systems. A native environment with similarities to a radar system will be built and then compared with the same system, but with hypervisor solutions applied. Research around the area of virtualization will be conducted with focus on security, hypervisor features, and compatibility.

    Results. The results present a proposed virtual environment setup with the hypervisors installed. To address the compatibility issue, an old operating system has been used to prove that the implemented virtualization works. Finally, performance results are presented for the native environment compared against a virtual environment.

    Conclusions. From the results gathered with benchmarks, we can see that individual performance may vary, which is to be expected on different hardware. A virtual setup has been built, including the Xen and KVM hypervisors, together with NAS communication. By running an old operating system as a virtual guest, compatibility has been proven to exist between software and hardware using KVM as the virtual solution. From the results gathered, KVM seems like a good solution to investigate further.

    Download full text (pdf)
    fulltext
  • 13.
    Jlali, Yousra Ramdhana
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Training an Adversarial Non-Player Character with an AI Demonstrator: Applying Unity ML-Agents (2022). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background. Game developers are continuously searching for new ways of populating their vast game worlds with competent and engaging Non-Player Characters (NPCs), and researchers believe Deep Reinforcement Learning (DRL) might be the solution for emergent behavior. Consequently, fusing NPCs with DRL practices has surged in recent years; however, proposed solutions rarely outperform traditional script-based NPCs.

    Objectives. This thesis explores a novel method of developing an adversarial DRL NPC by combining Reinforcement Learning (RL) algorithms. Our goal is to produce an agent that surpasses its script-based opponents by first mimicking their actions.

    Methods. The experiment commences with Imitation Learning (IL) before proceeding with supplementary DRL training where the agent is expected to improve its strategies. Lastly, we make all agents participate in 100-deathmatch tournaments to statistically evaluate and differentiate their deathmatch performances.

    Results. Statistical tests reveal that the agents reliably differ from one another and that our learning agent performed poorly in comparison to its script-based opponents.

    Conclusions. Based on our computed statistics, we can conclude that our solution was unsuccessful in developing a talented hostile DRL agent as it was unable to convey any form of proficiency in deathmatches. No further improvements could be applied to our ML agent due to the time constraints. However, we believe our outcome can be used as a stepping-stone for future experiments within this branch of research.

    Download full text (pdf)
    Training an Adversarial Non-Player Character with an AI Demonstrator - Applying Unity ML-Agents
  • 14.
    Johansson, Christian
    et al.
    NODA, SWE.
    Bergkvist, Markus
    NODA, SWE.
    Geysen, Davy
    EnergyVille, BEL.
    De Somer, Oscar
    EnergyVille, BEL.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Vanhoudt, Dirk
    EnergyVille, BEL.
    Operational Demand Forecasting In District Heating Systems Using Ensembles Of Online Machine Learning Algorithms (2017). In: 15th International Symposium on District Heating and Cooling (DHC15-2016) / [ed] Ulseth, R., Elsevier Science BV, 2017, p. 208-216. Conference paper (Refereed).
    Abstract [en]

    Heat demand forecasting is in one form or another an integrated part of most optimisation solutions for district heating and cooling (DHC). Since DHC systems are demand-driven, the ability to forecast this behaviour becomes an important part of most overall energy efficiency efforts. This paper presents the current status and results from extensive work in the development, implementation and operational service of online machine learning algorithms for demand forecasting. Recent results and experiences are compared to results predicted by previous work done by the authors. The prior work, based mainly on certain decision tree based regression algorithms, is expanded to include other forms of decision tree solutions as well as neural network based approaches. These algorithms are analysed both individually and combined in an ensemble solution. Furthermore, the paper also describes the practical implementation and commissioning of the system in two different operational settings where the data streams are analysed online in real-time. It is shown that the results are in line with expectations based on prior work, and that the demand predictions have a robust behaviour within acceptable error margins. Applications of such predictions in relation to intelligent network controllers for district heating are explored and the initial results of such systems are discussed.

    Download full text (pdf)
    fulltext
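
    A hedged sketch of the ensemble idea described above: several regressors (decision-tree based and neural) trained on the same demand data, with their predictions combined by simple averaging. The feature layout (lagged demand, outdoor temperature, hour of day) and all numbers are invented for illustration.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.neural_network import MLPRegressor
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(1)
        X = rng.uniform(size=(500, 3))            # [lagged demand, temperature, hour]
        y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 500)

        members = [
            DecisionTreeRegressor(max_depth=6).fit(X, y),
            RandomForestRegressor(n_estimators=100).fit(X, y),
            MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y),
        ]
        x_next = np.array([[0.6, 0.2, 0.5]])
        # Ensemble forecast: average the members' individual predictions.
        forecast = np.mean([m.predict(x_next)[0] for m in members])
        print("Ensemble demand forecast:", forecast)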
  • 15.
    Jusufi, Ilir
    et al.
    University of California Davis, USA.
    Nyholm, Dag
    Uppsala University.
    Memedi, Mevludin
    Högskolan Dalarna.
    Visualization of spiral drawing data of patients with Parkinson’s disease (2014). In: Information Visualisation (IV), 2014, p. 346-350. Conference paper (Refereed).
    Abstract [en]

    Patients with Parkinson’s disease (PD) need to be frequently monitored in order to assess their individual symptoms and treatment-related complications. Advances in technology have introduced telemedicine for patients in remote locations. However, data produced in such settings lack much information and are not easy to analyze or interpret compared to traditional, direct contact between the patient and clinician. Therefore, there is a need to present the data using visualization techniques in order to communicate in an understandable and objective manner to the clinician. This paper presents interaction and visualization approaches used to aid clinicians in the analysis of repeated measures of spirography of PD patients gathered by means of a telemetry touch screen device. The proposed approach enables clinicians to observe fine motor impairments and identify motor fluctuations of their patients while they perform the tests from their homes using the telemetry device.

  • 16.
    Kaspersson, Max
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Facial Realism through Wrinkle Maps: The Perceived Impact of Different Dynamic Wrinkle Implementations (2015). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context. Real-time rendering has many challenges to overcome, one of them being character realism. One way to move towards realism is to use wrinkle maps. Although they are already used in several games, there might be room for improvement: common practice suggests using two wrinkle maps; however, if this number can be reduced, both texture usage and workload might be reduced as well.

    Objectives. To determine whether or not it is possible to reduce the number of wrinkle maps from two to one without having any significant impact on the perceived realism of a character.

    Methods. After a base character model was created, a setup was made in Maya so that dynamic wrinkles could be displayed on the character using both one and two wrinkle maps. The face was animated and rendered, displaying emotions using both techniques. A two-alternative forced-choice experiment was then conducted in which the participants selected which implementation, displaying the same facial expression and under the same lighting conditions, they perceived as most realistic.

    Results. Results showed that some facial expressions had more of an impact on perceived realism than others, favoring two wrinkle maps in every case where there was a significant difference. The expressions with the most impact were those that required different kinds of wrinkles in the same area of the face, such as the forehead, where one variant of wrinkles runs more vertically and the other runs horizontally along the forehead.

    Conclusions. Using one wrinkle map cannot fully replicate the effect of using two when it comes to realism. The difference between the implementations depends on the expression being displayed.

    Download full text (pdf)
    fulltext
  • 17.
    Ljungberg, Alexander
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Smedberg, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Discovering and masking environmental features in modern sandboxes (2022). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background. Awareness of cyber attacks in businesses is increasing along with the rising number of cyber incidents. With nearly 350,000 new malware samples detected per day, there is a big incentive to allocate resources to company infrastructure to mitigate malware. These solutions must scale so that they do not become bottlenecks or prohibitively expensive. Therefore, automated solutions have been developed to combat malware, comprising isolated virtual environments (sandboxes), automated analysis, and reports. In response, malware has evolved to become aware of its environment, which has led to an arms race between malware developers and analysts.

    Objectives. In this thesis, we study how malware can identify sandbox environments and attempt to find appropriate values for masking system information (features).

    Methods. First, we research previous techniques to identify sandbox environments and consult with Windows environment experts from Truesec. We found 179 features to examine. Then, we gather a dataset of 2448 non-sandbox samples and 77 sandbox samples with a probing method. We use the statistical test Mann-Whitney U-test to identify features that differ between the dataset's groups. We conduct masking on a dataset level and evaluate it with a method similar to k-fold cross-validation using a random forest classifier. Furthermore, we analyze each feature's ability to detect sandboxes with the feature importance calculated by the Mean Decrease in Impurity (MDI).

    Results. We found that 156 out of 179 features reveal sandbox environments. Seven of those features could independently expose sandboxes, i.e., it was possible to classify all sandboxes and non-sandboxes with only one of them. The masking evaluation indicates that our proposed methods are effective at masking the sandboxes. The feature-importance results showed that Windows Management Instrumentation (WMI) is an ideal source of information when it comes to exposing sandbox environments.

    Conclusions. Based on the results, we conclude that various values can expose a sandbox. Furthermore, we conclude that our method for finding masking values is adequate and that the proposed masking methods successfully mask sandbox samples. Lastly, we conclude that the research field needs to shift its focus from evasion techniques to masking implementations.

    Download full text (pdf)
    fulltext
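
    An illustrative sketch, not the authors' code, of the feature-screening step: a Mann-Whitney U-test on one feature between the non-sandbox and sandbox groups. The sample sizes match the thesis dataset; the values themselves are invented.

        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(0)
        non_sandbox = rng.normal(8, 2, size=2448)   # e.g. an invented numeric feature
        sandbox     = rng.normal(2, 1, size=77)

        # Non-parametric test: do the two groups differ on this feature?
        stat, p = mannwhitneyu(non_sandbox, sandbox)
        if p < 0.05:
            print("feature differs between groups (p=%.3g): candidate for masking" % p)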
  • 18.
    Lundberg, Lars
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Melander, Christian
    Compuverde AB.
    Cache Support in a High Performance Fault-Tolerant Distributed Storage System for Cloud and Big Data (2015). In: 2015 IEEE 29th International Parallel and Distributed Processing Symposium Workshops, IEEE Computer Society, 2015, p. 537-546. Conference paper (Refereed).
    Abstract [en]

    Due to the trends towards Big Data and Cloud Computing, one would like to provide large storage systems that are accessible by many servers. A shared storage can, however, become a performance bottleneck and a single point of failure. Distributed storage systems provide a shared storage to the outside world, but internally they consist of a network of servers and disks, thus avoiding the performance-bottleneck and single-point-of-failure problems. We introduce a cache in a distributed storage system. The cache system must be fault tolerant so that no data is lost in case of a hardware failure. This requirement excludes the use of the common write-invalidate cache consistency protocols. The cache is implemented and evaluated in two steps. The first step focuses on design decisions that improve the performance when only one server uses the same file. In the second step we extend the cache with features that focus on the case when more than one server accesses the same file. The cache improves the throughput significantly compared to having no cache. The two-step evaluation approach makes it possible to quantify how different design decisions affect the performance of different use cases.

  • 19.
    Martinsen, Jan Kasper
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Isberg, Anders
    Sony Mobile Communications AB Lund, SWE.
    Combining thread-level speculation and just-in-time compilation in Google’s V8 JavaScript engine (2017). In: Concurrency and Computation, ISSN 1532-0626, E-ISSN 1532-0634, Vol. 29, no 1, article id e3826. Article in journal (Refereed).
    Abstract [en]

    Summary: Thread-level speculation can be used to take advantage of multicore architectures for JavaScript in web applications. We extend previous studies with these main contributions: we implement thread-level speculation in the state-of-the-art just-in-time-enabled JavaScript engine V8 and make the measurements in the Chromium web browser, both from Google, instead of using an interpreted JavaScript engine. We evaluate the thread-level speculation and just-in-time compilation combination on 15 very popular web applications, 20 HTML5 demos from the JS1K competition, and 4 Google Maps use cases. The performance is evaluated on two, four, and eight cores. The results clearly show that it is possible to successfully combine thread-level speculation and just-in-time compilation. This makes it possible to take advantage of multicore architectures for web applications while hiding the details of parallel programming from the programmer. Further, our results show an average speedup for the thread-level speculation and just-in-time compilation combination by a factor of almost 3 on four cores and over 4 on eight cores, without changing any of the JavaScript source code.

  • 20.
    Martinsen, Jan Kasper
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Isberg, Anders
    Sony Mobile Communications AB.
    The Effects of Parameter Tuning in Software Thread-Level Speculation in JavaScript Engines (2015). In: ACM Transactions on Architecture and Code Optimization, ISSN 1544-3566, Vol. 11, no 4. Article in journal (Refereed).
    Abstract [en]

    JavaScript is a sequential programming language that has a large potential for parallel execution in Web applications. Thread-level speculation can take advantage of this, but it has a large memory overhead. In this article, we evaluate the effects of adjusting various parameters for thread-level speculation. Our results clearly show that thread-level speculation is a useful technique for taking advantage of multicore architectures for JavaScript in Web applications, that nested speculation is required in thread-level speculation, and that the execution characteristics of Web applications significantly reduce the needed memory, the number of threads, and the depth of our speculation.

  • 21.
    Martinsen, Jan Kasper
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Isberg, Anders
    Sony Mobile Communications AB.
    Sundström, Henrik
    Sony Mobile Communications AB.
    Reducing Memory in Software-Based Thread-Level Speculation for JavaScript Virtual Machine Execution of Web Applications (2014). In: 2014 IEEE International Conference on High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Systems (HPCC, CSS, ICESS), Elsevier, 2014, p. 181-184. Conference paper (Refereed).
    Abstract [en]

    Thread-Level Speculation has been used to take advantage of multicore processors in virtual execution environments for the sequential JavaScript scripting language. While the results are promising, the memory overhead is high. Here we propose to reduce the memory usage by limiting the checkpoint depth, based on an in-depth study of the memory and execution time effects. We also propose an adaptive heuristic to dynamically adjust the checkpoints. We evaluate this using 15 web applications on an 8-core computer. The results show that the memory overhead for Thread-Level Speculation is reduced by over 90% as compared to storing all checkpoints. Further, the performance is often better than when storing all the checkpoints and at worst 4% slower.

  • 22.
    Matta, Durga Mahesh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Saraf, Meet Kumar
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Prediction of COVID-19 using Machine Learning Techniques (2020). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background: Over the past 4-5 months, the Coronavirus has rapidly spread to all parts of the world. Research is continuing to find a cure for this disease, while there is still no exact explanation for the outbreak. As the number of cases to test for Coronavirus increases rapidly day by day, testing everyone is impossible due to time and cost factors. Over recent years, machine learning has become very reliable in the medical field. Using machine learning to predict COVID-19 in patients will reduce the time delay for the results of medical tests and enable health workers to give proper medical treatment.

    Objectives: The main goal of this thesis is to develop a machine learning model that can predict whether a patient is suffering from COVID-19. To develop such a model, a literature study alongside an experiment is set up to identify a suitable algorithm and to assess the features that impact the prediction model.

    Methods: A Systematic Literature Review is performed to identify the most suitable algorithms for the prediction model. Then, based on the findings of the literature study, an experimental model is developed for the prediction of COVID-19 and for identifying the features that impact the model.

    Results: A set of algorithms suitable for prediction was identified from the literature study, including SVM (Support Vector Machines), RF (Random Forests), and ANN (Artificial Neural Network). A performance evaluation is conducted between the chosen algorithms to identify the technique with the highest accuracy. Feature importance values are generated to identify their impact on the prediction.

    Conclusions: Prediction of COVID-19 by using Machine Learning could help increase the speed of disease identification resulting in reduced mortality rate. Analyzing the results obtained from experiments, Random Forest (RF) was identified to perform better compared to other algorithms.

    Download full text (pdf)
    Prediction of COVID-19 using Machine Learning Techniques
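
    A hedged sketch of the feature-importance step: a random forest is trained and its impurity-based importances are inspected. The data and feature names are invented stand-ins for the clinical attributes, not the thesis dataset.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        feature_names = ["fever", "cough", "age", "contact", "fatigue"]
        X, y = make_classification(n_samples=400, n_features=5, n_informative=3,
                                   random_state=7)

        rf = RandomForestClassifier(n_estimators=200, random_state=7).fit(X, y)
        # Rank features by Mean Decrease in Impurity.
        for name, importance in sorted(zip(feature_names, rf.feature_importances_),
                                       key=lambda t: -t[1]):
            print("%-8s %.3f" % (name, importance))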
  • 23.
    Nagadevara, Venkatesh
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Evaluation of Intrusion Detection Systems under Denial of Service Attack in Virtual Environment (2017). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. Intrusion detection systems are widely used for detecting malicious traffic in many industries, and they use a variety of technologies. Each IDS has a different architecture and is deployed for detecting malicious activity. An intrusion detection system has a set of rules which can be defined based on requirements. Therefore, choosing an intrusion detection system appropriate for a given environment is not an easy task.

    Objectives. The goal of this research is to evaluate the three most used open source intrusion detection systems in terms of performance, and we give details about the different types of attacks that can be detected using an intrusion detection system. The tools that we select are Snort, Suricata, and OSSEC.

    Methods. The experiment is conducted using TCP, SCAN, ICMP, and FTP attacks. Each experiment was run at different traffic rates under both normal and malicious traffic, with all rules active. All these tests are conducted in a virtual environment.

    Results. We calculate the performance of each IDS by using CPU usage, memory usage, packet loss, and the number of alerts generated. These results are calculated for both normal and malicious traffic.

    Conclusions. We conclude that the results vary between the different IDSs at different traffic rates. In particular, Snort showed better performance in alert identification, and OSSEC in IDS performance. The results indicated that the number of alerts is low when traffic rates are high, which is due to packet loss. Overall, OSSEC provides better performance, and Snort provides better performance and accuracy for alert detection.

    Download full text (pdf)
    fulltext
  • 24.
    Nilsson, Eric
    et al.
    Intel Corp., SWE.
    Aarno, Daniel
    Intel Corp., SWE.
    Carstensen, Erik
    Intel Corp., SWE.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Accelerating Graphics in the Simics Full-System Simulator (2015). Conference paper (Refereed).
    Abstract [en]

    Virtual platforms provide benefits to developers in terms of a more rapid development cycle, since development may begin before next-generation hardware is available. However, there is a distinct lack of graphics virtualization in industry-grade virtual platforms, leading to performance issues that may reduce the benefits virtual platforms otherwise have over execution on actual hardware. This paper demonstrates graphics acceleration by means of paravirtualizing OpenGL ES in the Wind River Simics full-system simulator. We propose a solution for paravirtualized graphics using magic instructions to share memory between target and host systems, and present an implementation utilizing this method. The study illustrates the benefits and drawbacks of paravirtualized graphics acceleration and presents a performance analysis of strengths and weaknesses compared to software rasterization. Additionally, benchmarks are devised to stress key aspects of the solution, such as communication latency and computationally intensive applications. We assess paravirtualization as a viable method to accelerate graphics in system simulators; it reduces frame times by up to 34 times compared to software rasterization. Furthermore, magic instructions are identified as the primary bottleneck of communication latency in the implementation.

  • 25.
    Pogén, Tobias
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Asynchronous Particle Calculations on Secondary GPU for Real Time Applications2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Download full text (pdf)
    BTH2019Pogén
  • 26.
    Posse, Oliver
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tomanović, Ognjen
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Evaluation of Data Integrity Methods in Storage: Oracle Database2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. It is very common today that e-commerce systems store sensitive client information. The database administrators of these types of systems have access to this sensitive client information and are able to manipulate it. Therefore, data integrity is of core importance in these systems and methods to detect fraudulent behavior need to be implemented.

    Objectives. The objective of this thesis is to implement and evaluate the features and performance impact of different methods for achieving data integrity in a database, Oracle to be more exact.

    Methods. Five methods for achieving data integrity were tested. The methods were tested in a controlled environment. Three of them were tested and performance evaluated by a tool emulating a real life e-commerce scenario. The focus of this thesis is to evaluate the performance impact and the fraud detection ability of the implemented methods.

    Results. This paper evaluates traditional Digital signature, Linked timestamping applied to a Merkle hash tree, and Auditing, with respect to both performance impact and features. Two more methods, Merkle hash tree and Digital watermarking, were implemented and tested in a controlled environment. We show results from the empirical analysis, data verification and transaction performance. In our evaluation we proved our hypothesis that traditional Digital signature is faster than Linked timestamping.

    Conclusions. In this thesis we conclude that, when choosing a data integrity method to implement, it is of great importance to know which type of operation is more frequently used. Our experiments show that the Digital signature method performed better than Linked timestamping and Auditing. Our experiments also showed that applying Digital signature, Linked timestamping and Auditing decreased performance by 4%, 12% and 27% respectively, which is a relatively small price to pay for data integrity.
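
    To make the comparison above concrete, here is a minimal, generic sketch of a Merkle hash tree over table rows in Python using hashlib. It illustrates the kind of structure the thesis evaluates, not its actual Oracle-side implementation:

        # Generic Merkle-tree sketch over row byte-strings (illustration only).
        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def merkle_root(rows):
            """Compute the Merkle root of a list of row byte-strings."""
            level = [h(row) for row in rows]
            if not level:
                return h(b"")
            while len(level) > 1:
                if len(level) % 2:            # duplicate last node on odd levels
                    level.append(level[-1])
                level = [h(level[i] + level[i + 1])
                         for i in range(0, len(level), 2)]
            return level[0]

        # Tampering with any single row changes the root:
        rows = [b"order=1;amount=100", b"order=2;amount=250"]
        root = merkle_root(rows)
        assert merkle_root([b"order=1;amount=999", rows[1]]) != root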

    Download full text (pdf)
    Evaluation of Data Integrity Methods in Storage: Oracle Database
  • 27.
    Westphal, Florian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Data and Time Efficient Historical Document Analysis2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Over the last decades companies and government institutions have gathered vast collections of images of historical handwritten documents. In order to make these collections truly useful to the broader public, images suffering from degradations, such as faded ink, bleed-through or stains, need to be made readable, and the collections as a whole need to be made searchable. Readability can be achieved by separating text foreground from page background using document image binarization, while searchability by search string or by example image can be achieved through word spotting. Developing algorithms with reasonable binarization or word spotting performance is a difficult task. Additional challenges are to make these algorithms execute fast enough to process vast collections of images in a reasonable amount of time, and to enable them to learn from few labeled training samples. In this thesis, we explore heterogeneous computing, parameter prediction, and enhanced throughput as ways to reduce the execution time of document image binarization algorithms. We find that parameter prediction and mapping a heuristics based binarization algorithm to the GPU lead to a 1.7 times and a 3.5 times increase in execution performance, respectively. Furthermore, for a learning based binarization algorithm using recurrent neural networks, we identify the number of pixels processed at once as a way to trade off execution time against binarization quality. The achieved increase in throughput results in a 3.8 times faster overall execution time. Additionally, we explore guided machine learning (gML) as a possible approach to reduce the required amount of training data for learning based algorithms for binarization, character recognition and word spotting. We propose an initial gML system for binarization, which allows a user to improve an algorithm’s binarization quality by selecting suitable training samples. Based on this system, we identify and pursue three different directions, viz., formulation of a clear definition of gML, identification of an efficient knowledge transfer mechanism from user to learner, and automation of sample selection. We explore the Learning Using Privileged Information paradigm as a possible knowledge transfer mechanism by using character graphs as privileged information for training a neural network based character recognizer. Furthermore, we show that, given a suitable word image representation, automatic sample selection can help to reduce the amount of training data required for word spotting by up to 69%.
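
    As a generic illustration of the binarization step discussed above (separating text foreground from page background), the sketch below applies Otsu's global thresholding in NumPy. The thesis itself uses heuristics based and recurrent-neural-network binarizers, so this is only a simplified stand-in:

        # Otsu's global thresholding: pick the threshold that maximizes
        # between-class variance of the grayscale histogram.
        import numpy as np

        def otsu_binarize(gray):
            """gray: 2-D uint8 array; returns a boolean mask (True = ink)."""
            hist = np.bincount(gray.ravel(), minlength=256).astype(float)
            prob = hist / hist.sum()
            omega = np.cumsum(prob)                    # class probability up to t
            mu = np.cumsum(prob * np.arange(256))      # cumulative mean mass
            mu_t = mu[-1]
            denom = omega * (1.0 - omega)
            denom[denom == 0] = np.nan                 # guard divide-by-zero
            sigma_b2 = (mu_t * omega - mu) ** 2 / denom
            t = int(np.nanargmax(sigma_b2))
            return gray <= t                           # dark pixels = foreground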

    Download full text (pdf)
    fulltext
  • 28.
    Westphal, Florian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Efficient Document Image Binarization using Heterogeneous Computing and Interactive Machine Learning2018Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Large collections of historical document images have been collected by companies and government institutions for decades. More recently, these collections have been made available to a larger public via the Internet. However, to make accessing them truly useful, the contained images need to be made readable and searchable. One step in that direction is document image binarization, the separation of text foreground from page background. This separation makes the text shown in the document images easier to process by humans and other image processing algorithms alike. While reasonably well working binarization algorithms exist, it is not sufficient to just be able to perform the separation of foreground and background well. This separation also has to be achieved in an efficient manner, in terms of execution time, but also in terms of the training data used by machine learning based methods. This is necessary to make binarization not only theoretically possible, but also practically viable.

    In this thesis, we explore different ways to achieve efficient binarization in terms of execution time by improving the implementation and the algorithm of a state-of-the-art binarization method. We find that parameter prediction, as well as mapping the algorithm onto the graphics processing unit (GPU) help to improve its execution performance. Furthermore, we propose a binarization algorithm based on recurrent neural networks and evaluate the choice of its design parameters with respect to their impact on execution time and binarization quality. Here, we identify a trade-off between binarization quality and execution performance based on the algorithm’s footprint size and show that dynamically weighted training loss tends to improve the binarization quality. Lastly, we address the problem of training data efficiency by evaluating the use of interactive machine learning for reducing the required amount of training data for our recurrent neural network based method. We show that user feedback can help to achieve better binarization quality with less training data and that visualized uncertainty helps to guide users to give more relevant feedback.
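
    The dynamically weighted training loss mentioned above can be illustrated with a per-batch class-frequency weighting of binary cross-entropy; the exact weighting scheme in the thesis may differ, so treat this as an assumption-laden sketch:

        # Weighted binary cross-entropy: the rarer class (usually ink) gets a
        # larger weight, recomputed dynamically from each batch.
        import numpy as np

        def weighted_bce(pred, target, eps=1e-7):
            """pred, target: arrays in [0, 1]; target 1 = foreground pixel."""
            fg_ratio = float(target.mean())        # fraction of ink pixels
            w_fg = 1.0 / max(fg_ratio, eps)        # rarer class weighs more
            w_bg = 1.0 / max(1.0 - fg_ratio, eps)
            pred = np.clip(pred, eps, 1.0 - eps)
            loss = -(w_fg * target * np.log(pred)
                     + w_bg * (1.0 - target) * np.log(1.0 - pred))
            return float(loss.mean())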

    Download full text (pdf)
    fulltext
  • 29.
    Yavariabdi, Amir
    et al.
    Karatay Üniversitesi, TUR.
    Kusetogullari, Hüseyin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Mendi, Engin
    Karatay Üniversitesi, TUR.
    Karabatak, Begum
    Turkcell, Nicosia, CYP.
    Unsupervised Change Detection using Thin Cloud-Contaminated Landsat Images2018In: 9th International Conference on Intelligent Systems 2018: Theory, Research and Innovation in Applications, IS 2018 - Proceedings / [ed] Jardim-Goncalves, R; Mendonca, JP; Jotsov, V; Marques, M; Martins, J; Bierwolf, R, Institute of Electrical and Electronics Engineers Inc., 2018, p. 21-25Conference paper (Refereed)
    Abstract [en]

    In this paper, a novel unsupervised change detection method is proposed to automatically detect changes between two cloud-contaminated Landsat images. To achieve this, firstly, a photometric invariants technique together with the Stationary Wavelet Transform (SWT) is applied to the input images to decrease the influence of cloud and noise artifacts on the change detection process. Then, mean shift image filtering is employed on the sub-band difference images, generated via an image differencing technique, to smooth the images. Next, multiple binary change detection masks are obtained by partitioning the pixels in each of the smoothed sub-band difference images into two clusters using Fuzzy c-means (FCM). Finally, the binary masks are fused using a Markov Random Field (MRF) to generate the final solution. Experiments on both semi-simulated and real data sets show the effectiveness and robustness of the proposed change detection method on noisy and cloud-contaminated Landsat images. © 2018 IEEE.
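
    A stripped-down sketch of the clustering step described above, image differencing followed by two-cluster fuzzy c-means on pixel intensities, is given below; the photometric invariants, SWT sub-bands, mean shift smoothing and MRF fusion stages of the full method are omitted:

        # Two-cluster fuzzy c-means on a difference image (simplified).
        import numpy as np

        def fcm_two_clusters(x, m=2.0, iters=50):
            """x: 1-D float array; returns membership in the 'changed' cluster."""
            v = np.array([x.min(), x.max()], dtype=float)   # initial centers
            for _ in range(iters):
                d = np.abs(x[:, None] - v[None, :]) + 1e-9  # distances to centers
                u = 1.0 / (d ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=1, keepdims=True)           # fuzzy memberships
                um = u ** m
                v = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
            return u[:, 1]                  # cluster seeded at the larger diffs

        def change_mask(img1, img2):
            diff = np.abs(img1.astype(float) - img2.astype(float)).ravel()
            return (fcm_two_clusters(diff) > 0.5).reshape(img1.shape)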

  • 30.
    Åsbrink, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Andersson, Jacob
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Game-Agnostic Asset Loading Order Using Static Code Analysis2022Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. User retention is important in the online sphere, especially within gaming. Browser gaming websites that host games help smaller studios and solo developers reach a larger audience. However, displaying a game on the website does not guarantee that the user will try it out, and if the load time is long, the player may move on. Using game-agnostic static code analysis, a potential load order can be created that prioritises downloading the assets required to start the game first, resulting in shorter wait times before the player can start playing.

    Objectives. The aim of the thesis is to develop a game-agnostic parser able to list all the assets within a given Godot-engine-based game and sort them according to importance. The order of importance is such that the assets required for the game to be playable are placed first, followed by the set of assets for each sequential scene.

    Methods. Static code analysis is done in this project by parsing all the files and code of a given game. Numerous regular expressions are then used to extract relevant data, such as references to assets and scene changes. The assets are then associated with the different scenes, which are ordered and distinguished by scene changes.

    Results. The results vary from making no difference to potentially cutting loading time to 31% of the original. A graph is generated for every game, showing the scenes and their ordering through the parsing process, which gives insight into the structure of the game as well as the reasons for the potential speedup or the lack of it.

    Conclusions. The project shows promising results for games that can be successfully parsed and that have a scene structure able to benefit from the reordering. Further work and development are required for a more comprehensive solution along the suggested methods. As these results are largely theoretical, a more practical study would be needed to apply them in a realistic setting.
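
    The core of the parsing approach described above can be sketched as a regular-expression scan for "res://" asset references in a Godot project's scene and script files. The directory name below is hypothetical, and the thesis' scene-ordering logic is elided:

        # Collect res:// asset references per Godot scene/script file.
        import re
        from pathlib import Path

        ASSET_RE = re.compile(r"res://[\w/\.\-]+")

        def assets_per_file(project_dir):
            """Map each .tscn/.gd file to the asset paths it references."""
            refs = {}
            for path in Path(project_dir).rglob("*"):
                if path.suffix in (".tscn", ".gd"):
                    text = path.read_text(errors="ignore")
                    refs[str(path)] = sorted(set(ASSET_RE.findall(text)))
            return refs

        if __name__ == "__main__":
            for scene, assets in assets_per_file("my_godot_game").items():
                print(scene, "->", assets)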

    Download full text (pdf)
    fulltext